id (string) | title (string) | abstract (string) | full_text (dict) | qas (dict) | figures_and_tables (dict) | question (sequence) | retrieval_gt (sequence) | answer_gt (sequence) | __index_level_0__ (int64)
---|---|---|---|---|---|---|---|---|---|
1711.00106 | DCN+: Mixed Objective and Deep Residual Coattention for Question Answering | Traditional models for question answering optimize using cross entropy loss, which encourages exact answers at the cost of penalizing nearby or overlapping answers that are sometimes equally accurate. We propose a mixed objective that combines cross entropy loss with self-critical policy learning. The objective uses rewards derived from word overlap to solve the misalignment between evaluation metric and optimization objective. In addition to the mixed objective, we improve dynamic coattention networks (DCN) with a deep residual coattention encoder that is inspired by recent work in deep self-attention and residual networks. Our proposals improve model performance across question types and input lengths, especially for long questions that require the ability to capture long-term dependencies. On the Stanford Question Answering Dataset, our model achieves state-of-the-art results with 75.1% exact match accuracy and 83.1% F1, while the ensemble obtains 78.9% exact match accuracy and 86.0% F1. | {
"paragraphs": [
[
"Existing state-of-the-art question answering models are trained to produce exact answer spans for a question and a document. In this setting, a ground truth answer used to supervise the model is defined as a start and an end position within the document. Existing training approaches optimize using cross entropy loss over the two positions. However, this suffers from a fundamental disconnect between the optimization, which is tied to the position of a particular ground truth answer span, and the evaluation, which is based on the textual content of the answer. This disconnect is especially harmful in cases where answers that are textually similar to, but distinct in positions from, the ground truth are penalized in the same fashion as answers that are textually dissimilar. For example, suppose we are given the sentence “Some believe that the Golden State Warriors team of 2017 is one of the greatest teams in NBA history”, the question “which team is considered to be one of the greatest teams in NBA history”, and a ground truth answer of “the Golden State Warriors team of 2017”. The span “Warriors” is also a correct answer, but from the perspective of traditional cross entropy based training it is no better than the span “history”.",
"To address this problem, we propose a mixed objective that combines traditional cross entropy loss over positions with a measure of word overlap trained with reinforcement learning. We obtain the latter objective using self-critical policy learning in which the reward is based on word overlap between the proposed answer and the ground truth answer. Our mixed objective brings two benefits: (i) the reinforcement learning objective encourages answers that are textually similar to the ground truth answer and discourages those that are not; (ii) the cross entropy objective significantly facilitates policy learning by encouraging trajectories that are known to be correct. The resulting objective is one that is both faithful to the evaluation metric and converges quickly in practice.",
"In addition to our mixed training objective, we extend the Dynamic Coattention Network (DCN) by BIBREF0 with a deep residual coattention encoder. This allows the network to build richer representations of the input by enabling each input sequence to attend to previous attention contexts. BIBREF1 show that the stacking of attention layers helps model long-range dependencies. We merge coattention outputs from each layer by means of residual connections to reduce the length of signal paths. BIBREF2 show that skip layer connections facilitate signal propagation and alleviate gradient degradation.",
"The combination of the deep residual coattention encoder and the mixed objective leads to higher performance across question types, question lengths, and answer lengths on the Stanford Question Answering Dataset () BIBREF3 compared to our DCN baseline. The improvement is especially apparent on long questions, which require the model to capture long-range dependencies between the document and the question. Our model, which we call , achieves state-of-the-art results on , with exact match accuracy and F1. When ensembled, the obtains exact match accuracy and F1."
],
[
"We consider the question answering task in which we are given a document and a question, and are asked to find the answer in the document. Our model is based on the DCN by BIBREF0 , which consists of a coattention encoder and a dynamic decoder. The encoder first encodes the question and the document separately, then builds a codependent representation through coattention. The decoder then produces a start and end point estimate given the coattention. The DCN decoder is dynamic in the sense that it iteratively estimates the start and end positions, stopping when estimates between iterations converge to the same positions or when a predefined maximum number of iterations is reached. We make two significant changes to the DCN by introducing a deep residual coattention encoder and a mixed training objective that combines cross entropy loss from maximum likelihood estimation and reinforcement learning rewards from self-critical policy learning."
],
[
"Because it only has a single-layer coattention encoder, the DCN is limited in its ability to compose complex input representations. BIBREF1 proposed stacked self-attention modules to facilitate signal traversal. They also showed that the network's ability to model long-range dependencies can be improved by reducing the length of signal paths. We propose two modifications to the coattention encoder to leverage these findings. First, we extend the coattention encoder with self-attention by stacking coattention layers. This allows the network to build richer representations over the input. Second, we merge coattention outputs from each layer with residual connections. This reduces the length of signal paths. Our encoder is shown in Figure 1 .",
"Suppose we are given a document of $$ words and a question of $$ words. Let $^D \\in ^{\\times }$ and $^Q \\in ^{\\times }$ respectively denote the word embeddings for the document and the question, where $$ is the dimension of the word embeddings. We obtain document encodings $_1^D$ and question encodings $_1^Q$ through a bidirectional Long Short-Term Memory Network (LSTM) BIBREF4 , where we use integer subscripts to denote the coattention layer number. ",
"$$_1^D &=& _1 \\left( ^D \\right) \\in ^{\\times (+1)}\n\\\\\n_1^Q &=& \\text{tanh} \\left( W~\\hspace{2.0pt}px_1 \\left( ^Q \\right) + b \\right) \\in ^{\\times (+1)}$$ (Eq. 3) ",
"Here, $$ denotes the hidden state size and the $+1$ indicates the presence of an additional sentinel word which allows the coattention to not focus on any part of the input. Like the original DCN, we add a non-linear transform to the question encoding.",
"We compute the affinity matrix between the document and the question as $=\n{\\left( _1^Q \\right)}^\\intercal _1^D \\in ^{(+1) \\times (+1)}$ . Let ${X}$ denote the softmax operation over the matrix $X$ that normalizes $X$ column-wise. The document summary vectors and question summary vectors are computed as ",
"$$_1^D &=& _1^Q ~{^\\intercal } \\in ^{\\times (+ 1)}\n\\\\\n_1^Q &=& _1^D ~{} \\in ^{\\times (+ 1)}$$ (Eq. 4) ",
"We define the document coattention context as follows. Note that we drop the dimension corresponding to the sentinel vector – it has already been used during the summary computation and is not a potential position candidate for the decoder. ",
"$$_1^D &=& _1^Q ~{^\\intercal } \\in ^{\\times }$$ (Eq. 5) ",
"We further encode the summaries using another bidirectional LSTM. ",
"$$_2^D &=& _2 \\left( _1^D \\right) \\in ^{2 \\times }\n\\\\\n_2^Q &=& _2 \\left( _1^Q \\right) \\in ^{2 \\times }$$ (Eq. 6) ",
"Equation 4 to equation 5 describe a single coattention layer. We compute the second coattention layer in a similar fashion. Namely, let $$ denote a multi-valued mapping whose inputs are the two input sequences $_1^D$ and $_1^Q$ . We have ",
"$$_1 \\left( _1^D, _1^Q \\right) &\\rightarrow & _1^D, _1^Q, _1^D\n\\\\\n_2 \\left( _2^D, _2^Q \\right) &\\rightarrow & _2^D, _2^Q, _2^D$$ (Eq. 7) ",
"The output of our encoder is then obtained as ",
"$$U = \\left(\n{\n_1^D;\n_2^D;\n_1^D;\n_2^D;\n_1^D;\n_2^D\n}\n\\right) \\in ^{2\\times m}$$ (Eq. 8) ",
"where ${A, B}$ denotes the concatenation between the matrices $A$ and $B$ along the first dimension.",
"This encoder is different than the original DCN in its depth and its use of residual connections. We use not only the output of the deep coattention network $_2^D$ as input to the final bidirectional LSTM, but add skip connections to initial encodings $_1^D$ , $_2^D$ , summary vectors $_1^D$ , $_2^D$ , and coattention context $_1^D$ . This is akin to transformer networks BIBREF1 , which achieved state-of-the-art results on machine translation using deep self-attention layers to help model long-range dependencies, and residual networks BIBREF2 , which achieved state-of-the-art results in image classification through the addition of skip layer connections to facilitate signal propagation and alleviate gradient degradation."
],
[
"The DCN produces a distribution over the start position of the answer and a distribution over the end position of the answer. Let $s$ and $e$ denote the respective start and end points of the ground truth answer. Because the decoder of the DCN is dynamic, we denote the start and end distributions produced at the $t$ th decoding step by $_t \\in ^{m}$ and $_t \\in ^{m}$ . For convenience, we denote the greedy estimate of the start and end positions by the model at the $t$ th decoding step by $s_t$ and $e_t$ . Moreover, let $\\Theta $ denote the parameters of the model.",
"Similar to other question answering models, the DCN is supervised using the cross entropy loss on the start position distribution and the end position distribution: ",
"$$_{ce}(\\Theta ) = - \\sum _t \\left( \\log _t \\left( s \\mid s_{t-1}, e_{t-1} ; \\Theta \\right) + \\log _t \\left( e \\mid s_{t-1}, e_{t-1} ; \\Theta \\right) \\right)$$ (Eq. 10) ",
"Equation 10 states that the model accumulates a cross entropy loss over each position during each decoding step given previous estimates of the start and end positions.",
"The question answering task consists of two evaluation metrics. The first, exact match, is a binary score that denotes whether the answer span produced by the model has exact string match with the ground truth answer span. The second, F1, computes the degree of word overlap between the answer span produced by the model and the ground truth answer span. We note that there is a disconnect between the cross entropy optimization objective and the evaluation metrics. For example, suppose we are given the answer estimates $A$ and $B$ , neither of which match the ground truth positions. However, $A$ has an exact string match with the ground truth answer whereas $B$ does not. The cross entropy objective penalizes $A$ and $B$ equally, despite the former being correct under both evaluation metrics. In the less extreme case where $A$ does not have exact match but has some degree of word overlap with the ground truth, the F1 metric still prefers $A$ over $B$ despite its wrongly predicted positions.",
"We encode this preference using reinforcement learning, using the F1 score as the reward function. Let $\\hat{s_t} \\sim _t$ and $\\hat{e_t} \\sim _t$ denote the sampled start and end positions from the estimated distributions at decoding step $t$ . We define a trajectory $\\hat{\\tau }$ as a sequence of sampled start and end points $\\hat{s_t}$ and $\\hat{e_t}$ through all $T$ decoder time steps. The reinforcement learning objective is then the negative expected rewards $R$ over trajectories. ",
"$$_{rl}\\left(\\Theta \\right) &=&\n- \\mathbb {E}_{\\hat{\\tau } \\sim p_{\\tau }}\n\\left[\nR \\left(s, e, \\hat{s}_T, \\hat{e}_T ; \\Theta \\right)\n\\right]\n\\\\\n&\\approx &\n\n- \\mathbb {E}_{\\hat{\\tau } \\sim p_{\\tau }}\n\\left[\nF_1\n\\left(\n{\\hat{s}_T}{\\hat{e}_T},\n{s}{e}\n\\right)\n-\nF_1\n\\left(\n{s_T}{e_T},\n{s}{e}\n\\right)\n\\right]$$ (Eq. 11) ",
"We use $F_1$ to denote the F1 scoring function and ${s}{e}$ to denote the answer span retrieved using the start point $s$ and end point $e$ . In equation 11 , instead of using only the F1 word overlap as the reward, we subtract from it a baseline. BIBREF5 show that a good baseline reduces the variance of gradient estimates and facilitates convergence. In our case, we employ a self-critic BIBREF6 that uses the F1 score produced by the current model during greedy inference without teacher forcing.",
"For ease of notation, we abbreviate $R \\left(s, e, \\hat{s}_T, \\hat{e}_T ; \\Theta \\right)$ as $R$ . As per BIBREF7 and BIBREF8 , the expected gradient of a non-differentiable reward function can be computed as ",
"$$\\nabla _\\Theta _{rl}\\left(\\Theta \\right) &=&\n- \\nabla _\\Theta \\left(\n\\mathbb {E}_{\\hat{\\tau } \\sim p_{\\tau }}\n\\left[\nR\n\\right]\n\\right)\n\\\\\n&=&\n-\n\\mathbb {E}_{\\hat{\\tau } \\sim p_{\\tau }}\n\\left[\nR\n\\nabla _\\Theta \\log p_\\tau \\left( \\tau ; \\Theta \\right)\n\\right]\n\\\\\n&=&\n-\n\\mathbb {E}_{\\hat{\\tau } \\sim p_{\\tau }}\n\\left[\nR\n\\nabla _\\Theta \\left(\n\\sum _t^T\n\\left(\n\\log _t \\left( \\hat{s}_t \\vert \\hat{s}_{t-1}, \\hat{e}_{t-1}; \\Theta \\right)\n+\n\\log _t \\left( \\hat{e}_t \\vert \\hat{s}_{t-1}, \\hat{e}_{t-1}; \\Theta \\right)\n\\right)\n\\right)\n\\right]\n\\nonumber \\\\\n\n&\\approx &\n-\nR\n\\nabla _\\Theta \\left(\n\\sum _t^T\n\\left(\n\\log _t \\left( \\hat{s}_t \\vert \\hat{s}_{t-1}, \\hat{e}_{t-1}; \\Theta \\right)\n+\n\\log _t \\left( \\hat{e}_t \\vert \\hat{s}_{t-1}, \\hat{e}_{t-1}; \\Theta \\right)\n\\right)\n\\right)$$ (Eq. 12) ",
"In equation 12 , we approximate the expected gradient using a single Monte-Carlo sample $\\tau $ drawn from $p_\\tau $ . This sample trajectory $\\tau $ contains the start and end positions $\\hat{s}_t$ and $\\hat{e}_t$ sampled during all decoding steps.",
"One of the key problems in applying RL to natural language processing is the discontinuous and discrete space the agent must explore in order to find a good policy. For problems with large exploration space, RL approaches tend to be applied as fine-tuning steps after a maximum likelihood model has already been trained BIBREF9 , BIBREF10 . The resulting model is constrained in its exploration during fine-tuning because it is biased by heavy pretraining. We instead treat the optimization problem as a multi-task learning problem. The first task is to optimize for positional match with the ground truth answer using the the cross entropy objective. The second task is to optimize for word overlap with the ground truth answer with the self-critical reinforcement learning objective. In a similar fashion to BIBREF11 , we combine the two losses using homoscedastic uncertainty as task-dependent weightings. ",
"$$= \\frac{1}{2 \\sigma _{ce}^2} _{ce}\\left(\\Theta \\right) + \\frac{1}{2 \\sigma _{rl}^2} _{rl}\\left(\\Theta \\right) + \\log \\sigma _{ce}^2 + \\log \\sigma _{rl}^2$$ (Eq. 13) ",
"Here, $\\sigma _{ce}$ and $\\sigma _{rl}$ are learned parameters. The gradient of the cross entropy objective can be derived using straight-forward backpropagation. The gradient of the self-critical reinforcement learning objective is shown in equation 12 . Figure 2 illustrates how the mixed objective is computed. In practice, we find that adding the cross entropy task significantly facilitates policy learning by pruning the space of candidate trajectories - without the former, it is very difficult for policy learning to converge due to the large space of potential answers, documents, and questions."
],
[
"We train and evaluate our model on the Stanford Question Answering Dataset (). We show our test performance of our model against other published models, and demonstrate the importance of our proposals via ablation studies on the development set. To preprocess the corpus, we use the reversible tokenizer from Stanford CoreNLP BIBREF12 . For word embeddings, we use GloVe embeddings pretrained on the 840B Common Crawl corpus BIBREF13 as well as character ngram embeddings by BIBREF14 . In addition, we concatenate these embeddings with context vectors (CoVe) trained on WMT BIBREF15 . For out of vocabulary words, we set the embeddings and context vectors to zero. We perform word dropout on the document which zeros a word embedding with probability 0.075. In addition, we swap the first maxout layer of the highway maxout network in the DCN decoder with a sparse mixture of experts layer BIBREF16 . This layer is similar to the maxout layer, except instead of taking the top scoring expert, we take the top $k = 2$ expert. The model is trained using ADAM BIBREF17 with default hyperparameters. Hyperparameters of our model are identical to the DCN. We implement our model using PyTorch."
],
[
"The performance of our model is shown in Table 1 . Our model achieves state-of-the-art results on dataset with exact match accuracy and F1. When ensembled, our model obtains exact match accuracy and F1. To illustrate the effectiveness of our proposals, we use the DCN with context vectors as a baseline BIBREF15 . This model is identical to the DCN by BIBREF0 , except that it augments the word representations with context vectors trained on WMT16.",
" outperforms the baseline by $$ exact match accuracy and $$ F1 on the development set. Figure 3 shows the consistent performance gain of over the baseline across question types, question lengths, and answer lengths. In particular, provides a significant advantage for long questions.",
"The contributions of each part of our model are shown in Table 2 . We note that the deep residual coattention yielded the highest contribution to model performance, followed by the mixed objective. The sparse mixture of experts layer in the decoder added minor improvements to the model performance.",
"The training curves for with reinforcement learning and without reinforcement learning are shown in Figure 4 to illustrate the effectiveness of our proposed mixed objective. In particular, we note that without mixing in the cross entropy loss, it is extremely difficult to learn the policy. When we combine the cross entropy loss with the reinforcement learning objective, we find that the model initially performs worse early on as it begins policy learning from scratch (shown in Figure 4 ). However, with the addition of cross entropy loss, the model quickly learns a reasonable policy and subsequently outperforms the purely cross entropy model (shown in Figure 4 ).",
"Figure 5 compares predictions by and by the baseline on the development set. Both models retrieve answers that have sensible entity types. For example, the second example asks for “what game” and both models retrieve an American football game; the third example asks for “type of Turing machine” and both models retrieve a type of turing machine. We find, however, that consistently make less mistakes on finding the correct entity. This is especially apparent in the examples we show, which contain several entities or candidate answers of the correct type. In the first example, Gasquet wrote about the plague and called it “Great Pestilence”. While he likely did think of the plague as a “great pestilence”, the phrase “suggested that it would appear to be some form of ordinary Eastern or bubonic plague” provides evidence for the correct answer – “some form of ordinary Eastern or bubonic plague”. Similarly, the second example states that Thomas Davis was injured in the “NFC Championship Game”, but the game he insisted on playing in is the “Super Bowl”. Finally, “multi-tape” and “single-tape” both appear in the sentence that provides provenance for the answer to the question. However, it is the “single-tape” Turing machine that implies quadratic time. In these examples, finds the correct entity out of ones that have the right type whereas the baseline does not."
],
[],
[
"We introduced , an state-of-the-art question answering model with deep residual coattention trained using a mixed objective that combines cross entropy supervision with self-critical policy learning. We showed that our proposals improve model performance across question types, question lengths, and answer lengths on the Stanford Question Answering Dataset ( ). On , the achieves exact match accuracy and F1. When ensembled, the obtains exact match accuracy and F1."
]
],
"section_name": [
"Introduction",
null,
"Deep residual coattention encoder",
"Mixed objective using self-critical policy learning",
"Experiments",
"Results",
"Related work",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"ecdc8031748ab5bb34e7fa598328043196bd09bf"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Ablation study on the development set of SQuAD.",
"The contributions of each part of our model are shown in Table 2 . We note that the deep residual coattention yielded the highest contribution to model performance, followed by the mixed objective. The sparse mixture of experts layer in the decoder added minor improvements to the model performance."
],
"extractive_spans": [],
"free_form_answer": "The mixed objective improves EM by 2.5% and F1 by 2.2%",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Ablation study on the development set of SQuAD.",
"The contributions of each part of our model are shown in Table 2 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"74eea9f3f4f790836045fcc75d0b3f5156901499"
]
}
],
"nlp_background": [
"five"
],
"paper_read": [
"somewhat"
],
"question": [
"How much is the gap between using the proposed objective and using only cross-entropy objective?"
],
"question_id": [
"1f63ccc379f01ecdccaa02ed0912970610c84b72"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"question"
],
"topic_background": [
"familiar"
]
} | {
"caption": [
"Figure 1: Deep residual coattention encoder.",
"Figure 2: Computation of the mixed objective.",
"Table 1: Test performance on SQuAD. The papers are as follows: rnet (Microsoft Asia Natural Language Computing Group, 2017), SEDT (Liu et al., 2017), BiDAF (Seo et al., 2017), DCN w/ CoVe (McCann et al., 2017), ReasoNet (Shen et al., 2017), Document Reader (Chen et al., 2017), FastQA (Weissenborn et al., 2017), DCN (Xiong et al., 2017). The CoVe authors did not submit their model, which we use as our baseline, for SQuAD test evaluation.",
"Figure 3: Performance comparison between DCN+ and the baseline DCN with CoVe on the SQuAD development set.",
"Table 2: Ablation study on the development set of SQuAD.",
"Figure 4: Training curve of DCN+ with and without reinforcement learning. In the latter case, only the cross entropy objective is used. The mixed objective initially performs worse as it begins policy learning from scratch, but quickly outperforms the cross entropy model.",
"Figure 5: Predictions by DCN+ (red) and DCN with CoVe (blue) on the SQuAD development set."
],
"file": [
"2-Figure1-1.png",
"5-Figure2-1.png",
"6-Table1-1.png",
"6-Figure3-1.png",
"7-Table2-1.png",
"7-Figure4-1.png",
"8-Figure5-1.png"
]
} | [
"How much is the gap between using the proposed objective and using only cross-entropy objective?"
] | [
[
"1711.00106-Results-2",
"1711.00106-7-Table2-1.png"
]
] | [
"The mixed objective improves EM by 2.5% and F1 by 2.2%"
] | 858 |
1811.09529 | Competency Questions and SPARQL-OWL Queries Dataset and Analysis | Competency Questions (CQs) are natural language questions outlining and constraining the scope of knowledge represented by an ontology. Although CQs are a part of several ontology engineering methodologies, we have observed that the actual publication of CQs for the available ontologies is very limited and even scarcer is the publication of their respective formalisations in terms of, e.g., SPARQL queries. This paper aims to contribute to addressing the engineering shortcomings of using CQs in ontology development, to facilitate wider use of CQs. In order to understand the relation between CQs and the queries over the ontology used to test the CQs, we gather, analyse, and publicly release a set of 234 CQs and their translations to SPARQL-OWL for several ontologies in different domains developed by different groups. We analysed the CQs in two principal ways. The first stage focused on a linguistic analysis of the natural language text itself, i.e., a lexico-syntactic analysis without any presuppositions of ontology elements, and a subsequent step of semantic analysis in order to find patterns. This increased diversity of CQ sources resulted in a 5-fold increase of hitherto published patterns, to 106 distinct CQ patterns, of which only a small subset of patterns is shared across the CQ sets from the different ontologies. Next, we analysed the relation between the found CQ patterns and the 46 SPARQL-OWL query signatures, which revealed that one CQ pattern may be realised by more than one SPARQL-OWL query signature, and vice versa. We hope that our work will contribute to establishing common practices, templates, automation, and user tools that will support CQ formulation, formalisation, execution, and general management. | {
"paragraphs": [
[
"Within the field of ontology engineering, Competency Questions (CQs) BIBREF0 are natural language questions outlining the scope of knowledge represented by an ontology. They represent functional requirements in the sense that the developed ontology or an ontology-based information system should be able to answer them; hence contain all the relevant knowledge. For example, a CQ may be What are the implementations of C4.5 algorithm?, indicating that the ontology needs to contain classes, such as Algorithm and C4.5 as subclass of Algorithm, and something about implementations such that the answer to the CQ will be non-empty.",
"CQs are a part of several ontology engineering methodologies, yet the actual publication of CQs for the available ontologies is rather scarce. Even more scarce is the publication of the CQs' respective formalisation in terms of, e.g., SPARQL queries. This suggests CQs are not used widely as intended. We hypothezise that it may be due to the lack of common practices, templates, automation, and user tools that would support CQ formulation, formalisation, execution, and general management; or: it is still a fully manual process. For instance, even if one has specified CQs, there is no automatic way to translate it to, say, a SPARQL-OWL BIBREF1 query (for validation and verification), and not even a systematic manual way either.",
"There have been few attempts to analyse CQs. Ren et al. BIBREF2 analysed CQs and their patterns to determine CQ archetypes, as tried BIBREF3 . Those patterns have a limited coverage, however, for they are based on CQ sets of at most two ontologies (Pizza and Software), which thus may contain domain bias, CQ author bias, and `prejudiced' patterns as the Pizza CQs were created after the ontology. As simple example of the latter issue, one could create a CQ Which pizza has hot as spiciness? that neatly fits with Pizza's hasSpiciness data property, or a more natural phrase Which pizzas are hot? that is fully agnostic of how it is represented in the ontology, be it with a data property, object property, or a class. More generally, it suggests that Ren et al.'s CQ patterns, formulated alike “Which [CE1] [OPE] [CE2]?”, may not be appropriate as CQ pattern, as it presupposes which kind of element it would be in an ontology. The manual process and `free form' formulation of CQs by domain experts also runs onto problems that some turn out not translatable into a test over the ontology for various reasons. For instance, the CQ How can I get problems [with X] fixed? of the Software Ontology cannot be answered by a declarative specification that the ontology is, or take the CQ for the DMOP ontology BIBREF4 : Given a data mining task/data set, which of the valid or applicable workflows/algorithms will yield optimal results (or at least better results than the others)?: assuming that the question may deal with an arbitrary (not pre-defined upfront) dataset, this CQ may only be answered via performing data mining experiments and not by the ontology itself. Therefore, without a clear guidelines of what kind of CQs may be meaningfully expressed and used as requirement specification for an ontology's content, their uptake and usage likely will remain limited. This paper aims to contribute to addressing the engineering shortcomings of using CQs in ontology development.",
"To clear up the CQ muddle and trying to understand the relation between CQs and the queries over the ontology to test the CQs on an ontology, we gather, analyse, and publicly release a larger set of CQs and their translations to SPARQL-OWL for several ontologies in different domains developed by different groups. For the analysis in particular, we seek to address the following research questions:",
"A total of 234 CQs for 5 ontologies have been collected and translated into SPARQL-OWL queries, and made available as a data resource. We analysed them in two principal ways. The first stage focused on a linguistic analysis of the natural language text itself, i.e., a lexico-syntactic analysis without any presuppositions of ontology elements, and a subsequent step of semantic analysis. This revealed 17 CQ patterns at the natural language layer. While a few patterns occur in multiple CQ sets, there are also patterns unique to a CQ set, supporting the expectation that a broad sampling is required to obtain a more representative set of patterns. The second phase consists of designing SPARQL-OWL queries for all CQs, where possible, and examining the signature of the queries. We found 46 query signatures resulting from the collected 131 SPARQL-OWL queries. The third step consists of the analysis of the relation between the CQ patterns and the SPARQL-OWL query signatures. This is, as hypothesised, a INLINEFORM0 : INLINEFORM1 relation, or: one CQ pattern may be realised by more than one SPARQL-OWL query and there may be more than one CQ pattern for a SPARQL-OWL query signature.",
"The remainder of the paper is structured as follows. We first discuss related works on CQs and CQ patterns in Section SECREF2 . Section SECREF3 is devoted to the linguistic analysis of CQs and Section SECREF4 to the generation and analysis of the SPARQL-OWL queries. We discuss and return to the research questions in Section SECREF5 and conclude in Section SECREF6 . The data is available from a Git repository at https://github.com/CQ2SPARQLOWL/Dataset."
],
[
"The aim of the analysis of the CQs is to examine whether there are some popular linguistic structures that can be reused to specify requirements for, and validate, new and existing ontologies. This section describes the collection of the materials, the methods, and subsequently the results of the CQ analysis."
],
[
"We describe and motivate the materials first and then proceed to the methods and motivations thereof.",
"There are multiple ontologies available over internet with competency questions provided, but since the focus of our research is on SPARQL-OWL queries, we selected only those ontologies with CQs stated against ontology schema (T-Box). As a result we selected 5 ontologies with 234 competency questions in total. Table TABREF8 summarizes our dataset size and source of each ontology.",
"The Software Ontology (SWO) BIBREF5 is included because its set of CQs is of substantial size and it was part of Ren et al.'s set of analysed CQs. The CQ sets of Dem@Care BIBREF8 and OntoDT BIBREF9 were included because they were available. CQs for the Stuff BIBREF6 and African Wildlife (AWO) BIBREF7 ontologies were added to the set, because the ontologies were developed by one of the authors (therewith facilitating in-depth domain analysis, if needed), they cover other topics, and are of a different `type' (a tutorial ontology (AWO) and a core ontology (Stuff)), thus contributing to maximising diversity in source selection."
],
[
"In this section, we carry out and examine the `translation' of CQs to a form that can be evaluated against an ontology.",
"As first preliminary observation, we observe that an OWL ontology can be serialized as an RDF/XML graph BIBREF10 and thus queried using SPARQL Query Language BIBREF11 . In its base form SPARQL is basically a pattern matching language and as such does not provide any reasoning capabilities; however, it is possible to introduce these by using SPARQL Entailment Regimes BIBREF12 . In particular, we employ OWL 2 Direct Semantics Entailment Regime. Intuitively, it allows us to construct a SPARQL query such that its WHERE clause contains OWL axioms, possibly with some of its IRIs and literals replaced by SPARQL variables. The results of the execution of such a query are all the variable mappings such that the axioms obtained by applying these mapping to the axioms in the query, are entailed by the queried ontology. SPARQL, being a query language for RDF, employs Turtle syntax BIBREF13 to express Basic Graph Patterns (BGPs) and this convention is kept also for expressing OWL axioms, i.e., their RDF representation is used BIBREF10 . This is consistent with how the only available implementation behaves BIBREF1 , BIBREF14 .",
"The second preliminary comment is that we note that, unlike Dennis et al. BIBREF15 's claim, CQs do not have to have specific presuppositions other than vocabulary, but queries do, for it is the queries that are specific to the ontology and the modelling style used and other modelling decisions made. We can make this distinction here, because of the separation of concerns between the linguistics of the CQs on the one hand and the queries and ontology how it it realised on the other hand, rather than having the two combined as in BIBREF3 , BIBREF2 , BIBREF15 ."
],
[
"This work was partly supported by the Polish National Science Center (Grant No 2014/13/D/ST6/02076). Jedrzej Potoniec acknowledges support from the grant 09/91/DSPB/0627."
]
],
"section_name": [
"Introduction",
"Analysis of Competency Questions",
"Materials and Methods",
"Generating SPARQL-OWL queries from CQs",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"f095fe23d134fff3040f17b5ddf1bc3bbed0c401"
],
"answer": [
{
"evidence": [
"The Software Ontology (SWO) BIBREF5 is included because its set of CQs is of substantial size and it was part of Ren et al.'s set of analysed CQs. The CQ sets of Dem@Care BIBREF8 and OntoDT BIBREF9 were included because they were available. CQs for the Stuff BIBREF6 and African Wildlife (AWO) BIBREF7 ontologies were added to the set, because the ontologies were developed by one of the authors (therewith facilitating in-depth domain analysis, if needed), they cover other topics, and are of a different `type' (a tutorial ontology (AWO) and a core ontology (Stuff)), thus contributing to maximising diversity in source selection."
],
"extractive_spans": [],
"free_form_answer": "5 domains: software, stuff, african wildlife, healthcare, datatypes",
"highlighted_evidence": [
"The Software Ontology (SWO) BIBREF5 is included because its set of CQs is of substantial size and it was part of Ren et al.'s set of analysed CQs. The CQ sets of Dem@Care BIBREF8 and OntoDT BIBREF9 were included because they were available. CQs for the Stuff BIBREF6 and African Wildlife (AWO) BIBREF7 ontologies were added to the set, because the ontologies were developed by one of the authors (therewith facilitating in-depth domain analysis, if needed), they cover other topics, and are of a different `type' (a tutorial ontology (AWO) and a core ontology (Stuff)), thus contributing to maximising diversity in source selection."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"infinity"
],
"paper_read": [
"no"
],
"question": [
"How many domains of ontologies do they gather data from?"
],
"question_id": [
"b2254f9dd0e416ee37b577cef75ffa36cbcb8293"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
""
],
"topic_background": [
"unfamiliar"
]
} | {
"caption": [
"Table 1: Competency questions dataset summary",
"Table 3: Replacing complex entity expressions with single identifier.",
"Table 2: Normalization of words into common forms. REMOVED means that given text is deleted from pattern.",
"Table 4: Number of pattern candidates and actual patterns",
"Table 5: Patterns that are shared by CQ sets of multiple ontologies.",
"Table 6: Higher Level Patterns that occur in more than one CQ set.",
"Figure 1: Average CQs covered by a single pattern per given CQ set of an ontology.",
"Figure 2: Average CQs covered by a single higher level pattern per given CQ set of an ontology.",
"Table 7: Prefixes used thruought the paper in the SPARQL-OWL queries",
"Table 8: Translatability of competency questions .",
"Table 9: Keywords usage among SPARQL-OWL queries .",
"Table 10: The signatures that are common for at least three queries in the dataset. In the column Ontologies listed are ontologies from which the queries originated along with the number of queries having that signature.",
"Table 11: The signatures that are shared between queries coming from at least two ontologies. In the column Ontologies listed are ontologies from which the queries originated along with the number of queries having that signature.",
"Table 12: Frequent signal phrase with most frequently SPARQL-OWL queries cooccurrences count .",
"Table 13: Signal words ."
],
"file": [
"4-Table1-1.png",
"6-Table3-1.png",
"6-Table2-1.png",
"7-Table4-1.png",
"7-Table5-1.png",
"7-Table6-1.png",
"8-Figure1-1.png",
"8-Figure2-1.png",
"10-Table7-1.png",
"14-Table8-1.png",
"15-Table9-1.png",
"17-Table10-1.png",
"17-Table11-1.png",
"19-Table12-1.png",
"19-Table13-1.png"
]
} | [
"How many domains of ontologies do they gather data from?"
] | [
[
"1811.09529-Materials and Methods-2"
]
] | [
"5 domains: software, stuff, african wildlife, healthcare, datatypes"
] | 860 |
1808.04314 | Comparing morphological complexity of Spanish, Otomi and Nahuatl | We use two small parallel corpora for comparing the morphological complexity of Spanish, Otomi and Nahuatl. These are languages that belong to different linguistic families; the latter two are low-resource. We take into account two quantitative criteria: on one hand, the distribution of types over tokens in a corpus; on the other, perplexity and entropy as indicators of word structure predictability. We show that a language can be complex in terms of how many different morphological word forms it can produce, however, it may be less complex in terms of predictability of its internal structure of words. | {
"paragraphs": [
[
"Morphology deals with the internal structure of words BIBREF0 , BIBREF1 . Languages of the world have different word production processes. Morphological richness vary from language to language, depending on their linguistic typology. In natural language processing (NLP), taking into account the morphological complexity inherent to each language could be important for improving or adapting the existing methods, since the amount of semantic and grammatical information encoded at the word level, may vary significantly from language to language.",
"Conceptualizing and quantifying linguistic complexity is not an easy task, many quantitative and qualitative dimensions must be taken into account BIBREF2 . On one hand we can try to answer what is complexity in a language and which mechanisms express it, on the other hand, we can try to find out if there is a language with more complex phenomena (phonological, morphological, syntactical) than other and how can we measure it. miestamo2008grammatical distinguishes between two types of complexity: the absolute, which defines complexity in terms of the number of parts of a system; and the relative, which is related to the cost and difficulty faced by language users. Some authors focuses in the absolute approach since it is less subjective. Another common complexity distinction is between global and particular. Global complexity characterizes entire languages, e.g., as easy or difficult to learn BIBREF2 , while particular complexity refers only to a level of the whole language (for example phonological complexity, morphological complexity, syntactical complexity).",
"We focus on morphological complexity. Many definitions of this term have been proposed BIBREF3 , BIBREF4 , BIBREF5 . From the computational linguistics perspective there has been a special interest in corpus based approaches to quantify it, i.e., methods that estimate the morphological complexity of a language directly from the production of morphological instances over a corpus. This type of approach usually represents a relatively easy and reproducible way to quantify complexity without the strict need of linguistic annotated data. The underlying intuition of corpus based methods is that morphological complexity depends on the morphological system of a language, like its inflectional and derivational processes. A very productive system will produce a lot of different word forms. This morphological richness can be captured with several statistical measures, e.g., information theory measures BIBREF6 or type token relationships. For example, [p. 9]bybee2010language affirms that “the token frequency of certain items in constructions [i.e., words] as well as the range of types [...] determines representation of the construction as well as its productivity”.",
"In this work, we are interested in using corpus based approaches; however, we would like to quantify the complexity not only by the type and token distributions over a corpus, but also by taking into account other important dimension: the predictability of a morph sequence BIBREF7 . This is a preliminary work that takes as a case of study the distant languages Otomi, Nahuatl and Spanish. The general idea is to use parallel corpora, type-token relationship and some NLP strategies for measuring the predictability in statistical language models.",
"Additionally, most of the previous works do not analyze how the complexity changes when different types of morphological normalization procedures are applied to a language, e.g., lemmatization, stemming, morphological segmentation. This information could be useful for linguistic analysis and for measuring the impact of different word form normalization tools depending of the language. In this work, we analyze how the type-token relationship changes using different types of morphological normalization techniques."
],
[
"The type-token relationship (TTR) is the relationship that exists between the number of distinct words (types) and the total word count (tokens) within a text. This measure has been used for several purposes, e.g., as an indicator of vocabulary richness and style of an author BIBREF8 , BIBREF9 , information flow of a text BIBREF10 and it has also been used in child language acquisition, psychiatry and literary studies BIBREF11 , BIBREF12 .",
"TTR has proven to be a simple, yet effective, way to quantify the morphological complexity of a language. This is why it has been used to estimate morphological complexity using relatively small corpora BIBREF13 . It has also shown a high correlation with other types of complexity measures like entropy and paradigm-based approaches that are based on typological information databases BIBREF14 ",
"It is important to notice that the value of TTR is affected by the type and length of the texts. However, one natural way to make TTRs comparable between languages is to use a parallel corpus, since the same meaning and functions are, more or less, expressed in the two languages. When TTR is measured over a parallel corpus, it provides a useful way to compare typological and morphological characteristics of languages. kelih2010type works with parallel texts of the Slavic language family to analyze morphological and typological features of the languages, i.e., he uses TTR for comparing the morphological productivity and the degree of syntheticity and analycity between the languages. Along the same line, mayer2014extraction automatically extract typological features of the languages, e.g., morphological synthesis degree, by using TTR.",
"There exist several models that have been developed to examine the relationship between the types and tokens within a text BIBREF15 . The most common one is the ratio $\\frac{types}{tokens}$ and it is the one that we use in this work."
],
[
"In NLP, statistical language models are a useful tool for calculating the probability of any sequence of words in a language. These models need a corpus as training data, they are usually based on n-grams, and more recently, in neural representations of words.",
"Information theory based measures can be used to estimate the predictiveness of these models, i.e., perplexity and entropy. Perplexity is a common measure for the complexity of n-grams models in NLP BIBREF16 . Perplexity is based in Shannon's entropy BIBREF17 as the perplexity of a model $\\mu $ is defined by the equation $2^{H(\\mu )}$ , where $H(\\mu )$ es the entropy of the model (or random variable). Shannon's entropy had been used for measuring complexity of different systems. In linguistics, entropy is commonly used to measure the complexity of morphological systems BIBREF6 , BIBREF18 , BIBREF19 . Higher values of perplexity and entropy mean less predictability.",
"Perplexity depends on how the model is represented (this includes the size of the data). In this work, we compare two different models for calculating the entropy and perplexity: a typical bigram model adapted to a morph level BIBREF16 ; and our proposal based on using the word as a context instead of ngrams.",
"We rely in parallel corpora to compare the measures across languages, since the same meaning and functions are shared in the two languages.",
"This model takes into consideration bigrams BIBREF16 as context for determining the joint probabilities of the sub-strings. Here the bigrams are sequences of two morphs in the text (whether they belong to the same word or not). This is a typical statistical language model but instead of using sequences of words, we use morphological segmented texts. In addition, we use a Laplacian (or add one) smoothing for the conditional probabilities BIBREF20 .",
"The word level representation takes the whole word as context for the determination of joint probabilities. Therefore, the frequency of co-occurrence is different from zero only if the sub-word units (morphs) are part of the same word. For example, if $xby$ is a word with a prefix $x$ and a suffix $y$ , the co-occurrence of $x$ with $b$ will be different from zero as both morphs are part of the word $xby$ . Similarly, the co-occurrence of $y$ with $b$ will be different from zero. Conversely, if two morphs are sub-strings of different words, its co-occurrence will be zero. To calculate the conditional probabilities we use and add one estimator defined as: ",
"$$p(x|y) = \\frac{fr(x,y) + 1 }{fr(x,y) + V}$$ (Eq. 5) ",
"Where $V$ is the number of types and $fr(\\cdot )$ is the frequency of co-occurrence function."
],
[
"We work with two language pairs that are spoken in the same country (Mexico) but they are typologically distant languages: Spanish (Indo-European)-Nahuatl (Uto-Aztecan) and Spanish-Otomi (Oto-Manguean). Both, Nahuatl and Otomi are low-resource languages that face scarcity of digital parallel and monolingual corpora.",
"Nahuatl is an indigenous language with agglutinative and polysynthethic morphological phenomena. It can agglutinate many different prefixes and suffixes to build complex words. Spanish also has rich morphology, but it mainly uses suffixes and it can have a fusional behavior, where morphemes can be fused or overlaid into a single one that encodes several grammatical meanings. Regarding to Otomi, its morphology also has a fusional tendency, and it is head-marking. Otomi morphology is usually considered quite complex BIBREF21 as it exhibits different phenomena like stem alternation, inflectional class changes and suprasegmental variation, just to mention some.",
"Since we are dealing with low resource languages that have a lot of dialectal and orthographic variation, it is difficult to obtain a standard big parallel corpus. We work with two different parallel corpora, i.e., Spanish-Nahuatl and Spanish-Otomi. Therefore the complexity comparisons are always in reference to Spanish.",
"We used a Spanish-Nahuatl parallel corpus created by GUTIERREZVASQUES16.1068. However, we used only a subset since the whole corpus is not homogeneous, i.e., it comprises several Nahuatl dialects, sources, periods of time and it lacks of a general orthographic normalization. We chose the texts that had a more or less systematic writing. On the other hand, we used a Spanish-Otomi parallel corpus BIBREF22 conformed by 38 texts transcribed from speech. This corpus was obtained in San Andrés Cuexcontitlan. It is principally composed by narrative texts, but also counts with dialogues and elicited data. Table 1 shows the size of the parallel corpora used for the experiments."
],
[
"We used different morphological analysis tools, in order to explore the morphological complexity variation among languages and between the different types of morphological representations. We performed lemmatization for Spanish language, and morphological segmentation for all languages.",
"In NLP, morphology is usually tackled by building morphological analysis (taggers) tools. And more commonly, lemmatization and stemming methods are used to reduce the morphological variation by converting words forms to a standard form, i.e., a lemma or a stem. However, most of these technologies are focused in a reduced set of languages. For languages like English, with plenty of resources and relatively poor morphology, morphological processing may be considered solved.",
"However, this is not the case for all the languages. Specially for languages with rich morphological phenomena where it is not enough to remove inflectional endings in order to obtain a stem.",
"Lemmatization and stemming aim to remove inflectional endings. Spanish has available tools to perform this task. We used the tool Freeling. Regarding to morphological segmentation, we used semi-supervised statistical segmentation models obtained with the tool Morfessor BIBREF23 . In particular, we used the same segmentation models reported in ximena2017bilingual for Spanish and Nahuatl. As for Otomi, we used manual morphological segmentation of the corpus, provided by a specialist."
],
[
"We calculated the type-token relationship for every language in each parallel corpus. Table 2 shows the TTR of the texts without any processing ( $ES$ , $NA$ ) and with the different types of morphological processing: morphological segmentation ( $ES_{morph}$ , $NA_{morph}$ ), lemmatization ( $ES_{lemma}$ ). In a similar way, Table 3 shows the TTR values for the Spanish-Otomi corpus. It is worth mentioning that the TTR values are only comparable within the same parallel corpus.",
"We also calculate the perplexity and complexity for the different languages. Since we are focusing on morphological complexity, we took only the segmented data for computing the entropy and the perplexity. We do not use the lemmatized or non segmented data since this would be equivalent to measuring the combinatorial complexity between words, i.e. syntax. In this sense, the entropy and perplexity reflects the predictability of the morphs sequences. Tables 4 and 5 shows the perplexity and entropy in each language pair."
],
[
"When no morphological processing is applied, Nahuatl has a lot higher TTR value than Spanish, i.e., a greater proportion of different word forms (types). In spite of Nahuatl having fewer tokens because of its agglutinative nature, it has a lot more types than Spanish. This suggests that Nahuatl has a highly productive system that can generate a great number of different morphological forms. In other words, it is more likely to find a repeated word in Spanish than in a Nahuatl corpus. In the case of Otomi-Spanish, Otomi also has a bigger complexity compared to Spanish in terms of TTR. Even though both Otomi and Spanish show fusional patterns in its inflection, Otomi also count with a lot of derivational processes and shows regular stem alternations.",
"In every case, morphological segmentation induced the smallest values of TTR for all languages. Suggesting that greater reduction of the morphological complexity is achieved when the words are split into morphs, making it more likely to find a repeated item. For instance, when Nahuatl was morphologically segmented, TTR had a dramatic decrease (from $26.22$ to $1.23$ ). This TTR reduction could be the result of eliminating the combinatorial variety of the agglutinative and polysynthetical morphology of the language. Therefore, when we segment the text we break this agglutination, leading to significantly less diverse units.",
"In the case of Otomi language, a similar trend can be observed. Otomi seems to be morphologically more complex than Spanish in terms of TTR, i.e., more diverse types or word forms. When morphological segmentation is applied, TTR decreases and Otomi language has a lower TTR compared to Spanish. Even though Otomi is not a polysynthetic language like Nahuatl, these results suggest that Otomi has also a great combinatory potential of its morphs, i.e, when Otomi gets morphologically segmented we obtain less diverse types, these morphs may be recurrent in the text but they can be combined in many several ways within the Otomi word structure. Linguistic studies have shown that Otomi language can concatenate several affixes, specially in derivative processes BIBREF22 .",
"It has brought to our attention that Spanish has a higher TTR than Nahuatl and Otomi, only when the languages are morphologically segmented. It seems that the morphs inventory is bigger in Spanish, we conjecture this is related to the fact that Spanish has more suppletion or “irregular” forms phenomena BIBREF24 ."
],
[
"The predictability of the internal structure of word is other dimension of complexity. It reflects the difficulty of producing novel words given a set of lexical items (stems, suffixes or morphs). First of all, as a general overview, we can see that word level models have the lower perplexity and entropy (Tables 4 and 5 ). We believe that this type of models capture better the morphological structure, since they take into account the possible combinations of morphs within a word and not outside the bounds of it (like the bigram model).",
"It is interesting to compare the TTR and the predictability measures for each language. In the case of Nahuatl, TTR shows that there is a lot of complexity at lexical level (many different word forms, few repetitions), however, this contrasts with the predictability of the elements that conform a lexical item: the combination of morphs within a word is more predictable than Spanish, since it obtains lower values of Perplexity and entropy. The combinatorial structure of Nahuatl morphology shows less uncertainty than Spanish one, despite the fact that Nahuatl is capable of producing many more different types in the corpus due to its agglutinative and polysynthetic nature.",
"The case of Otomi language is different, since it seems that it is not only complex in terms of TTR but also in terms of predictability. It obtains higher entropy and perplexity than Spanish. We conjecture this is related to several phenomena. For instance, Otomi and Nahuatl allow a large number of morphs combinations to modify a stem (inflectional and derivational). However, Otomi shows phenomena that is not easy to predict; for example, it has a complex system of inflectional classes, stem alternations and prefix changes. Moreover, tones and prosody plays an important role in the morphology of Otomi verbs BIBREF25 , BIBREF26 . Also, we mentioned before that many of the affixes concatenations in Otomi take place in derivative processes. Derivation tends to be less predictable than inflection phenomena (derivation is less frequent and less regular), and this could be an additional reason of why the entropy values of this language are high."
],
[
"In this work we used corpus based measures like TTR, entropy and perplexity for exploring the morphological complexity of three languages, using two small parallel corpora. We use TTR as a measure of morphological productivity of a language, and we use the entropy and perplexity calculated over a sequence of morphs, as a measure of predictability.",
"There may be a common believe that polysynthetical languages are far more complex than analytic ones. However, it is important to take into account the many factors that lay a role in the complexity of the system. We stressed out that morphological complexity has several dimensions that must be taken into account BIBREF3 .",
"While some agglutinative polysynthetical languages, like Nahuatl, could be considered complex by the number of morphemes the combinations and the information than can be encoded in a single word; the sequence of these elements may be more predictable than fusional languages like Spanish.",
"Languages like Otomi, showed high complexity in the two dimensions that we focused in this work (this is consistent with qualitative perspectives BIBREF26 ).",
"These two dimensions of complexity are valid and complementary. Measures like TTR reflect the amount of information that words can encode in a language, languages that have a high TTR have the potential of encoding a lot of functions at the word level, therefore, they produce many different word forms. Perplexity and entropy measured over a sequence of morphs reflect the predictability or degree of uncertainty of these combinations. The higher the entropy (hence, the perplexity), the higher the uncertainty in the combinations of morphs.",
"This was a preliminary work. Deeper linguistic analysis, more corpora and more languages are needed. However, we believe that quantitative measures extracted from parallel corpora can complement and deepen the study of linguistic complexity. Efforts are currently being made BIBREF27 . However, more studies are needed, especially for low resources languages."
],
[
"Languages of the world have a wide range of functions that can be codified at the world level. Therefore, it would be interesting to consider the study of more complexity dimensions in our work. Popular quantitative approaches are successful in reflecting how many morphs can be combined into a single word. However, it is also important to take into account how complex the format of a word can be, i.e., not only how many elements can be combined but also what type of elements. For example, dahl2009testing argues that when a phoneme is added to a word, this process is not as complex as adding a tone.",
"Another interesting dimension is the complexity of the morphology in terms of acquisition (of native and L2 speakers). miestamo2008grammatical points out that this typo of complexity should be made on the basis of psycho-linguistics analysis in both processing and acquisition.",
"Finally, one important factor that influences language complexity is culture. In many languages, pragmatics nuances are produced via morphological processes. For instance, languages like Nahuatl have a complex honorific or reverential system that is expressed using different types of affixes. Spanish expresses this type of phenomena with morphosyntactic processes. It is a challenging task to be able to quantify all these factors that play a role in the complexity of a language."
],
[
"This work was supported by the Mexican Council of Science and Technology (CONACYT), fund 2016-01-2225, and CB-2016/408885. We also thank the reviewers for their valuable comments and to our friend Morrisé P. Martinez for his unconditional support."
]
],
"section_name": [
"Introduction",
"The type-token relationship (TTR)",
"Entropy and Perplexity",
"The corpus",
"Morphological analysis tools",
"Complexity measures",
"TTR as a measure of morphological complexity",
"Predictability",
"Conclusions",
"Future work",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"f188eb6cafbe8f8fcf17d2fe6b516eea1b786a03"
],
"answer": [
{
"evidence": [
"Morphology deals with the internal structure of words BIBREF0 , BIBREF1 . Languages of the world have different word production processes. Morphological richness vary from language to language, depending on their linguistic typology. In natural language processing (NLP), taking into account the morphological complexity inherent to each language could be important for improving or adapting the existing methods, since the amount of semantic and grammatical information encoded at the word level, may vary significantly from language to language.",
"Additionally, most of the previous works do not analyze how the complexity changes when different types of morphological normalization procedures are applied to a language, e.g., lemmatization, stemming, morphological segmentation. This information could be useful for linguistic analysis and for measuring the impact of different word form normalization tools depending of the language. In this work, we analyze how the type-token relationship changes using different types of morphological normalization techniques."
],
"extractive_spans": [],
"free_form_answer": "Improve existing NLP methods. Improve linguistic analysis. Measure impact of word normalization tools.",
"highlighted_evidence": [
"In natural language processing (NLP), taking into account the morphological complexity inherent to each language could be important for improving or adapting the existing methods, since the amount of semantic and grammatical information encoded at the word level, may vary significantly from language to language.",
"This information could be useful for linguistic analysis and for measuring the impact of different word form normalization tools depending of the language."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"eca216170c00be9528a4f86abcb3ffe7115a9be2"
]
}
],
"nlp_background": [
"two"
],
"paper_read": [
"no"
],
"question": [
"what is the practical application for this paper?"
],
"question_id": [
"d5256d684b5f1b1ec648d996c358e66fe51f4904"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
""
],
"topic_background": [
"unfamiliar"
]
} | {
"caption": [
"Table 1: Size of the parallel corpus",
"Table 2: TTR for Nahuatl-Spanish corpus",
"Table 3: TTR for Otomi-Spanish corpus",
"Table 4: Perplexity obtained in the different parallel corpora",
"Table 5: Entropy obtained in the different parallel corpora"
],
"file": [
"4-Table1-1.png",
"4-Table2-1.png",
"4-Table3-1.png",
"5-Table4-1.png",
"5-Table5-1.png"
]
} | [
"what is the practical application for this paper?"
] | [
[
"1808.04314-Introduction-0",
"1808.04314-Introduction-4"
]
] | [
"Improve existing NLP methods. Improve linguistic analysis. Measure impact of word normalization tools."
] | 862 |
1909.08752 | Summary Level Training of Sentence Rewriting for Abstractive Summarization | As an attempt to combine extractive and abstractive summarization, Sentence Rewriting models adopt the strategy of extracting salient sentences from a document first and then paraphrasing the selected ones to generate a summary. However, the existing models in this framework mostly rely on sentence-level rewards or suboptimal labels, causing a mismatch between a training objective and evaluation metric. In this paper, we present a novel training signal that directly maximizes summary-level ROUGE scores through reinforcement learning. In addition, we incorporate BERT into our model, making good use of its ability on natural language understanding. In extensive experiments, we show that a combination of our proposed model and training procedure obtains new state-of-the-art performance on both CNN/Daily Mail and New York Times datasets. We also demonstrate that it generalizes better on DUC-2002 test set. | {
"paragraphs": [
[
"The task of automatic text summarization aims to compress a textual document to a shorter highlight while keeping salient information of the original text. In general, there are two ways to do text summarization: Extractive and Abstractive BIBREF0. Extractive approaches generate summaries by selecting salient sentences or phrases from a source text, while abstractive approaches involve a process of paraphrasing or generating sentences to write a summary.",
"Recent work BIBREF1, BIBREF2 demonstrates that it is highly beneficial for extractive summarization models to incorporate pre-trained language models (LMs) such as BERT BIBREF3 into their architectures. However, the performance improvement from the pre-trained LMs is known to be relatively small in case of abstractive summarization BIBREF4, BIBREF5. This discrepancy may be due to the difference between extractive and abstractive approaches in ways of dealing with the task—the former classifies whether each sentence to be included in a summary, while the latter generates a whole summary from scratch. In other words, as most of the pre-trained LMs are designed to be of help to the tasks which can be categorized as classification including extractive summarization, they are not guaranteed to be advantageous to abstractive summarization models that should be capable of generating language BIBREF6, BIBREF7.",
"On the other hand, recent studies for abstractive summarization BIBREF8, BIBREF9, BIBREF10 have attempted to exploit extractive models. Among these, a notable one is BIBREF8, in which a sophisticated model called Reinforce-Selected Sentence Rewriting is proposed. The model consists of both an extractor and abstractor, where the extractor picks out salient sentences first from a source article, and then the abstractor rewrites and compresses the extracted sentences into a complete summary. It is further fine-tuned by training the extractor with the rewards derived from sentence-level ROUGE scores of the summary generated from the abstractor.",
"In this paper, we improve the model of BIBREF8, addressing two primary issues. Firstly, we argue there is a bottleneck in the existing extractor on the basis of the observation that its performance as an independent summarization model (i.e., without the abstractor) is no better than solid baselines such as selecting the first 3 sentences. To resolve the problem, we present a novel neural extractor exploiting the pre-trained LMs (BERT in this work) which are expected to perform better according to the recent studies BIBREF1, BIBREF2. Since the extractor is a sort of sentence classifier, we expect that it can make good use of the ability of pre-trained LMs which is proven to be effective in classification.",
"Secondly, the other point is that there is a mismatch between the training objective and evaluation metric; the previous work utilizes the sentence-level ROUGE scores as a reinforcement learning objective, while the final performance of a summarization model is evaluated by the summary-level ROUGE scores. Moreover, as BIBREF11 pointed out, sentences with the highest individual ROUGE scores do not necessarily lead to an optimal summary, since they may contain overlapping contents, causing verbose and redundant summaries. Therefore, we propose to directly use the summary-level ROUGE scores as an objective instead of the sentence-level scores. A potential problem arising from this apprsoach is the sparsity of training signals, because the summary-level ROUGE scores are calculated only once for each training episode. To alleviate this problem, we use reward shaping BIBREF12 to give an intermediate signal for each action, preserving the optimal policy.",
"We empirically demonstrate the superiority of our approach by achieving new state-of-the-art abstractive summarization results on CNN/Daily Mail and New York Times datasets BIBREF13, BIBREF14. It is worth noting that our approach shows large improvements especially on ROUGE-L score which is considered a means of assessing fluency BIBREF11. In addition, our model performs much better than previous work when testing on DUC-2002 dataset, showing better generalization and robustness of our model.",
"Our contributions in this work are three-fold: a novel successful application of pre-trained transformers for abstractive summarization; suggesting a training method to globally optimize sentence selection; achieving the state-of-the-art results on the benchmark datasets, CNN/Daily Mail and New York Times."
],
[
"In this paper, we focus on single-document multi-sentence summarization and propose a neural abstractive model based on the Sentence Rewriting framework BIBREF8, BIBREF15 which consists of two parts: a neural network for the extractor and another network for the abstractor. The extractor network is designed to extract salient sentences from a source article. The abstractor network rewrites the extracted sentences into a short summary."
],
[
"The most common way to train extractor to select informative sentences is building extractive oracles as gold targets, and training with cross-entropy (CE) loss. An oracle consists of a set of sentences with the highest possible ROUGE scores. Building oracles is finding an optimal combination of sentences, where there are $2^n$ possible combinations for each example. Because of this, the exact optimization for ROUGE scores is intractable. Therefore, alternative methods identify the set of sentences with greedy search BIBREF16, sentence-level search BIBREF9, BIBREF17 or collective search using the limited number of sentences BIBREF15, which construct suboptimal oracles. Even if all the optimal oracles are found, training with CE loss using these labels will cause underfitting as it will only maximize probabilities for sentences in label sets and ignore all other sentences.",
"Alternatively, reinforcement learning (RL) can give room for exploration in the search space. BIBREF8, our baseline work, proposed to apply policy gradient methods to train an extractor. This approach makes an end-to-end trainable stochastic computation graph, encouraging the model to select sentences with high ROUGE scores. However, they define a reward for an action (sentence selection) as a sentence-level ROUGE score between the chosen sentence and a sentence in the ground truth summary for that time step. This leads the extractor agent to a suboptimal policy; the set of sentences matching individually with each sentence in a ground truth summary isn't necessarily optimal in terms of summary-level ROUGE score.",
"BIBREF11 proposed policy gradient with rewards from summary-level ROUGE. They defined an action as sampling a summary from candidate summaries that contain the limited number of plausible sentences. After training, a sentence is ranked high for selection if it often occurs in high scoring summaries. However, their approach still has a risk of ranking redundant sentences high; if two highly overlapped sentences have salient information, they would be ranked high together, increasing the probability of being sampled in one summary.",
"To tackle this problem, we propose a training method using reinforcement learning which globally optimizes summary-level ROUGE score and gives intermediate rewards to ease the learning."
],
[
"Transferring representations from pre-trained transformer language models has been highly successful in the domain of natural language understanding tasks BIBREF18, BIBREF3, BIBREF19, BIBREF20. These methods first pre-train highly stacked transformer blocks BIBREF21 on a huge unlabeled corpus, and then fine-tune the models or representations on downstream tasks."
],
[
"Our model consists of two neural network modules, i.e. an extractor and abstractor. The extractor encodes a source document and chooses sentences from the document, and then the abstractor paraphrases the summary candidates. Formally, a single document consists of $n$ sentences $D=\\lbrace s_1,s_2,\\cdots ,s_n\\rbrace $. We denote $i$-th sentence as $s_i=\\lbrace w_{i1},w_{i2},\\cdots ,w_{im}\\rbrace $ where $w_{ij}$ is the $j$-th word in $s_i$. The extractor learns to pick out a subset of $D$ denoted as $\\hat{D}=\\lbrace \\hat{s}_1,\\hat{s}_2,\\cdots ,\\hat{s}_k|\\hat{s}_i\\in D\\rbrace $ where $k$ sentences are selected. The abstractor rewrites each of the selected sentences to form a summary $S=\\lbrace f(\\hat{s}_1),f(\\hat{s}_2),\\cdots ,f(\\hat{s}_k)\\rbrace $, where $f$ is an abstracting function. And a gold summary consists of $l$ sentences $A=\\lbrace a_1,a_2,\\cdots ,a_l\\rbrace $."
],
[
"The extractor is based on the encoder-decoder framework. We adapt BERT for the encoder to exploit contextualized representations from pre-trained transformers. BERT as the encoder maps the input sequence $D$ to sentence representation vectors $H=\\lbrace h_1,h_2,\\cdots ,h_n\\rbrace $, where $h_i$ is for the $i$-th sentence in the document. Then, the decoder utilizes $H$ to extract $\\hat{D}$ from $D$."
],
[
"Although we require the encoder to output the representation for each sentence, the output vectors from BERT are grounded to tokens instead of sentences. Therefore, we modify the input sequence and embeddings of BERT as BIBREF1 did.",
"In the original BERT's configure, a [CLS] token is used to get features from one sentence or a pair of sentences. Since we need a symbol for each sentence representation, we insert the [CLS] token before each sentence. And we add a [SEP] token at the end of each sentence, which is used to differentiate multiple sentences. As a result, the vector for the $i$-th [CLS] symbol from the top BERT layer corresponds to the $i$-th sentence representation $h_i$.",
"In addition, we add interval segment embeddings as input for BERT to distinguish multiple sentences within a document. For $s_i$ we assign a segment embedding $E_A$ or $E_B$ conditioned on $i$ is odd or even. For example, for a consecutive sequence of sentences $s_1, s_2, s_3, s_4, s_5$, we assign $E_A, E_B, E_A, E_B, E_A$ in order. All the words in each sentence are assigned to the same segment embedding, i.e. segment embeddings for $w_{11}, w_{12},\\cdots ,w_{1m}$ is $E_A,E_A,\\cdots ,E_A$. An illustration for this procedure is shown in Figure FIGREF1."
],
[
"We use LSTM Pointer Network BIBREF22 as the decoder to select the extracted sentences based on the above sentence representations. The decoder extracts sentences recurrently, producing a distribution over all of the remaining sentence representations excluding those already selected. Since we use the sequential model which selects one sentence at a time step, our decoder can consider the previously selected sentences. This property is needed to avoid selecting sentences that have overlapping information with the sentences extracted already.",
"As the decoder structure is almost the same with the previous work, we convey the equations of BIBREF8 to avoid confusion, with minor modifications to agree with our notations. Formally, the extraction probability is calculated as:",
"where $e_t$ is the output of the glimpse operation:",
"In Equation DISPLAY_FORM9, $z_t$ is the hidden state of the LSTM decoder at time $t$ (shown in green in Figure FIGREF1). All the $W$ and $v$ are trainable parameters."
],
[
"The abstractor network approximates $f$, which compresses and paraphrases an extracted document sentence to a concise summary sentence. We use the standard attention based sequence-to-sequence (seq2seq) model BIBREF23, BIBREF24 with the copying mechanism BIBREF25 for handling out-of-vocabulary (OOV) words. Our abstractor is practically identical to the one proposed in BIBREF8."
],
[
"In our model, an extractor selects a series of sentences, and then an abstractor paraphrases them. As they work in different ways, we need different training strategies suitable for each of them. Training the abstractor is relatively obvious; maximizing log-likelihood for the next word given the previous ground truth words. However, there are several issues for extractor training. First, the extractor should consider the abstractor's rewriting process when it selects sentences. This causes a weak supervision problem BIBREF26, since the extractor gets training signals indirectly after paraphrasing processes are finished. In addition, thus this procedure contains sampling or maximum selection, the extractor performs a non-differentiable extraction. Lastly, although our goal is maximizing ROUGE scores, neural models cannot be trained directly by maximum likelihood estimation from them.",
"To address those issues above, we apply standard policy gradient methods, and we propose a novel training procedure for extractor which guides to the optimal policy in terms of the summary-level ROUGE. As usual in RL for sequence prediction, we pre-train submodules and apply RL to fine-tune the extractor."
],
[
"Starting from a poor random policy makes it difficult to train the extractor agent to converge towards the optimal policy. Thus, we pre-train the network using cross entropy (CE) loss like previous work BIBREF27, BIBREF8. However, there is no gold label for extractive summarization in most of the summarization datasets. Hence, we employ a greedy approach BIBREF16 to make the extractive oracles, where we add one sentence at a time incrementally to the summary, such that the ROUGE score of the current set of selected sentences is maximized for the entire ground truth summary. This doesn't guarantee optimal, but it is enough to teach the network to select plausible sentences. Formally, the network is trained to minimize the cross-entropy loss as follows:",
"where $s^*_t$ is the $t$-th generated oracle sentence."
],
[
"For the abstractor training, we should create training pairs for input and target sentences. As the abstractor paraphrases on sentence-level, we take a sentence-level search for each ground-truth summary sentence. We find the most similar document sentence $s^{\\prime }_t$ by:",
"And then the abstractor is trained as a usual sequence-to-sequence model to minimize the cross-entropy loss:",
"where $w^a_j$ is the $j$-th word of the target sentence $a_t$, and $\\Phi $ is the encoded representation for $s^{\\prime }_t$."
],
[
"To optimize ROUGE metric directly, we assume the extractor as an agent in reinforcement learning paradigm BIBREF28. We view the extractor has a stochastic policy that generates actions (sentence selection) and receives the score of final evaluation metric (summary-level ROUGE in our case) as the return",
"While we are ultimately interested in the maximization of the score of a complete summary, simply awarding this score at the last step provides a very sparse training signal. For this reason we define intermediate rewards using reward shaping BIBREF12, which is inspired by BIBREF27's attempt for sequence prediction. Namely, we compute summary-level score values for all intermediate summaries:",
"The reward for each step $r_t$ is the difference between the consecutive pairs of scores:",
"This measures an amount of increase or decrease in the summary-level score from selecting $\\hat{s}_t$. Using the shaped reward $r_t$ instead of awarding the whole score $R$ at the last step does not change the optimal policy BIBREF12. We define a discounted future reward for each step as $R_t=\\sum _{t=1}^{k}\\gamma ^tr_{t+1}$, where $\\gamma $ is a discount factor.",
"Additionally, we add `stop' action to the action space, by concatenating trainable parameters $h_{\\text{stop}}$ (the same dimension as $h_i$) to $H$. The agent treats it as another candidate to extract. When it selects `stop', an extracting episode ends and the final return is given. This encourages the model to extract additional sentences only when they are expected to increase the final return.",
"Following BIBREF8, we use the Advantage Actor Critic BIBREF29 method to train. We add a critic network to estimate a value function $V_t(D,\\hat{s}_1,\\cdots ,\\hat{s}_{t-1})$, which then is used to compute advantage of each action (we will omit the current state $(D,\\hat{s}_1,\\cdots ,\\hat{s}_{t-1})$ to simplify):",
"where $Q_t(s_i)$ is the expected future reward for selecting $s_i$ at the current step $t$. We maximize this advantage with the policy gradient with the Monte-Carlo sample ($A_t(s_i) \\approx R_t - V_t$):",
"where $\\theta _\\pi $ is the trainable parameters of the actor network (original extractor). And the critic is trained to minimize the square loss:",
"where $\\theta _\\psi $ is the trainable parameters of the critic network."
],
[
"We evaluate the proposed approach on the CNN/Daily Mail BIBREF13 and New York Times BIBREF30 dataset, which are both standard corpora for multi-sentence abstractive summarization. Additionally, we test generalization of our model on DUC-2002 test set.",
"CNN/Daily Mail dataset consists of more than 300K news articles and each of them is paired with several highlights. We used the standard splits of BIBREF13 for training, validation and testing (90,226/1,220/1,093 documents for CNN and 196,961/12,148/10,397 for Daily Mail). We did not anonymize entities. We followed the preprocessing methods in BIBREF25 after splitting sentences by Stanford CoreNLP BIBREF31.",
"The New York Times dataset also consists of many news articles. We followed the dataset splits of BIBREF14; 100,834 for training and 9,706 for test examples. And we also followed the filtering procedure of them, removing documents with summaries that are shorter than 50 words. The final test set (NYT50) contains 3,452 examples out of the original 9,706.",
"The DUC-2002 dataset contains 567 document-summary pairs for single-document summarization. As a single document can have multiple summaries, we made one pair per summary. We used this dataset as a test set for our model trained on CNN/Daily Mail dataset to test generalization."
],
[
"Our extractor is built on $\\text{BERT}_\\text{BASE}$ with fine-tuning, smaller version than $\\text{BERT}_\\text{LARGE}$ due to limitation of time and space. We set LSTM hidden size as 256 for all of our models. To initialize word embeddings for our abstractor, we use word2vec BIBREF32 of 128 dimensions trained on the same corpus. We optimize our model with Adam optimizer BIBREF33 with $\\beta _1=0.9$ and $\\beta _2=0.999$. For extractor pre-training, we use learning rate schedule following BIBREF21 with $warmup=10000$:",
"And we set learning rate $1e^{-3}$ for abstractor and $4e^{-6}$ for RL training. We apply gradient clipping using L2 norm with threshold $2.0$. For RL training, we use $\\gamma =0.95$ for the discount factor. To ease learning $h_{\\text{stop}}$, we set the reward for the stop action to $\\lambda \\cdot \\text{ROUGE-L}^{\\text{summ}}_{F_1}(S, A)$, where $\\lambda $ is a stop coefficient set to $0.08$. Our critic network shares the encoder with the actor (extractor) and has the same architecture with it except the output layer, estimating scalar for the state value. And the critic is initialized with the parameters of the pre-trained extractor where it has the same architecture."
],
[
"We evaluate the performance of our method using different variants of ROUGE metric computed with respect to the gold summaries. On the CNN/Daily Mail and DUC-2002 dataset, we use standard ROUGE-1, ROUGE-2, and ROUGE-L BIBREF34 on full length $F_1$ with stemming as previous work did BIBREF16, BIBREF25, BIBREF8. On NYT50 dataset, following BIBREF14 and BIBREF35, we used the limited length ROUGE recall metric, truncating the generated summary to the length of the ground truth summary."
],
[
"Table TABREF24 shows the experimental results on CNN/Daily Mail dataset, with extractive models in the top block and abstractive models in the bottom block. For comparison, we list the performance of many recent approaches with ours."
],
[
"As BIBREF25 showed, the first 3 sentences (lead-3) in an article form a strong summarization baseline in CNN/Daily Mail dataset. Therefore, the very first objective of extractive models is to outperform the simple method which always returns 3 or 4 sentences at the top. However, as Table TABREF27 shows, ROUGE scores of lead baselines and extractors from previous work in Sentence Rewrite framework BIBREF8, BIBREF15 are almost tie. We can easily conjecture that the limited performances of their full model are due to their extractor networks. Our extractor network with BERT (BERT-ext), as a single model, outperforms those models with large margins. Adding reinforcement learning (BERT-ext + RL) gives higher performance, which is competitive with other extractive approaches using pre-trained Transformers (see Table TABREF24). This shows the effectiveness of our learning method."
],
[
"Our abstractive approaches combine the extractor with the abstractor. The combined model (BERT-ext + abs) without additional RL training outperforms the Sentence Rewrite model BIBREF8 without reranking, showing the effectiveness of our extractor network. With the proposed RL training procedure (BERT-ext + abs + RL), our model exceeds the best model of BIBREF8. In addition, the result is better than those of all the other abstractive methods exploiting extractive approaches in them BIBREF9, BIBREF8, BIBREF10."
],
[
"Although the proposed RL training inherently gives training signals that induce the model to avoid redundancy across sentences, there can be still remaining overlaps between extracted sentences. We found that the additional methods reducing redundancies can improve the summarization quality, especially on CNN/Daily Mail dataset.",
"We tried Trigram Blocking BIBREF1 for extractor and Reranking BIBREF8 for abstractor, and we empirically found that the reranking only improves the performance. This helps the model to compress the extracted sentences focusing on disjoint information, even if there are some partial overlaps between the sentences. Our best abstractive model (BERT-ext + abs + RL + rerank) achieves the new state-of-the-art performance for abstractive summarization in terms of average ROUGE score, with large margins on ROUGE-L.",
"However, we empirically found that the reranking method has no effect or has negative effect on NYT50 or DUC-2002 dataset. Hence, we don't apply it for the remaining datasets."
],
[
"Before seeing the effects of our summary-level rewards on final results, we check the upper bounds of different training signals for the full model. All the document sentences are paraphrased with our trained abstractor, and then we find the best set for each search method. Sentence-matching finds sentences with the highest ROUGE-L score for each sentence in the gold summary. This search method matches with the best reward from BIBREF8. Greedy Search is the same method explained for extractor pre-training in section SECREF11. Combination Search selects a set of sentences which has the highest summary-level ROUGE-L score, from all the possible combinations of sentences. Due to time constraints, we limited the maximum number of sentences to 5. This method corresponds to our final return in RL training.",
"Table TABREF31 shows the summary-level ROUGE scores of previously explained methods. We see considerable gaps between Sentence-matching and Greedy Search, while the scores of Greedy Search are close to those of Combination Search. Note that since we limited the number of sentences for Combination Search, the exact scores for it would be higher. The scores can be interpreted to be upper bounds for corresponding training methods. This result supports our training strategy; pre-training with Greedy Search and final optimization with the combinatorial return.",
"Additionally, we experiment to verify the contribution of our training method. We train the same model with different training signals; Sentence-level reward from BIBREF8 and combinatorial reward from ours. The results are shown in Table TABREF34. Both with and without reranking, the models trained with the combinatorial reward consistently outperform those trained with the sentence-level reward."
],
[
"We also conduct human evaluation to ensure robustness of our training procedure. We measure relevance and readability of the summaries. Relevance is based on the summary containing important, salient information from the input article, being correct by avoiding contradictory/unrelated information, and avoiding repeated/redundant information. Readability is based on the summarys fluency, grammaticality, and coherence. To evaluate both these criteria, we design a Amazon Mechanical Turk experiment based on ranking method, inspired by BIBREF36. We randomly select 20 samples from the CNN/Daily Mail test set and ask the human testers (3 for each sample) to rank summaries (for relevance and readability) produced by 3 different models: our final model, that of BIBREF8 and that of BIBREF1. 2, 1 and 0 points were given according to the ranking. The models were anonymized and randomly shuffled. Following previous work, the input article and ground truth summaries are also shown to the human participants in addition to the three model summaries. From the results shown in Table TABREF36, we can see that our model is better in relevance compared to others. In terms of readability, there was no noticeable difference."
],
[
"Table TABREF38 gives the results on NYT50 dataset. We see our BERT-ext + abs + RL outperforms all the extractive and abstractive models, except ROUGE-1 from BIBREF1. Comparing with two recent models that adapted BERT on their summarization models BIBREF1, BIBREF4, we can say that we proposed another method successfully leveraging BERT for summarization. In addition, the experiment proves the effectiveness of our RL training, with about 2 point improvement for each ROUGE metric."
],
[
"We also evaluated the models trained on the CNN/Daily Mail dataset on the out-of-domain DUC-2002 test set as shown in Table TABREF41. BERT-ext + abs + RL outperforms baseline models with large margins on all of the ROUGE scores. This result shows that our model generalizes better."
],
[
"There has been a variety of deep neural network models for abstractive document summarization. One of the most dominant structures is the sequence-to-sequence (seq2seq) models with attention mechanism BIBREF37, BIBREF38, BIBREF39. BIBREF25 introduced Pointer Generator network that implicitly combines the abstraction with the extraction, using copy mechanism BIBREF40, BIBREF41. More recently, there have been several studies that have attempted to improve the performance of the abstractive summarization by explicitly combining them with extractive models. Some notable examples include the use of inconsistency loss BIBREF9, key phrase extraction BIBREF42, BIBREF10, and sentence extraction with rewriting BIBREF8. Our model improves Sentence Rewriting with BERT as an extractor and summary-level rewards to optimize the extractor.",
"Reinforcement learning has been shown to be effective to directly optimize a non-differentiable objective in language generation including text summarization BIBREF43, BIBREF27, BIBREF35, BIBREF44, BIBREF11. BIBREF27 use actor-critic methods for language generation, using reward shaping BIBREF12 to solve the sparsity of training signals. Inspired by this, we generalize it to sentence extraction to give per step reward preserving optimality."
],
[
"We have improved Sentence Rewriting approaches for abstractive summarization, proposing a novel extractor architecture exploiting BERT and a novel training procedure which globally optimizes summary-level ROUGE metric. Our approach achieves the new state-of-the-art on both CNN/Daily Mail and New York Times datasets as well as much better generalization on DUC-2002 test set."
],
[
"We thank anonymous reviewers for their constructive and fruitful comments. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF2016M3C4A7952587)."
]
],
"section_name": [
"Introduction",
"Background ::: Sentence Rewriting",
"Background ::: Learning Sentence Selection",
"Background ::: Pre-trained Transformers",
"Model",
"Model ::: Extractor Network",
"Model ::: Extractor Network ::: Leveraging Pre-trained Transformers",
"Model ::: Extractor Network ::: Sentence Selection",
"Model ::: Abstractor Network",
"Training",
"Training ::: Training Submodules ::: Extractor Pre-training",
"Training ::: Training Submodules ::: Abstractor Training",
"Training ::: Guiding to the Optimal Policy",
"Experimental Setup ::: Datasets",
"Experimental Setup ::: Implementation Details",
"Experimental Setup ::: Evaluation",
"Results ::: CNN/Daily Mail",
"Results ::: CNN/Daily Mail ::: Extractive Summarization",
"Results ::: CNN/Daily Mail ::: Abstractive Summarization",
"Results ::: CNN/Daily Mail ::: Redundancy Control",
"Results ::: CNN/Daily Mail ::: Combinatorial Reward",
"Results ::: CNN/Daily Mail ::: Human Evaluation",
"Results ::: New York Times corpus",
"Results ::: DUC-2002",
"Related Work",
"Conclusions",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"f1bd64269e5df9c6c967398a92262401d734ad10"
],
"answer": [
{
"evidence": [
"Our model consists of two neural network modules, i.e. an extractor and abstractor. The extractor encodes a source document and chooses sentences from the document, and then the abstractor paraphrases the summary candidates. Formally, a single document consists of $n$ sentences $D=\\lbrace s_1,s_2,\\cdots ,s_n\\rbrace $. We denote $i$-th sentence as $s_i=\\lbrace w_{i1},w_{i2},\\cdots ,w_{im}\\rbrace $ where $w_{ij}$ is the $j$-th word in $s_i$. The extractor learns to pick out a subset of $D$ denoted as $\\hat{D}=\\lbrace \\hat{s}_1,\\hat{s}_2,\\cdots ,\\hat{s}_k|\\hat{s}_i\\in D\\rbrace $ where $k$ sentences are selected. The abstractor rewrites each of the selected sentences to form a summary $S=\\lbrace f(\\hat{s}_1),f(\\hat{s}_2),\\cdots ,f(\\hat{s}_k)\\rbrace $, where $f$ is an abstracting function. And a gold summary consists of $l$ sentences $A=\\lbrace a_1,a_2,\\cdots ,a_l\\rbrace $.",
"The extractor is based on the encoder-decoder framework. We adapt BERT for the encoder to exploit contextualized representations from pre-trained transformers. BERT as the encoder maps the input sequence $D$ to sentence representation vectors $H=\\lbrace h_1,h_2,\\cdots ,h_n\\rbrace $, where $h_i$ is for the $i$-th sentence in the document. Then, the decoder utilizes $H$ to extract $\\hat{D}$ from $D$.",
"We use LSTM Pointer Network BIBREF22 as the decoder to select the extracted sentences based on the above sentence representations. The decoder extracts sentences recurrently, producing a distribution over all of the remaining sentence representations excluding those already selected. Since we use the sequential model which selects one sentence at a time step, our decoder can consider the previously selected sentences. This property is needed to avoid selecting sentences that have overlapping information with the sentences extracted already.",
"The abstractor network approximates $f$, which compresses and paraphrases an extracted document sentence to a concise summary sentence. We use the standard attention based sequence-to-sequence (seq2seq) model BIBREF23, BIBREF24 with the copying mechanism BIBREF25 for handling out-of-vocabulary (OOV) words. Our abstractor is practically identical to the one proposed in BIBREF8."
],
"extractive_spans": [],
"free_form_answer": "Two neural networks: an extractor based on an encoder (BERT) and a decoder (LSTM Pointer Network BIBREF22) and an abstractor identical to the one proposed in BIBREF8.",
"highlighted_evidence": [
"Our model consists of two neural network modules, i.e. an extractor and abstractor. The extractor encodes a source document and chooses sentences from the document, and then the abstractor paraphrases the summary candidates. ",
"The extractor is based on the encoder-decoder framework. We adapt BERT for the encoder to exploit contextualized representations from pre-trained transformers.",
"We use LSTM Pointer Network BIBREF22 as the decoder to select the extracted sentences based on the above sentence representations.",
" Our abstractor is practically identical to the one proposed in BIBREF8."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
}
],
"nlp_background": [
""
],
"paper_read": [
""
],
"question": [
"What's the method used here?"
],
"question_id": [
"0b411f942c6e2e34e3d81cc855332f815b6bc123"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
""
],
"topic_background": [
""
]
} | {
"caption": [
"Figure 1: The overview architecture of the extractor netwrok",
"Table 1: Performance on CNN/Daily Mail test set using the full length ROUGE F1 score. R-AVG calculates average score of ROUGE-1, ROUGE-2 and ROUGE-L.",
"Table 2: Comparison of extractor networks.",
"Table 3: Comparison of different methods building upper bound for full model.",
"Table 4: Comparison of RL training.",
"Table 5: Results of human evaluation.",
"Table 6: Performance on NYT50 test set using the limited length ROUGE recall score.",
"Table 7: Performance on DUC-2002 test set using the full length ROUGE F1 score.",
"Table 8: Example from the CNN/Dail Mail test set showing the generated summary of our best model. The colored sentences in the source document are the corresponding extracted sentences.",
"Table 9: Example from the CNN/Dail Mail test set showing the generated summary of our best model. The colored sentences in the source document are the corresponding extracted sentences."
],
"file": [
"3-Figure1-1.png",
"6-Table1-1.png",
"6-Table2-1.png",
"7-Table3-1.png",
"7-Table4-1.png",
"8-Table5-1.png",
"8-Table6-1.png",
"8-Table7-1.png",
"12-Table8-1.png",
"13-Table9-1.png"
]
} | [
"What's the method used here?"
] | [
[
"1909.08752-Model ::: Abstractor Network-0",
"1909.08752-Model ::: Extractor Network-0",
"1909.08752-Model-0",
"1909.08752-Model ::: Extractor Network ::: Sentence Selection-0"
]
] | [
"Two neural networks: an extractor based on an encoder (BERT) and a decoder (LSTM Pointer Network BIBREF22) and an abstractor identical to the one proposed in BIBREF8."
] | 864 |
1905.10247 | Contextual Out-of-Domain Utterance Handling With Counterfeit Data Augmentation | Neural dialog models often lack robustness to anomalous user input and produce inappropriate responses which leads to frustrating user experience. Although there are a set of prior approaches to out-of-domain (OOD) utterance detection, they share a few restrictions: they rely on OOD data or multiple sub-domains, and their OOD detection is context-independent which leads to suboptimal performance in a dialog. The goal of this paper is to propose a novel OOD detection method that does not require OOD data by utilizing counterfeit OOD turns in the context of a dialog. For the sake of fostering further research, we also release new dialog datasets which are 3 publicly available dialog corpora augmented with OOD turns in a controllable way. Our method outperforms state-of-the-art dialog models equipped with a conventional OOD detection mechanism by a large margin in the presence of OOD utterances. | {
"paragraphs": [
[
"Recently, there has been a surge of excitement in developing chatbots for various purposes in research and enterprise. Data-driven approaches offered by common bot building platforms (e.g. Google Dialogflow, Amazon Alexa Skills Kit, Microsoft Bot Framework) make it possible for a wide range of users to easily create dialog systems with a limited amount of data in their domain of interest. Although most task-oriented dialog systems are built for a closed set of target domains, any failure to detect out-of-domain (OOD) utterances and respond with an appropriate fallback action can lead to frustrating user experience. There have been a set of prior approaches for OOD detection which require both in-domain (IND) and OOD data BIBREF0 , BIBREF1 . However, it is a formidable task to collect sufficient data to cover in theory unbounded variety of OOD utterances. In contrast, BIBREF2 introduced an in-domain verification method that requires only IND utterances. Later, with the rise of deep neural networks, BIBREF3 proposed an autoencoder-based OOD detection method which surpasses prior approaches without access to OOD data. However, those approaches still have some restrictions such that there must be multiple sub-domains to learn utterance representation and one must set a decision threshold for OOD detection. This can prohibit these methods from being used for most bots that focus on a single task.",
"The goal of this paper is to propose a novel OOD detection method that does not require OOD data by utilizing counterfeit OOD turns in the context of a dialog. Most prior approaches do not consider dialog context and make predictions for each utterance independently. We will show that this independent decision leads to suboptimal performance even when actual OOD utterances are given to optimize the model and that the use of dialog context helps reduce OOD detection errors. To consider dialog context, we need to connect the OOD detection task with the overall dialog task. Thus, for this work, we build upon Hybrid Code Networks (HCN) BIBREF4 since HCNs achieve state-of-the-art performance in a data-efficient way for task-oriented dialogs, and propose AE-HCNs which extend HCNs with an autoencoder (Figure FIGREF8 ). Furthermore, we release new dialog datasets which are three publicly available dialog corpora augmented with OOD turns in a controlled way (exemplified in Table TABREF2 ) to foster further research. "
],
[
"In this section, we first present the standard HCN model. Then we introduce the proposed AE-HCN(-CNN) model, consisting of an autoencoder and a reconstruction score-aware HCN model. Finally, we describe the counterfeit data augmentation method for training the proposed model."
],
[
"As shown in Figure FIGREF8 , HCN considers a dialog as a sequence of turns. At each turn, HCN takes a tuple, INLINEFORM0 , as input to produce the next system action INLINEFORM1 , where INLINEFORM2 is a user utterance consisting of INLINEFORM3 tokens, i.e., INLINEFORM4 , INLINEFORM5 a one-hot vector encoding the previous system action and INLINEFORM6 a contextual feature vector generated by domain-specific code. The user utterance is encoded as a concatenation of a bag-of-words representation and an average of word embeddings of the user utterance: DISPLAYFORM0 ",
"where INLINEFORM0 denotes a word embedding layer initialized with GloVe BIBREF5 with 100 dimensions. HCN then considers the input tuple, INLINEFORM1 , to update the dialog state through an LSTM BIBREF6 with 200 hidden units: DISPLAYFORM0 ",
"Finally, a distribution over system actions is calculated by a dense layer with a softmax activation: DISPLAYFORM0 "
],
[
"On top of HCN, AE-HCN additionally takes as input an autoencoder's reconstruction score INLINEFORM0 for the user utterance for dialog state update (Figure FIGREF8 ): DISPLAYFORM0 ",
"The autoencoder is a standard seq2seq model which projects a user utterance into a latent vector and reconstructs the user utterance. Specifically, the encoder reads INLINEFORM0 using a GRU BIBREF7 to produce a 512-dimensional hidden vector INLINEFORM1 which in turn gets linearly projected to a 200-dimensional latent vector INLINEFORM2 : DISPLAYFORM0 DISPLAYFORM1 ",
"The output of the decoder at step INLINEFORM0 is a distribution over words: DISPLAYFORM0 DISPLAYFORM1 ",
"where INLINEFORM0 has 512 hidden units. The reconstruction score INLINEFORM1 is the normalized generation probability of INLINEFORM2 : DISPLAYFORM0 "
],
[
"AE-HCN-CNN is a variant of AE-HCN where user utterances are encoded using a CNN layer with max-pooling (following BIBREF8 ) rather than equation EQREF5 : DISPLAYFORM0 ",
"The CNN layer considers two kernel sizes (2 and 3) and has 100 filters for each kernel size."
],
[
"To endow an AE-HCN(-CNN) model with a capability of detecting OOD utterances and producing fallback actions without requiring real OOD data, we augment training data with counterfeit turns. We first select arbitrary turns in a dialog at random according to a counterfeit OOD probability INLINEFORM0 , and insert counterfeit turns before the selected turns. A counterfeit turn consists of a tuple INLINEFORM1 as input and a fallback action INLINEFORM2 as output. We copy INLINEFORM3 and INLINEFORM4 of each selected turn to the corresponding counterfeit turns since OOD utterances do not affect previous system action and feature vectors generated by domain-specific code. Now we generate a counterfeit INLINEFORM5 and INLINEFORM6 . Since we don't know OOD utterances a priori, we randomly choose one of the user utterances of the same dialog to be INLINEFORM7 . This helps the model learn to detect OOD utterances because a random user utterance is contextually inappropriate just like OOD utterances are. We generate INLINEFORM8 by drawing a sample from a uniform distribution, INLINEFORM9 , where INLINEFORM10 is the maximum reconstruction score of training data and INLINEFORM11 is an arbitrary large number. The rationale is that the reconstruction scores of OOD utterances are likely to be larger than INLINEFORM12 but we don't know what distribution the reconstruction scores of OOD turns would follow. Thus we choose the most uninformed distribution, i.e., a uniform distribution so that the model may be encouraged to consider not only reconstruction score but also other contextual features such as the appropriateness of the user utterance given the context, changes in the domain-specific feature vector, and what action the system previously took."
],
[
"To study the effect of OOD input on dialog system's performance, we use three task-oriented dialog datasets: bAbI6 BIBREF9 initially collected for Dialog State Tracking Challenge 2 BIBREF10 ; GR and GM taken from Google multi-domain dialog datasets BIBREF11 . Basic statistics of the datasets are shown in Table TABREF22 . bAbI6 deals with restaurant finding tasks, GM buying a movie ticket, and GR reserving a restaurant table, respectively. We generated distinct action templates by replacing entities with slot types and consolidating based on dialog act annotations.",
"We augment test datasets (denoted as Test-OOD in Table TABREF22 ) with real user utterances from other domains in a controlled way. Our OOD augmentations are as follows:",
"These two augmentation types reflect a specific dialog pattern of interest (see Table TABREF2 ): first, the user utters a request from another domain at an arbitrary point in the dialog (each turn is augmented with the probability INLINEFORM0 , which is set to 0.2 for this study), and the system answers accordingly. This may go on for several turns in a row —each following turn is augmented with the probability INLINEFORM1 , which is set to 0.4 for this study. Eventually, the OOD sequence ends up and the dialog continues as usual, with a segment-level OOD content of the user affirming their mistake. While we introduce the OOD augmentations in a controlled programmatic way, the actual OOD content is natural. The OOD utterances are taken from dialog datasets in several foreign domains: 1) Frames dataset BIBREF12 — travel booking (1198 utterances); 2) Stanford Key-Value Retrieval Network Dataset BIBREF13 — calendar scheduling, weather information retrieval, city navigation (3030 utterances); 3) Dialog State Tracking Challenge 1 BIBREF14 — bus information (968 utterances).",
"In order to avoid incomplete/elliptical phrases, we only took the first user's utterances from the dialogs. For segment-level OOD content, we mined utterances with the explicit affirmation of a mistake from Twitter and Reddit conversations datasets — 701 and 500 utterances respectively."
],
[
"We comparatively evaluate four different models: 1) an HCN model trained on in-domain training data; 2) an AE-HCN-Indep model which is the same as the HCN model except that it deals with OOD utterances using an independent autoencoder-based rule to mimic BIBREF3 – when the reconstruction score is greater than a threshold, the fallback action is chosen; we set the threshold to the maximum reconstruction score of training data; 3) an AE-HCN(-CNN) model trained on training data augmented with counterfeit OOD turns – the counterfeit OOD probability INLINEFORM0 is set to 15% and INLINEFORM1 to 30. We apply dropout to the user utterance encoding with the probability 0.3. We use the Adam optimizer BIBREF15 , with gradients computed on mini-batches of size 1 and clipped with norm value 5. The learning rate was set to INLINEFORM2 throughout the training and all the other hyperparameters were left as suggested in BIBREF15 . We performed early stopping based on the performance of the evaluation data to avoid overfitting. We first pretrain the autoencoder on in-domain training data and keep it fixed while training other components.",
"The result is shown in Table TABREF23 . Since there are multiple actions that are appropriate for a given dialog context, we use per-utterance Precision@K as performance metric. We also report f1-score for OOD detection to measure the balance between precision and recall. The performances of HCN on Test-OOD are about 15 points down on average from those on Test, showing the detrimental impact of OOD utterances to such models only trained on in-domain training data. AE-HCN(-CNN) outperforms HCN on Test-OOD by a large margin about 17(20) points on average while keeping the minimum performance trade-off compared to Test. Interestingly, AE-HCN-CNN has even better performance than HCN on Test, indicating that, with the CNN encoder, counterfeit OOD augmentation acts as an effective regularization. In contrast, AE-HCN-Indep failed to robustly detect OOD utterances, resulting in much lower numbers for both metrics on Test-OOD as well as hurting the performance on Test. This result indicates two crucial points: 1) the inherent difficulty of finding an appropriate threshold value without actually seeing OOD data; 2) the limitation of the models which do not consider context. For the first point, Figure FIGREF24 plots histograms of reconstruction scores for IND and OOD utterances of bAbI6 Test-OOD. If OOD utterances had been known a priori, the threshold should have been set to a much higher value than the maximum reconstruction score of IND training data (6.16 in this case).",
"For the second point, Table TABREF25 shows the search for the best threshold value for AE-HCN-Indep on the bAbI6 task when given actual OOD utterances (which is highly unrealistic for the real-world scenario). Note that the best performance achieved at 9 is still not as good as that of AE-HCN(-CNN). This implies that we can perform better OOD detection by jointly considering other context features.",
"Finally, we conduct a sensitivity analysis by varying counterfeit OOD probabilities. Table TABREF26 shows performances of AE-HCN-CNN on bAbI6 Test-OOD with different INLINEFORM0 values, ranging from 5% to 30%. The result indicates that our method manages to produce good performance without regard to the INLINEFORM1 value. This superior stability nicely contrasts with the high sensitivity of AE-HCN-Indep with regard to threshold values as shown in Table TABREF25 ."
],
[
"We proposed a novel OOD detection method that does not require OOD data without any restrictions by utilizing counterfeit OOD turns in the context of a dialog. We also release new dialog datasets which are three publicly available dialog corpora augmented with natural OOD turns to foster further research. In the presence of OOD utterances, our method outperforms state-of-the-art dialog models equipped with an OOD detection mechanism by a large margin — more than 17 points in Precision@K on average — while minimizing performance trade-off on in-domain test data. The detailed analysis sheds light on the difficulty of optimizing context-independent OOD detection and justifies the necessity of context-aware OOD handling models. We plan to explore other ways of scoring OOD utterances than autoencoders. For example, variational autoencoders or generative adversarial networks have great potential. We are also interested in using generative models to produce more realistic counterfeit user utterances."
]
],
"section_name": [
"Introduction",
"METHODS",
"HCN",
"AE-HCN",
"AE-HCN-CNN",
"Counterfeit Data Augmentation",
"DATASETS",
"EXPERIMENTAL SETUP AND EVALUATION",
"CONCLUSION"
]
} | {
"answers": [
{
"annotation_id": [
"f1c2a827810769a9cf34bc27c839132884fe45f2"
],
"answer": [
{
"evidence": [
"The goal of this paper is to propose a novel OOD detection method that does not require OOD data by utilizing counterfeit OOD turns in the context of a dialog. Most prior approaches do not consider dialog context and make predictions for each utterance independently. We will show that this independent decision leads to suboptimal performance even when actual OOD utterances are given to optimize the model and that the use of dialog context helps reduce OOD detection errors. To consider dialog context, we need to connect the OOD detection task with the overall dialog task. Thus, for this work, we build upon Hybrid Code Networks (HCN) BIBREF4 since HCNs achieve state-of-the-art performance in a data-efficient way for task-oriented dialogs, and propose AE-HCNs which extend HCNs with an autoencoder (Figure FIGREF8 ). Furthermore, we release new dialog datasets which are three publicly available dialog corpora augmented with OOD turns in a controlled way (exemplified in Table TABREF2 ) to foster further research.",
"The result is shown in Table TABREF23 . Since there are multiple actions that are appropriate for a given dialog context, we use per-utterance Precision@K as performance metric. We also report f1-score for OOD detection to measure the balance between precision and recall. The performances of HCN on Test-OOD are about 15 points down on average from those on Test, showing the detrimental impact of OOD utterances to such models only trained on in-domain training data. AE-HCN(-CNN) outperforms HCN on Test-OOD by a large margin about 17(20) points on average while keeping the minimum performance trade-off compared to Test. Interestingly, AE-HCN-CNN has even better performance than HCN on Test, indicating that, with the CNN encoder, counterfeit OOD augmentation acts as an effective regularization. In contrast, AE-HCN-Indep failed to robustly detect OOD utterances, resulting in much lower numbers for both metrics on Test-OOD as well as hurting the performance on Test. This result indicates two crucial points: 1) the inherent difficulty of finding an appropriate threshold value without actually seeing OOD data; 2) the limitation of the models which do not consider context. For the first point, Figure FIGREF24 plots histograms of reconstruction scores for IND and OOD utterances of bAbI6 Test-OOD. If OOD utterances had been known a priori, the threshold should have been set to a much higher value than the maximum reconstruction score of IND training data (6.16 in this case)."
],
"extractive_spans": [],
"free_form_answer": "AE-HCN outperforms by 17%, AE-HCN-CNN outperforms by 20% on average",
"highlighted_evidence": [
"Thus, for this work, we build upon Hybrid Code Networks (HCN) BIBREF4 since HCNs achieve state-of-the-art performance in a data-efficient way for task-oriented dialogs, and propose AE-HCNs which extend HCNs with an autoencoder (Figure FIGREF8 ). ",
"AE-HCN(-CNN) outperforms HCN on Test-OOD by a large margin about 17(20) points on average while keeping the minimum performance trade-off compared to Test. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"infinity"
],
"paper_read": [
"no"
],
"question": [
"By how much does their method outperform state-of-the-art OOD detection?"
],
"question_id": [
"01123a39574bdc4684aafa59c52d956b532d2e53"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
""
],
"topic_background": [
"unfamiliar"
]
} | {
"caption": [
"Fig. 1. The architecture of AE-HCN which is the same as HCN except for the autoencoder component.",
"Table 2. Data statistics. The numbers of distinct system actions are 58, 247, and 194 for bAbI6, GR, and GM, respectively.",
"Table 3. Evaluation results. P@K means Precision@K. OOD F1 denotes f1-score for OOD detection over utterances.",
"Table 4. Performances of AE-HCN-Indep on bAbI6 TestOOD with different thresholds.",
"Table 5. Performances of AE-HCN-CNN on bAbI6 Test-OOD with varying counterfeit OOD rates.",
"Fig. 2. Histograms of AE reconstruction scores for the bAbI6 test data. The histograms for other datasets follow similar trends."
],
"file": [
"2-Figure1-1.png",
"3-Table2-1.png",
"4-Table3-1.png",
"4-Table4-1.png",
"4-Table5-1.png",
"4-Figure2-1.png"
]
} | [
"By how much does their method outperform state-of-the-art OOD detection?"
] | [
[
"1905.10247-EXPERIMENTAL SETUP AND EVALUATION-1"
]
] | [
"AE-HCN outperforms by 17%, AE-HCN-CNN outperforms by 20% on average"
] | 865 |
1811.07684 | Efficient keyword spotting using dilated convolutions and gating | We explore the application of end-to-end stateless temporal modeling to small-footprint keyword spotting as opposed to recurrent networks that model long-term temporal dependencies using internal states. We propose a model inspired by the recent success of dilated convolutions in sequence modeling applications, allowing deeper architectures to be trained in resource-constrained configurations. Gated activations and residual connections are also added, following a similar configuration to WaveNet. In addition, we apply a custom target labeling that back-propagates loss from specific frames of interest, therefore yielding higher accuracy and only requiring detection of the end of the keyword. Our experimental results show that our model outperforms a max-pooling loss trained recurrent neural network using LSTM cells, with a significant decrease in false rejection rate. The underlying dataset - "Hey Snips" utterances recorded by over 2.2K different speakers - has been made publicly available to establish an open reference for wake-word detection. | {
"paragraphs": [
[
"Keyword spotting (KWS) aims at detecting a pre-defined keyword or set of keywords in a continuous stream of audio. In particular, wake-word detection is an increasingly important application of KWS, used to initiate an interaction with a voice interface. In practice, such systems run on low-resource devices and listen continuously for a specific wake word. An effective on-device KWS therefore requires real-time response and high accuracy for a good user experience, while limiting memory footprint and computational cost.",
"Traditional approaches in keyword spotting tasks involve Hidden Markov Models (HMMs) for modeling both keyword and background BIBREF0 , BIBREF1 , BIBREF2 . In recent years, Deep Neural Networks (DNNs) have proven to yield efficient small-footprint solutions, as shown first by the fully-connected networks introduced in BIBREF3 . More advanced architectures have been successfully applied to KWS problems, such as Convolutional Neural Networks (CNNs) exploiting local dependencies BIBREF4 , BIBREF5 . They have demonstrated efficiency in terms of inference speed and computational cost but fail at capturing large patterns with reasonably small models. Recent works have suggested RNN based keyword spotting using LSTM cells that can leverage longer temporal context using gating mechanism and internal states BIBREF6 , BIBREF7 , BIBREF8 . However, because RNNs may suffer from state saturation when facing continuous input streams BIBREF9 , their internal state needs to be periodically reset.",
"In this work we focus on end-to-end stateless temporal modeling which can take advantage of a large context while limiting computation and avoiding saturation issues. By end-to-end model, we mean a straight-forward model with a binary target that does not require a precise phoneme alignment beforehand. We explore an architecture based on a stack of dilated convolution layers, effectively operating on a broader scale than with standard convolutions while limiting model size. We further improve our solution with gated activations and residual skip-connections, inspired by the WaveNet style architecture explored previously for text-to-speech applications BIBREF10 and voice activity detection BIBREF9 , but never applied to KWS to our knowledge. In BIBREF11 , the authors explore Deep Residual Networks (ResNets) for KWS. ResNets differ from WaveNet models in that they do not leverage skip-connections and gating, and apply convolution kernels in the frequency domain, drastically increasing the computational cost.",
"In addition, the long-term dependency our model can capture is exploited by implementing a custom “end-of-keyword” target labeling, increasing the accuracy of our model. A max-pooling loss trained LSTM initialized with a cross-entropy pre-trained network is chosen as a baseline, as it is one of the most effective models taking advantage of longer temporal contexts BIBREF7 . The rest of the paper is organized in two main parts. Section \"System description\" describes the different components of our model as well as our labeling. Section \"Experiments\" focuses on the experimental setup and performance results obtained on a publicly available “Hey Snips” dataset."
],
[
"The acoustic features are 20-dimensional log-Mel filterbank energies (LFBEs), extracted from the input audio every 10ms over a window of 25ms. A binary target is used, see Section \"End-of-keyword labeling\" for more details about labeling. During decoding, the system computes smoothed posteriors by averaging the output of a sliding context window containing $w_{smooth}$ frames, a parameter chosen after experimental tuning. End-to-end models such as the one presented here do not require any post-processing step besides smoothing, as opposed to multi-class models such as BIBREF3 , BIBREF4 . Indeed, the system triggers when the smoothed keyword posterior exceeds a pre-defined threshold."
],
[
"WaveNet was initially proposed in BIBREF10 , as a generative model for speech synthesis and other audio generation tasks. It consists in stacked causal convolution layers wrapped in a residual block with gated activation units as depicted in Figure 1 .",
"Standard convolutional networks cannot capture long temporal patterns with reasonably small models due to the increase in computational cost yielded by larger receptive fields. Dilated convolutions skip some input values so that the convolution kernel is applied over a larger area than its own. The network therefore operates on a larger scale, without the downside of increasing the number of parameters. The receptive field $r$ of a network made of stacked convolutions indeed reads: $r = \\sum _i d_i (s_i - 1),$ ",
"where $d_i$ refers to the dilation rate ( $d_i=1$ for normal convolutions) and $s_i$ the filter size of the $i^{th}$ layer. Additionally, causal convolutions kernels ensure a causal ordering of input frames: the prediction emitted at time $t$ only depends on previous time stamps. It allows to reduce the latency at inference time.",
"As mentioned in BIBREF10 , gated activations units – a combination of tanh and sigmoid activations controlling the propagation of information to the next layer – prove to efficiently model audio signals. Residual learning strategies such as skip connections are also introduced to speed up convergence and address the issue of vanishing gradients posed by the training of models of higher depth. Each layer yields two outputs: one is directly fed to the next layer as usual, but the second one skips it. All skip-connections outputs are then summed into the final output of the network. A large temporal dependency, can therefore be achieved by stacking multiple dilated convolution layers. By inserting residual connections between each layer, we are able to train a network of 24 layers on relatively small amount of data, which corresponds to a receptive field of 182 frames or 1.83s. The importance of gating and residual connections is analyzed in Section 3.3.2."
],
[
"In addition to reducing the model size, dilated convolutions allow the network to run in a streaming fashion during inference, drastically reducing the computational cost. When receiving a new input frame, the corresponding posteriors are recovered using previous computations, kept in memory for efficiency purposes as described in Figure 2 . This cached implementation allows to reduce the amount of Floating Point Operations per Second (FLOPS) to a level suiting production requirements."
],
[
"Our approach consists in associating a target 1 to frames within a given time interval $\\Delta t$ before and after the end of the keyword. The optimal value for $\\Delta t$ is tuned on the dev set. Additionally, a masking scheme is applied, discarding background frames outside of the labeling window in positive samples. A traditional labeling approach, however, associates a target 1 to all frames aligned with the keyword. In this configuration, the model has a tendency to trigger as soon as the keyword starts, whether or not the sample contains only a fraction of the keyword. One advantage of our approach is that the network will trigger near the end of keyword, once it has seen enough context. Moreover, our labeling does not need any phoneme alignment, but only to detect the end of the keyword, which is easily obtained with a VAD system. Furthermore, thanks to masking, the precise frontiers of the labeling window are not learned, making the network more robust to labeling imprecisions. The relative importance of end-of-keyword labeling and masking are analyzed in Section UID18 ."
],
[
"The proposed approach is evaluated on a crowdsourced close-talk dataset. The chosen keyword is “Hey Snips” pronounced with no pause between the two words. The dataset contains a large variety of English accents and recording environments. Around 11K wake word utterances and 86.5K ( $\\sim $ 96 hours) negative examples have been recorded, see Table 1 for more details. Note that negative samples have been recorded in the same conditions than wake-word utterances, therefore arising from the same domain (speaker, hardware, environment, etc.). It thus prevents the model from discerning the two classes based on their domain-dependent acoustic features.",
"Positive data has been cleaned by automatically removing samples of extreme duration, or samples with repeated occurrences of the wake word. Positive dev and test sets have been manually cleaned to discard any mispronunciations of the wake word (e.g. “Hi Snips” or “Hey Snaips”), leaving the training set untouched. Noisy conditions are simulated by augmenting samples with music and noise background audio from Musan BIBREF12 . The positive dev and test datasets are augmented at 5dB of Signal-to-noise Ratio (SNR).",
"The full dataset and its metadata are available for research purposes. Although some keyword spotting datasets are freely available, such as the Speech Commands dataset BIBREF13 for voice commands classification, there is no equivalent in the specific wake-word detection field. By establishing an open reference for wake-word detection, we hope to contribute to promote transparency and reproducibility in a highly concurrent field where datasets are often kept private."
],
[
"The network consists in an initial causal convolution layer (filter size of 3) and 24 layers of gated dilated convolutions (filter size of 3). The 24 dilation rates are a repeating sequence of $\\lbrace 1, 2, 4, 8, 1, 2, 4, 8...\\rbrace $ . Residual connections are created between each layer and skip connections are accumulated at each layer and are eventually fed to a DNN followed by a softmax for classification as depicted in Figure 1 . We used projection layers of size 16 for residual connections and of size 32 for skip connections. The optimal duration of the end-of-keyword labeling interval as defined in Section \"End-of-keyword labeling\" is $\\Delta t = 160ms$ (15 frames before and 15 frames after the end of the keyword). The posteriors are smoothed over a sliding context window of $w_{smooth}=30$ frames, also tuned on the dev set.",
"The main baseline model is a LSTM trained with a max-pooling based loss initialized with a cross-entropy pre-trained network, as it is another example of end-to-end temporal model BIBREF7 . The idea of the max-pooling loss is to teach the network to fire at its highest confidence time by back-propagating loss from the most informative keyword frame that has the maximum posterior for the corresponding keyword. More specifically, the network is a single layer of unidirectional LSTM with 128 memory blocks and a projection layer of dimension 64, following a similar configuration to BIBREF7 but matching the same number of parameters than the proposed architecture (see Section UID15 ). 10 frames in the past and 10 frames in the future are stacked to the input frame. Standard frame labeling is applied, but with the frame masking strategy described in Section \"End-of-keyword labeling\" . The authors of BIBREF7 mentioned back-propagating loss only from the last few frames, but said that the LSTM network performed poorly in this setting. The same smoothing strategy is applied on an window $w_{smooth}=8$ frames, after tuning on dev data. For comparison, we also add as a CNN variant the base architecture trad-fpool3 from BIBREF4 , a multi-class model with 4 output labels (“hey”, “sni”, “ps”, and background). Among those proposed in BIBREF4 , this is the architecture with the lowest amount of FLOPS while having a similar number of parameters as the two other models studied here (see Section UID15 ).",
"The Adam optimization method is used for the three models with a learning rate of $10^{-3}$ for the proposed architecture, $10^{-4}$ for the CNN, and $5 \\cdot 10^{-5}$ for the LSTM baseline. Additionally, gradient norm clipping to 10 is applied. A scaled uniform distribution for initialization BIBREF14 (or “Xavier” initialization) yielded the best performance for the three models. We also note that the LSTM network is much more sensitive to the chosen initialization scheme."
],
[
"The performance of the three models is first measured by observing the False Rejection Rate (FRR) on clean and noisy (5dB SNR) positives samples at the operating threshold of 0.5 False Alarms per Hour (FAH) computed on the collected negative data. Hyper parameters are tuned on the dev set and results are reported on the test set. Table 2 displays these quantities as well as the number of parameters and multiplications per second performed during inference. The proposed architecture yields a lower FRR than the LSTM (resp. CNN) baseline with a 94% (resp. 95%) and 86% (resp. 88%) decrease in clean and noisy conditions. The number of parameters is similar for the three architectures, but the amount of FLOPS is higher by an order of magnitude for the CNN baseline while resulting in a poorer FRR in a noisy environment. Figure 3 provides the Detection Error Tradeoff (DET) curves and shows that the WaveNet model also outperforms the baselines on a whole range of triggering thresholds.",
"To assess the relative importance of some characteristics of the proposed architecture, we study the difference in FRR observed once each of them is removed separately, all things being equal. Table 3 shows that the end-of-keyword labeling is particularly helpful in improving the FRR at a fixed FAH, especially in noisy conditions. Masking background frames in positive samples also helps, but in a lower magnitude. Similarly to what is observed in BIBREF9 , gating contributes to improving the FRR especially in noisy conditions. We finally observed that removing either residual or skip connections separately has little effect on the performance. However, we could not properly train the proposed model without any of these connections. It seems to confirm that implementing at least one bypassing strategy is key for constructing deeper network architectures."
],
[
"This paper introduces an end-to-end stateless modeling for keyword spotting, based on dilated convolutions coupled with residual connections and gating encouraged by the success of the WaveNet architecture in audio generation tasks BIBREF10 , BIBREF9 . Additionally, a custom frame labeling is applied, associating a target 1 to frames located within a small time interval around the end of the keyword. The proposed architecture is compared against a LSTM baseline, similar to the one proposed in BIBREF7 . Because of their binary targets, both the proposed model and the LSTM baseline do not require any phoneme alignment or post-processing besides posterior smoothing. We also added a multi-class CNN baseline BIBREF4 for comparison. We have shown that the presented WaveNet model significantly reduces the false rejection rate at a fixed false alarm rate of 0.5 per hour, in both clean and noisy environments, on a crowdsourced dataset made publicly available for research purposes. The proposed model seems to be very efficient in the specific domain defined by this dataset and future work will focus on domain adaptation in terms of recording hardware, accents, or far-field settings, to be deployed easily in new environments."
],
[
"We thank Oleksandr Olgashko for his contribution in developing the training framework. We are grateful to the crowd of contributors who recorded the dataset. We are indebted to the users of the Snips Voice Platform for valuable feedback."
]
],
"section_name": [
"Introduction",
"System description",
"Neural network architecture",
"Streaming inference",
"End-of-keyword labeling",
"Open dataset",
"Experimental setup",
"Results",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"f243d0d355fcfb68db9964c9c04fb66450511347"
],
"answer": [
{
"evidence": [
"In this work we focus on end-to-end stateless temporal modeling which can take advantage of a large context while limiting computation and avoiding saturation issues. By end-to-end model, we mean a straight-forward model with a binary target that does not require a precise phoneme alignment beforehand. We explore an architecture based on a stack of dilated convolution layers, effectively operating on a broader scale than with standard convolutions while limiting model size. We further improve our solution with gated activations and residual skip-connections, inspired by the WaveNet style architecture explored previously for text-to-speech applications BIBREF10 and voice activity detection BIBREF9 , but never applied to KWS to our knowledge. In BIBREF11 , the authors explore Deep Residual Networks (ResNets) for KWS. ResNets differ from WaveNet models in that they do not leverage skip-connections and gating, and apply convolution kernels in the frequency domain, drastically increasing the computational cost.",
"Standard convolutional networks cannot capture long temporal patterns with reasonably small models due to the increase in computational cost yielded by larger receptive fields. Dilated convolutions skip some input values so that the convolution kernel is applied over a larger area than its own. The network therefore operates on a larger scale, without the downside of increasing the number of parameters. The receptive field $r$ of a network made of stacked convolutions indeed reads: $r = \\sum _i d_i (s_i - 1),$"
],
"extractive_spans": [],
"free_form_answer": "Similar to standard convolutional networks but instead they skip some input values effectively operating on a broader scale.",
"highlighted_evidence": [
"We explore an architecture based on a stack of dilated convolution layers, effectively operating on a broader scale than with standard convolutions while limiting model size.",
"Standard convolutional networks cannot capture long temporal patterns with reasonably small models due to the increase in computational cost yielded by larger receptive fields. Dilated convolutions skip some input values so that the convolution kernel is applied over a larger area than its own. The network therefore operates on a larger scale, without the downside of increasing the number of parameters."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"two"
],
"paper_read": [
"no"
],
"question": [
"What are dilated convolutions?"
],
"question_id": [
"954c4756e293fd5c26dc50dc74f505cc94b3f8cc"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
""
],
"topic_background": [
"unfamiliar"
]
} | {
"caption": [
"Fig. 2: Dilated convolution layers with an exponential dilation rate of 1, 2, 4, 8 and filter size of 2. Blue nodes are input frame vectors, orange nodes are cached intermediate vectors used for streaming inference, green nodes are output vectors which are actually computed. refers to background.",
"Fig. 1: WaveNet architecture [11].",
"Table 1: Dataset statistics.",
"Table 2: Number of parameters, multiplications per second, and false rejection rate in percent on clean (FRR clean) and 5dB SNR noisy (FRR noisy) positive samples, at 0.5 false alarms per hour.",
"Table 3: Variation in FRR (absolute) for the proposed architecture when removing different characteristics separately, all things being equal.",
"Fig. 3: DET curves for the proposed architecture (green) compared to the LSTM (dotted yellow) and CNN (dashed blue) baselines in clean (a) and noisy (b) environments."
],
"file": [
"2-Figure2-1.png",
"2-Figure1-1.png",
"3-Table1-1.png",
"4-Table2-1.png",
"4-Table3-1.png",
"4-Figure3-1.png"
]
} | [
"What are dilated convolutions?"
] | [
[
"1811.07684-Introduction-2"
]
] | [
"Similar to standard convolutional networks but instead they skip some input values effectively operating on a broader scale."
] | 866 |
1911.08976 | Red Dragon AI at TextGraphs 2019 Shared Task: Language Model Assisted Explanation Generation | The TextGraphs-13 Shared Task on Explanation Regeneration asked participants to develop methods to reconstruct gold explanations for elementary science questions. Red Dragon AI's entries used the language of the questions and explanation text directly, rather than constructing a separate graph-like representation. Our leaderboard submission placed us 3rd in the competition, but we present here three methods of increasing sophistication, each of which scored successively higher on the test set after the competition close. | {
"paragraphs": [
[
"The Explanation Regeneration shared task asked participants to develop methods to reconstruct gold explanations for elementary science questions BIBREF1, using a new corpus of gold explanations BIBREF2 that provides supervision and instrumentation for this multi-hop inference task.",
"Each explanation is represented as an “explanation graph”, a set of atomic facts (between 1 and 16 per explanation, drawn from a knowledge base of 5,000 facts) that, together, form a detailed explanation for the reasoning required to answer and explain the resoning behind a question.",
"Linking these facts to achieve strong performance at rebuilding the gold explanation graphs requires methods to perform multi-hop inference - which has been shown to be far harder than inference of smaller numbers of hops BIBREF3, particularly for the case here, where there is considerable uncertainty (at a lexical level) of how individual explanations logically link somewhat `fuzzy' graph nodes."
],
[
"The WorldTree corpus BIBREF2 is a new dataset is a comprehensive collection of elementary science exam questions and explanations. Each explanation sentence is a fact that is related to science or common sense, and is represented in a structured table that can be converted to free-text. For each question, the gold explanations have lexical overlap (i.e. having common words), and are denoted as having a specific explanation role such as CENTRAL (core concepts); GROUNDING (linking core facts to the question); and LEXICAL GLUE (linking facts which may not have lexical overlap)."
],
[
"As described in the introduction, the general task being posed is one of multi-hop inference, where a number of `atomic fact' sentences must be combined to form a coherent chain of reasoning to solve the elementary science problem being posed.",
"These explanatory facts must be retrieved from a semi-structured knowledge base - in which the surface form of the explanation is represented as a series of terms gathered by their functional role in the explanation.",
"For instance, for the explanation “Grass snakes live in grass” is encoded as “[Grass snakes] [live in] [grass]”, and this explanation is found in a PROTO-HABITATS table. However, in the same table there are also more elaborate explanations, for example : “Mice live in in holes in the ground in fields / in forests.” is expressed as : “[mice] [live in] [in holes in the ground] [in fields OR in forests]”. And more logically complex : “Most predators live in/near the same environment as their prey.” being expressed as : “[most] [predators] [live in OR live near] [the same environment as their prey]”.",
"So, whereas the simpler explanations fit in the usual Knowledge-Base triples paradigm, the more complex ones are much more nuanced about what actually constitutes a node, and how reliable the arcs are between them. Indeed, there is also a collection of if/then explanations, including examples such as : “[if] [something] [has a] [positive impact on] [something else] [then] [increasing] [the] [amount of] [that something] [has a] [positive impact on] [that something else]” - where the explanation has meta-effect on the graph itself, and includes `unbound variables'."
],
[
"In this work, we used the pure textual form of each explanation, problem and correct answer, rather than using the semi-structured form given in the column-oriented files provided in the dataset. For each of these we performed Penn-Treebank tokenisation, followed by lemmatisation using the lemmatisation files provided with the dataset, and then stop-word removal.",
"Concerned by the low performance of the Python Baseline method (compared to the Scala Baseline, which seemed to operate using an algorithm of similar `strength'), we identified an issue in the organizer's evaluation script where predicted explanations that were missing any of the gold explanations were assigned a MAP score of zero. This dramatically penalised the Python Baseline, since it was restricted to only returning 10 lines of explanation. It also effectively forces all submissions to include a ranking over all explanations - a simple fix (with the Python Baseline rescored in Table 1) will be submitted via GitHub. This should also make the upload/scoring process faster, since only the top $\\scriptstyle \\sim $1000 explanation lines meaningfully contribute to the rank scoring."
],
[
"Although more classic graph methods were initially attempted, along the lines of BIBREF4, where the challenge of semantic drift in multi-hop inference was analysed and the effectiveness of information extraction methods was demonstrated, the following 3 methods (which now easily surpass the score of our competition submission) were ultimately pursued due to their simplicity/effectiveness."
],
[
"As mentioned above, the original TF-IDF implementation of the provided Python baseline script did not predict a full ranking, and was penalized by the evaluation script. When this issue was remedied, its MAP score rose to 0.2140.",
"However, there are three main steps that significantly improve the performance of this baseline:",
"The original question text included all the answer choices, only one of which was correct (while the others are distractors). Removing the distractors resulted in improvement;",
"The TF-IDF algorithm is very sensitive to keywords. Using the provided lemmatisation set and NLTK for tokenisation helped to align the different forms of the same keyword and reduce the vocabulary size needed;",
"Stopword removal gave us approximately 0.04 MAP improvement throughout - removing noise in the texts that was evidently `distracting' for TF-IDF.",
"As shown in Table 2, these optimisation steps increased the Python Baseline score significantly, without introducing algorithmic complexity."
],
[
"While graph methods have shown to be effective for multi-hop question answering, the schema in the textgraphs dataset is unconventional (as illustrated earlier). To counter this, the previous TF-IDF method was extended to simulate jumps between explanations, inspired by graph methods, but without forming any actual graphs:",
"TF-IDF vectors are pre-computed for all questions and explanation candidates;",
"For each question, the closest explanation candidate by cosine proximity is selected, and their TF-IDF vectors are aggregated by a max operation;",
"The next closest (unused) explanation is selected, and this process was then applied iteratively up to maxlen=128 times, with the current TF-IDF comparison vector progressively increasing in expressiveness. At each iteration, the current TF-IDF vector was down-scaled by an exponential factor of the length of the current explanation set, as this was found to increase development set results by up to +0.0344.",
"By treating the TF-IDF vector as a representation of the current chain of reasoning, each successive iteration builds on the representation to accumulate a sequence of explanations.",
"The algorithm outlined above was additionally enhanced by adding a weighting factor to each successive explanation as it is added to the cumulative TF-IDF vector. Without this factor, the effectiveness was lower because the TF-IDF representation itself was prone to semantic drift away from the original question. Hence, each successive explanation’s weight was down-scaled, and this was shown to work well."
],
[
"Large pretrained language models have been proven effective on a wide range of downstream tasks, including multi-hop question answering, such as in BIBREF5 on the RACE dataset, and BIBREF6 which showed that large finetuned language models can be beneficial for complex question answering domains (especially in a data-constrained context).",
"Inspired by this, we decided to adapt BERT BIBREF7 - a popular language model that has produced competitive results on a variety of NLP tasks - for the explanation generation task.",
"For our `BERT Re-ranking' method, we attach a regression head to a BERT Language Model. This regression head is then trained to predict a relevance score for each pair of question and explanation candidate. The approach is as follows :",
"Calculate a TF-IDF relevance score for every tokenised explanation against the tokenised `[Problem] [CorrectAnswer] [Gold explanations]' in the training set. This will rate the true explanation sentences very highly, but also provide a `soft tail' of rankings across all explanations;",
"Use this relevance score as the prediction target of the BERT regression head, where BERT makes its predictions from the original `[Problem] [CorrectAnswer]' text combined with each potential Explanation text in turn (over the training set);",
"At prediction time, the explanations are ranked according to their relevance to `[Problem] [CorrectAnswer]' as predicted by the BERT model's output.",
"We cast the problem as a regression task (rather than a classification task), since treating it as a task to classify which explanations are relevant would result in an imbalanced dataset because the gold explanation sentences only comprise a small proportion of the total set. By using soft targets (given to us by the TF-IDF score against the gold answers in the training set), even explanations which are not designated as “gold” but have some relevance to the gold paragraph can provide learning signal for the model.",
"Due to constraints in compute and time, the model is only used to rerank the $top_n=64$ predictions made by the TF-IDF methods.",
"The BERT model selected was of “Base” size with 110M parameters, which had been pretrained on BooksCorpus and English Wikipedia. We did not further finetune it on texts similar to the TextGraphs dataset prior to regression training. In other tests, we found that the “Large” size model did not help improve the final MAP score."
],
[
"The authors' initial attempts at tackling the Shared Task focussed on graph-based methods. However, as identified in BIBREF3, the uncertainty involved with interpreting each lexical representation, combined with the number of hops required, meant that this line of enquiry was put to one side.",
"While the graph-like approach is clearly attractive from a reasoning point of view (and will be the focus of future work), we found that using purely the textual aspects of the explanation database bore fruit more readily. Also. the complexity of the resulting systems could be minimised such that the description of each system could be as consise as possible.",
"Specifically, we were able to optimise the TF-IDF baseline to such an extent that our `Optimised TF-IDF' would now place 2nd in the submission rankings, even though it used no special techniques at all.",
"The Iterated TF-IDF method, while more algorithmically complex, also does not need any training on the data before it is used. This shows how effective traditional text processing methods can be, when used strategically.",
"The BERT Re-ranking method, in contrast, does require training, and also applies one of the more sophisticated Language Models available to extract more meaning from the explanation texts.",
"Figure 1 illustrates how there is a clear trend towards being able to build longer explanations as our semantic relevance methods become more sophisticated.",
"There are also clear trends across the data in Table 3 that show that the more sophisticated methods are able to bring more CENTRAL explanations into the mix, even though they are more `textually distant' from the original Question and Answer statements. Surprisingly, this is at the expense of some of the GROUNDING statements.",
"Since these methods seem to focus on different aspects of solving the ranking problem, we have also explored averaging the ranks they assign to the explanations (essentially ensembling their decisions). Empirically, this improves performance at the expense of making the model more obscure."
],
[
"Despite our apparent success with less sophisticated methods, it seems clear that more explicit graph-based methods appears will be required to tackle the tougher questions in this dataset (for instance those that require logical deductions, as illustrated earlier, or hypothetical situations such as some `predictor-prey equilibrium' problems). Even some simple statements (such as `Most predators ...') present obstacles to existing Knowledge-Base representations.",
"In terms of concrete next steps, we are exploring the idea of creating intermediate forms of representation, where textual explanations can be linked using a graph to plan out the logical steps. However these grander schemes suffer from being incrementally less effective than finding additional `smart tricks' for existing methods!",
"In preparation, we have begun to explore doing more careful preprocessing, notably :",
"Exploiting the structure of the explanation tables individually, since some columns are known to be relationship-types that would be suitable for labelling arcs between nodes in a typical Knowledge Graph setting;",
"Expanding out the conjunction elements within the explanation tables. For instance in explanations like “[coral] [lives in the] [ocean OR warm water]”, the different sub-explanations “(Coral, LIVES-IN, Ocean)” and “(Coral, LIVES-IN, WarmWater)” can be generated, which are far closer to a `graph-able' representation;",
"Better lemmatisation : For instance `ice cube' covers both `ice' and `ice cube' nodes. We need some more `common sense' to cover these cases.",
"Clearly, it is early days for this kind of multi-hop inference over textual explanations. At this point, we have only scratched the surface of the problem, and look forward to helping to advance the state-of-the-art in the future."
],
[
"The authors would like to thank Google for access to the TFRC TPU program which was used in training and fine-tuning models during experimentation for this paper."
]
],
"section_name": [
"Introduction",
"Introduction ::: Dataset Review",
"Introduction ::: Problem Review",
"Preliminary Steps",
"Model Architectures",
"Model Architectures ::: Optimized TF-IDF",
"Model Architectures ::: Iterated TF-IDF",
"Model Architectures ::: BERT Re-ranking",
"Discussion",
"Discussion ::: Further Work",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"f39b4f8aa0ac7b057672c649aa285175746159b4"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: MAP scoring of new methods. The timings are in seconds for the whole dev-set, and the BERT Re-ranking figure includes the initial Iterated TF-IDF step."
],
"extractive_spans": [],
"free_form_answer": "Optimized TF-IDF, iterated TF-IDF, BERT re-ranking.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: MAP scoring of new methods. The timings are in seconds for the whole dev-set, and the BERT Re-ranking figure includes the initial Iterated TF-IDF step."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
""
],
"paper_read": [
""
],
"question": [
"what are the three methods presented in the paper?"
],
"question_id": [
"dac2591f19f5bbac3d4a7fa038ff7aa09f6f0d96"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
""
],
"topic_background": [
""
]
} | {
"caption": [
"Table 1: Base MAP scoring - where the Python Baseline1e9 is the same as the original Python Baseline, but with the evaluate.py code updated to assume missing explanations have rank of 109",
"Table 2: MAP scoring of new methods. The timings are in seconds for the whole dev-set, and the BERT Re-ranking figure includes the initial Iterated TF-IDF step.",
"Figure 1: Mean MAP score against gold explanation lengths",
"Table 3: Contribution of Explanation Roles - Dev-Set MAP per role (computed by filtering explanations of other roles out of the gold explanation list then computing the MAP as per normal)"
],
"file": [
"1-Table1-1.png",
"2-Table2-1.png",
"4-Figure1-1.png",
"4-Table3-1.png"
]
} | [
"what are the three methods presented in the paper?"
] | [
[
"1911.08976-2-Table2-1.png"
]
] | [
"Optimized TF-IDF, iterated TF-IDF, BERT re-ranking."
] | 869 |
1812.01704 | Impact of Sentiment Detection to Recognize Toxic and Subversive Online Comments | The presence of toxic content has become a major problem for many online communities. Moderators try to limit this problem by implementing more and more refined comment filters, but toxic users are constantly finding new ways to circumvent them. Our hypothesis is that while modifying toxic content and keywords to fool filters can be easy, hiding sentiment is harder. In this paper, we explore various aspects of sentiment detection and their correlation to toxicity, and use our results to implement a toxicity detection tool. We then test how adding the sentiment information helps detect toxicity in three different real-world datasets, and incorporate subversion to these datasets to simulate a user trying to circumvent the system. Our results show sentiment information has a positive impact on toxicity detection against a subversive user. | {
"paragraphs": [
[
"Online communities abound today, forming on social networks, on webforums, within videogames, and even in the comments sections of articles and videos. While this increased international contact and exchange of ideas has been a net positive, it has also been matched with an increase in the spread of high-risk and toxic content, a category which includes cyberbullying, racism, sexual predation, and other negative behaviors that are not tolerated in society. The two main strategies used by online communities to moderate themselves and stop the spread of toxic comments are automated filtering and human surveillance. However, given the sheer number of messages sent online every day, human moderation simply cannot keep up, and either leads to a severe slowdown of the conversation (if messages are pre-moderated before posting) or allows toxic messages to be seen and shared thousands of times before they are deleted (if they are post-moderated after being posted and reported). In addition, human moderation cannot scale up easily to the number of messages to monitor; for example, Facebook has a team of 20,000 human moderators, which is both massive compared to the total of 25,000 other employees in the company, and minuscule compared to the fact its automated algorithms flagged messages that would require 180,000 human moderators to review. Keyword detection, on the other hand, is instantaneous, scales up to the number of messages, and prevents toxic messages from being posted at all, but it can only stop messages that use one of a small set of denied words, and, are thus fairly easy to circumvent by introducing minor misspellings (i.e. writing \"kl urself\" instead of \"kill yourself\"). In BIBREF0 , the authors show how minor changes can elude even complex systems. These attempts to bypass the toxicity detection system are called subverting the system, and toxic users doing it are referred to as subversive users.",
"In this paper, we consider an alternative strategy for toxic message filtering. Our intuition is that, while toxic keywords can easily be disguised, the toxic emotional tone of the message cannot. Consequently, we will study the correlation between sentiment and toxicity and its usefulness for toxic message detection both in subversive and non-subversive contexts.",
"The rest of this paper is structured as follows. After a review of the relevant literature in the next section, we will consider the problem of sentiment detection in online messages in Section SECREF3 . Next, we will study the measure of toxicity and its correlation to message sentiment in Section SECREF4 . Finally, we will draw some concluding remarks in Section SECREF5 ."
],
[
"Given the limitations of human and keyword-based toxicity detection systems mentioned previously, several authors have studied alternative means of detecting toxicity. In one of the earliest works on the detection of hate speech, the authors of BIBREF1 used n-grams enhanced by part-of-speech information as features to train an SVM classifier to accurately pick out anti-semitic online messages. Following a similar idea, the authors of BIBREF2 conducted a study of the usefulness of various linguistic features to train a machine learning algorithm to pick out hate speech. They found that the most useful single feature was character n-grams, followed closely by word n-grams. However, it was a combination of all their features (n-grams, features of language, features of syntax, and word embedding vectors) that achieved the highest performance. The authors of BIBREF3 studied hate speech through the detection of othering language. They built a custom lexicon of pronouns and semantic relationships in order to capture the linguistic differences when describing the in-group and out-group in messages, and trained a word embedding model on that data.",
"Hate speech is not the only form of toxicity that has been studied. In BIBREF4 , the authors studied cyberbullying. They developed a list of 300 \"bad\" words sorted in five levels of severity. Next, they used the number and density of \"bad\" words found in each online message as the features to train a set of machine learning systems. The authors of BIBREF5 also used words as featured in two systems, this time to detect sexual predators. One used the TFxIDF values of the words of the text to train a single-class SVM classifier, and the other used a bag-of-words vector of the text as input to a deep neural network. The authors found that the latter system offered the better performance in their experiments.",
"Recently, deep learning has become very popular for NLP applications, and pre-trained word embeddings have been shown to be very effective in most text-based neural network applications. In BIBREF6 , four different deep learning models were implemented and shown to outperform benchmark techniques for cyberbullying detection on three different datasets. In BIBREF7 , a deep neural network taking a word embedding vector as input was used to detect cyberbullying on Twitter.",
"It thus appears from the related literature that authors have tried a variety of alternative features to automatically detect toxic messages without relying strictly on keyword detection. However, sentiment has rarely been considered. It was one of the inputs of the deep neural network of BIBREF7 , but the paper never discussed its importance or analyzed its impact. The authors of BIBREF8 conducted the first study of cyberbullying in Dutch, and considered several features, including a subjectivity keyword lexicon. They found its inclusion helped improve results, but that a more sophisticated source of information than simple keyword detection was required. And the study of BIBREF9 used the sentiment of messages, as measured by the SentiStrength online system, as one of several features to detect cyberbullying messages. However, an in-dept analysis of how sentiment can benefit toxicity detection has not been done in any of these papers, and a study of the use of sentiment in a subversive context has never been done."
],
[
"Sentiment detection, or the task of determining whether a document has a positive or negative tone, has been frequently studied in the literature. It is usually done by using a sentiment lexicon that either classifies certain words as positive or negative, or quantifies their level of positivity or negativity. We decided to consider six such lexicons:",
"SentiWordNet is a widely-used resource for sentiment mining. It is based on WordNet, and assigns three scores to each synset, namely positivity, negativity, and objectivity, with the constraint that the sum of all three must be 1. Using this lexicon requires a bit of preprocessing for us, since the same word can occur in multiple different synsets with different meanings and therefore different scores. Since picking out the intended meaning and synset of a polysemous word found in a message is beyond our scope, we instead chose to merge the different meanings and compute a weighted average of the scores of the word. The weights are the ranks of the synsets, which correspond to the popularity of that meaning of the word in documents. The average score equation is : DISPLAYFORM0 ",
"where INLINEFORM0 is the number of times the word occurs with the same part of speech. We compute the average positivity and negativity scores, but not the objectivity scores, since they are not useful for our purpose and since they are simply the complement of the other two. This allows us to extract 155,287 individual words from the lexicon, with a positivity and negativity score between 0 and 1 for each. We should note that SentiWordNet differentiates a word based on part-of-speech, and we maintain this distinction in our work",
"Afinn is a lexicon of 3,382 words that are rated between -5 (maximum negativity) and 5 (maximum positivity). To match SentiWordNet, we split this score into positivity and negativity scores between 0 and 1. For example, a word with a INLINEFORM0 score was changed to have a positive score of 0 and a negative score of INLINEFORM1 .",
"Bing Liu compiled lists of 6,789 positive or negative words. Given no other information, we assigned each word in the positive list a positivity score of 1 and a negativity score of 0, and vice-versa for the negative-list words.",
"General Inquirer is a historically-popular lexicon of 14,480 words, though only 4,206 of them are tagged as either positive or negative. As for the Bing Liu lexicon, we assigned binary positive and negative scores to each word that was tagged as positive or negative.",
"Subjectivity Clues extends the sentiment tags of the General Inquirer up to 8,222 words using a dictionary and thesaurus. It also adds a binary strength level (strong or weak) to the polarity information. We merged polarity and strength as a measure of 0.5 and 1 for weak or strong positivity or negativity.",
"NRC has a list of 14,182 words that are marked as associated (1) or not associated (0) with 8 emotions (anger, fear, anticipation, trust, surprise, sadness, joy, disgust) and two sentiments (negative and positive). We transform this association into binary positive and negative scores in the same way we did for Bing Liu and General Inquirer.",
"All six of these lexicons have limitations, which stem from their limited vocabulary and the ambiguity of the problem. Indeed, despite being thousands of words each and covering the same subject and purpose, our six lexicons have only 394 words in common, indicating that each is individually very incomplete compared to the others. And we can easily find inconsistencies between the ratings of words, both internally within each lexicon and externally when we compare the same words between lexicons. Table TABREF16 illustrate some of these inconsistencies: for instance, the word \"helpless\" is very negative in SentiWordNet but less so in Afinn and Subjectivity Clues, while the word \"terrorize\" is more strongly negative in the latter two resources but less negative (and even a bit positive) in SentiWordNet. Likewise, the word \"joke\" is strongly positive, weakly positive, or even negative, depending on the lexicon used, and the word \"merry\" is more positive than \"joke\" according to every lexicon except SentiWordnet, which rates it equally positive and negative. By contrast the word \"splendid\" has the same positivity values as \"merry\" in all lexicons except SentiWordnet, where it has the highest possible positivity score. In a longer document, such as the customer reviews these lexicons are typically used on BIBREF10 , BIBREF11 , BIBREF12 , these problems are minor: the abundance and variety of vocabulary in the text will insure that the correct sentiment emerges overall despite the noise these issues cause. This is not true for the short messages of online conversations, and it has forced some authors who study the sentiments of microblogs to resort to creating or customizing their own lexicons BIBREF13 . This, incidentally, is also why we could not simply use an existing sentiment classifier. We will instead opt to combine these lexicons into a more useful resource."
],
[
"The first preprocessing step is to detect the presence and scope of negations in a message. Negations have an important impact; the word \"good\" may be labeled positive in all our lexicons, but its actual meaning will differ in the sentences \"this movie is good\" and \"this movie is not good\". We thus created a list of negation keywords by combining together the lists of the negex algorithm and of BIBREF14 , filtering out some irrelevant words from these lists, and adding some that were missing from the lists but are found online.",
"Next, we need to determine the scope of the negation, which means figuring out how many words in the message are affected by it. This is the challenge of, for example, realizing that the negation affects the word \"interesting\" in \"this movie is not good or interesting\" but not in \"this movie is not good but interesting\". We considered two algorithms to detect the scope of negations. The first is to simply assume the negation affects a fixed window of five words after the keyword BIBREF15 , while the second discovers the syntactic dependencies in the sentence in order to determine precisely which words are affected BIBREF16 .",
"We tested both algorithms on the SFU review corpus of negation and speculation. As can be seen in Table TABREF21 the dependency algorithm gave generally better results, and managed to find the exact scope of the negation in over 43% of sentences. However, that algorithm also has a larger standard deviation in its scope, meaning that when it fails to find the correct scope, it can be off by quite a lot, while the fixed window is naturally bounded in its errors. Moreover, the increased precision of the dependencies algorithm comes at a high processing cost, requiring almost 30 times longer to analyze a message as the fixed window algorithm. Given that online communities frequently deal with thousands of new messages every second, efficiency is a major consideration, and we opted for the simple fixed window algorithm for that reason.",
"The second preprocessing step is to detect sentiment-carrying idioms in the messages. For example, while the words \"give\" and \"up\" can both be neutral or positive, the idiom \"give up\" has a clear negative sentiment. Several of these idioms can be found in our lexicons, especially SentiWordNet (slightly over INLINEFORM0 ). We detect them in our messages and mark them so that our algorithm will handle them as single words going forward.",
"Finally, we use the NLTK wordpunkt_tokenizer to split sentences into words, and the Stanford fasterEnglishPOSTagger to get the part-of-speech of each word. Since our lexicons contain only four parts-of-speech (noun, verb, adverb, and adjective) and Stanford's tagger has more than 30 possible tags, we manually mapped each tag to one of the four parts-of-speech (for example, \"verb, past participle\" maps to \"verb\")."
],
[
"Once every word has a positivity and a negativity score, we can use them to determine the sentiment of an entire message. We do this by computing separately the sum of positive scores and of negative scores of words in the message, and subtracting the negative total from the positive total. In this way, a score over 0 means a positive message, and a score under 0 means a negative message. We consider two alternatives at this point: one in which we sum the sentiment value of all words in the sentence, and one where we only sum the sentiment value of the top-three words with the highest scores for each polarity. We label these \"All words\" and \"Top words\" in our results. The impact of this difference is felt when we consider a message with a few words with a strong polarity and a lot of words with a weak opposite polarity; in the \"Top words\" scheme these weak words will be ignored and the strong polarity words will dictate the polarity of the message, while in the \"All words\" scheme the many weak words can sum together to outweigh the few strong words and change the polarity of the message.",
"We optionally take negations into account in our sentiment computation. When a word occurs in the word window of a negation, we flip its positivity and negativity scores. In other words, instead of adding its positivity score to the positivity total of the sentence, we added its negativity score, and the other way round for the negativity total. Experiments where we do that are labeled \"Negativity\" in our results.",
"Finally, we optionally incorporate word weights based on their frequency in our datasets. When applied, the score of each word is multiplied by a frequency modifier, which we adapted from BIBREF10 : DISPLAYFORM0 ",
"where INLINEFORM0 is the number of times the word appears in a dataset, and INLINEFORM1 is the number of times the most frequent word appears in that dataset. Experiments using this frequency modifier are labeled \"Frequency\" in our results."
],
[
"Our experiments have four main objectives: (1) to determine whether the \"All words\" or the \"Top words\" strategy is preferable; (2) to determine whether the inclusion of \"Negation\" and \"Frequency\" modifiers is useful; (3) to determine which of the six lexicons is most accurate; and (4) to determine whether a weighted combination of the six lexicons can outperform any one lexicon.",
"To conduct our experiments, we used the corpus of annotated news comments available from the Yahoo Webscope program. The comments in this dataset are annotated by up to three professional, trained editors to label various attributes, including type, sentiment and tone. Using these three attributes, we split the dataset into two categories, sarcastic and non-sarcastic, and then again into five categories, clear negative, slight negative, neutral, slight positive, and clear positive. Finally, we kept only the non-sarcastic comments where all annotators agreed to reduce noise. This gives us a test corpus of 2,465 comments.",
"To evaluate our results, we compute the sentiment score of each comment in our test corpus using our various methods, and we then compute the average sentiment score of comments in each of the five sentiment categories. For ease of presentation, we give a simplified set of results in Table TABREF26 , with only the average score of the two negative and the two positive labels combined, along with the overlap of the two distributions. The overlap is obtained by taking two normal distributions with the the means and standard deviations of the positive and the negative sets, and calculating the area in common under both curves. It gives us a measure of the ambiguous region where comments may be positive or negative. A good sentiment classifier will thus have very distant positive and negative scores and a very low overlap.",
"These results show that there are important differences between the lexicons. Three of the six are rather poor at picking out negative sentiments, namely Subjectivity Clues (where negative sentences are on average detected as more positive than the positive sentences), General Inquirer, and NRC. This bias for positivity is an issue for a study on toxicity, which we expect to be expressed using negative sentiments. The other three lexicons give a good difference between positive and negative sentences. For these three lexicons, we find that using All words increases the gap between positive and negative sentence scores but greatly increases the standard deviation of each sentiment class, meaning the sentiment of the messages becomes ambiguous. On the other hand, using Top words reduces the overlap between the distributions and thus gives a better separation of positive and negative sentiments. And while adding frequency information or negations does not cause a major change in the results, it does give a small reduction in overlap.",
"To study combinations of lexicons, we decided to limit our scope to SentiWordNet, Afinn, and Bing Liu, the three lexicons that could accurately pick out negative sentiments, and on the Top words strategy. We consider three common strategies to combine the results of independent classifiers: majority voting, picking the one classifier with the maximum score (which is assumed to be the one with the highest confidence in its classification), and taking the average of the scores of all three classifiers. For the average, we tried using a weighted average of the lexicons and performed a grid search to find the optimal combination. However, the best results were obtained when the three lexicons were taken equally. For the majority vote, we likewise take the average score of the two or three classifiers in the majority sentiment.",
"Table TABREF27 presents the results we obtained with all three strategies. It can be seen that combining the three classifiers outperforms taking any one classifier alone, in the sense that it creates a wider gap between the positive and negative sentences and a smaller overlap. It can also be seen that the addition of negation and frequency information gives a very small improvement in the results in all three cases. Comparing the three strategies it can be seen that the maximum strategy is the one with the biggest gap in between positive and negative distribution, which was to be expected since the highest positive or negative sentiment is selected each time while it gets averaged out in the other two classifiers. However, the average score strategy creates a significantly smaller standard deviation of sentiment scores and a lower overlap between the distributions of positive and negative sentences. For that reason, we find the average score to be the best of the three combination strategies.",
"In all cases, we find that most misclassified sentences in our system are due to the lack of insults in the vocabulary. For example, none of the lexicons include colorful insults like \"nut job\" and \"fruitcake\", so sentences where they appear cannot be recognized as negative. Likewise, some words, such as the word \"gay\", are often used as insults online, but have positive meanings in formal English; this actually leads to labeling insult messages as positive sentences. This issue stems from the fact that these lexicons were designed for sentiment analysis in longer and more traditional documents, such as customer reviews and editorials. One will seldom, if ever, find insults (especially politically-incorrect ones such as the previous examples) in these documents."
],
[
"The main contribution of this paper is to study how sentiment can be used to detect toxicity in subversive online comments. To do this, we will use three new test corpora:"
],
[
"Our first experiment consists in computing the sentiment of each message in each of our three test corpora, and verifying how they correlate with the different toxicity scores of each of the corpora. Following the results we found in Section SECREF3 , we used the best three lexicons (SentiWordNet, Afinn, and Bing Liu), combined them by taking the average score, and used our four algorithm variations. The results are presented in Table TABREF37 . It can be seen that there is a clear negative correlation between toxicity and sentiment in the messages, as expected. Our results also show that using words only or including frequency information makes the relationship clearer, while adding negations muddies it. These results are consistent over all three test corpora, despite being from different sources and labeled using different techniques. The lower score on the Reddit dataset may simply be due to the fact it was labeled automatically by a system that flags potentially dangerous content and not by human editors, so its labels may be noisier. For example, mentioning sexual body parts will be labeled as toxicity level 5 even if they are used in a positive sentence, because they carry more potential risk."
],
[
"Our second experiment consists in studying the benefits of taking sentiments into account when trying to determine whether a comment is toxic or not. The toxicity detector we implemented in this experiment is a deep neural network inspired by the most successful systems in the Kaggle toxicity competition we used as a dataset. It uses a bi-GRU layer with kernel size of 40. The final state is sent into a single linear classifier. To avoid overfitting, two 50% dropout layers are added, one before and one after the bi-GRU layer.",
"The network takes as input a sentence split into words and into individual characters. The words are represented by the 300d fastText pre-trained word embeddings, and characters are represented by a one-hot character encoding but restricted to the set of 60 most common characters in the messages to avoid the inclusion of noise. Finally, we used our \"top + frequency\" sentiment classifier with the average of the best three lexicons (SentiWordNet, Afinn, and Bing Liu) to determine the sentiment of each message. We input that information into the neural network as three sentiment values, corresponding to each of the three lexicons used, for each of the frequent words retained for the message. Words that are not among the selected frequent words or that are not found in a lexicon receive a sentiment input value of 0. Likewise, experiments that do not make use of sentiment information have inputs of 0 for all words. These input values are then concatenated together into a vector of 363 values, corresponding to the 300 dimensions of fastText, the 60 one-hot character vector, and the 3 sentiment lexicons.",
"The output of our network is a binary \"toxic or non-toxic\" judgment for the message. In the Kaggle dataset, this corresponds to whether the \"toxic\" label is active or not. In the Reddit dataset, it is the set of messages evaluated at levels 5, 6 or 7 by Community Sift in any of the topics mentioned earlier. And in the Wikipedia dataset, it is any message marked as toxic by 5 workers or more. We chose this binary approach to allow the network to learn to recognize toxicity, as opposed to types of toxic messages on Kaggle, keyword severity on Reddit, or a particular worker's opinions on Wikipedia. However, this simplification created a balance problem: while the Reddit dataset is composed of 12% toxic messages and 88% non-toxic messages, the Wikipedia dataset is composed of 18% toxic messages and the Kaggle dataset of 10% toxic messages. To create balanced datasets, we kept all toxic messages and undersampled randomly the set of non-toxic messages to equal the number of toxic messages.",
"Our experiment consists in comparing the toxicity detection accuracy of our network when excluding or including sentiment information and in the presence of subversion. Indeed, as mentioned in Sections SECREF1 and SECREF2 , it is trivial for a subversive user to mask toxic keywords to bypass toxicity filters. In order to simulate this behavior and taking ideas from BIBREF0 , we created a substitution list that replaces popular toxic keywords with harmless versions. For example, the word \"kill\" is replaced by \"kilt\", and \"bitch\" by \"beach\". Our list contains 191 words, and its use adds noise to INLINEFORM0 of the toxic Kaggle messages, INLINEFORM1 of the Wikipedia messages, and INLINEFORM2 of the Reddit messages. These substitutions are only done at testing time, and not taken into account in training, to simulate the fact that users can create never-before-seen modifications.",
"We trained and tested our neural network with and without sentiment information, with and without subversion, and with each corpus three times to mitigate the randomness in training. In every experiment, we used a random 70% of messages in the corpus as training data, another 20% as validation data, and the final 10% as testing data. The average results of the three tests are given in Table TABREF40 . It can be seen that sentiment information helps improve toxicity detection in all cases. The improvement is smaller when the text is clean. However, the introduction of subversion leads to an important drop in the accuracy of toxicity detection in the network that uses the text alone, and the inclusion of sentiment information gives an important improvement in that case. Comparing the different corpora, it can be seen that the improvement is smallest in the Reddit dataset experiment, which is expected since it is also the dataset in which toxicity and sentiment had the weakest correlation in Table TABREF37 .",
"We can note that the system performs very well in all cases, even with subversion and without sentiment information. This may be due to the fact that the messages in all datasets are user-generated and therefore noisy already. In addition, the character encoding of the neural network is robust to misspellings, as opposed to a keyword lookup system."
],
[
"In this paper, we explored the relationship between sentiment and toxicity in social network messages. We began by implementing a sentiment detection tool using different lexicons and different features such as word frequencies and negations. This tool allowed us to demonstrate that there exists a clear correlation between sentiment and toxicity. Next, we added sentiment information to a toxicity detection neural network, and demonstrated that it does improve detection accuracy. Finally, we simulated a subversive user who attempts to circumvent the toxicity filter by masking toxic keywords in their messages, and found that using sentiment information improved toxicity detection by as much as 3%. This confirms our fundamental intuition, that while it is possible for a user to mask toxic words with simple substitutions, it is a lot harder for a user to conceal the sentiment of a message.",
"Our work so far has focused on single-line messages and general toxicity detection. There are however several different types of toxicity, some of which correlate to different sentiments. For instance, while cyber-bullying and hate speech have negative sentiments, other forms of toxicity such as fraud or sexual grooming will use more positive sentiments in order to lure victims. We expect that differentiating between these types of toxicity will strengthen the correlation to message sentiment and further improve our results. Likewise, handling entire conversations instead of individual messages will allow us to include contextual information to better model the sentiment of the message, and to detect sudden changes in the sentiment of the conversation that may correspond to a disruptive toxic comment."
],
[
"This research was made possible by the financial, material, and technical support of Two Hat Security Research Corp, and the financial support of the Canadian research organization MITACS."
]
],
"section_name": [
"Introduction",
"Related Work",
"Lexicons",
"Message Preprocessing",
"Message Sentiment",
"Experimental Results",
"Toxicity Detection",
"Correlation",
"Subversive Toxicity Detection",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"f515e2df44349bff529e2c7890913bd141eab0df"
],
"answer": [
{
"evidence": [
"We trained and tested our neural network with and without sentiment information, with and without subversion, and with each corpus three times to mitigate the randomness in training. In every experiment, we used a random 70% of messages in the corpus as training data, another 20% as validation data, and the final 10% as testing data. The average results of the three tests are given in Table TABREF40 . It can be seen that sentiment information helps improve toxicity detection in all cases. The improvement is smaller when the text is clean. However, the introduction of subversion leads to an important drop in the accuracy of toxicity detection in the network that uses the text alone, and the inclusion of sentiment information gives an important improvement in that case. Comparing the different corpora, it can be seen that the improvement is smallest in the Reddit dataset experiment, which is expected since it is also the dataset in which toxicity and sentiment had the weakest correlation in Table TABREF37 .",
"FLOAT SELECTED: Table 7: Accuracy of toxicity detection with and without sentiment"
],
"extractive_spans": [],
"free_form_answer": "Kaggle\nSubversive Kaggle\nWikipedia\nSubversive Wikipedia\nReddit\nSubversive Reddit ",
"highlighted_evidence": [
"n every experiment, we used a random 70% of messages in the corpus as training data, another 20% as validation data, and the final 10% as testing data. The average results of the three tests are given in Table TABREF40 .",
"FLOAT SELECTED: Table 7: Accuracy of toxicity detection with and without sentiment"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
}
],
"nlp_background": [
""
],
"paper_read": [
""
],
"question": [
"what datasets did the authors use?"
],
"question_id": [
"f62c78be58983ef1d77049738785ec7ab9f2a3ee"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
""
],
"topic_background": [
""
]
} | {
"caption": [
"Table 1: Sentiment of words per lexicon",
"Table 2: Comparison between fixed window and syntactic dependencies negation detection algorithms",
"Table 3: Average sentiment scores of negative and positive (respectively) labeled sentences, and their overlap.",
"Table 4: Sentiment scores using combinations of lexicons.",
"Table 5: Toxicity levels in Community Sift.",
"Table 6: Correlation between sentiment and toxicity.",
"Table 7: Accuracy of toxicity detection with and without sentiment"
],
"file": [
"3-Table1-1.png",
"4-Table2-1.png",
"5-Table3-1.png",
"6-Table4-1.png",
"7-Table5-1.png",
"7-Table6-1.png",
"8-Table7-1.png"
]
} | [
"what datasets did the authors use?"
] | [
[
"1812.01704-Subversive Toxicity Detection-4",
"1812.01704-8-Table7-1.png"
]
] | [
"Kaggle\nSubversive Kaggle\nWikipedia\nSubversive Wikipedia\nReddit\nSubversive Reddit "
] | 870 |
1712.03556 | Stochastic Answer Networks for Machine Reading Comprehension | We propose a simple yet robust stochastic answer network (SAN) that simulates multi-step reasoning in machine reading comprehension. Compared to previous work such as ReasoNet which used reinforcement learning to determine the number of steps, the unique feature is the use of a kind of stochastic prediction dropout on the answer module (final layer) of the neural network during the training. We show that this simple trick improves robustness and achieves results competitive to the state-of-the-art on the Stanford Question Answering Dataset (SQuAD), the Adversarial SQuAD, and the Microsoft MAchine Reading COmprehension Dataset (MS MARCO). | {
"paragraphs": [
[
"Machine reading comprehension (MRC) is a challenging task: the goal is to have machines read a text passage and then answer any question about the passage. This task is an useful benchmark to demonstrate natural language understanding, and also has important applications in e.g. conversational agents and customer service support. It has been hypothesized that difficult MRC problems require some form of multi-step synthesis and reasoning. For instance, the following example from the MRC dataset SQuAD BIBREF0 illustrates the need for synthesis of information across sentences and multiple steps of reasoning:",
" $Q$ : What collection does the V&A Theator & Performance galleries hold?",
" $P$ : The V&A Theator & Performance galleries opened in March 2009. ... They hold the UK's biggest national collection of material about live performance.",
"To infer the answer (the underlined portion of the passage $P$ ), the model needs to first perform coreference resolution so that it knows “They” refers “V&A Theator”, then extract the subspan in the direct object corresponding to the answer.",
"This kind of iterative process can be viewed as a form of multi-step reasoning. Several recent MRC models have embraced this kind of multi-step strategy, where predictions are generated after making multiple passes through the same text and integrating intermediate information in the process. The first models employed a predetermined fixed number of steps BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 . Later, shen2016reasonet proposed using reinforcement learning to dynamically determine the number of steps based on the complexity of the question. Further, shen2017empirical empirically showed that dynamic multi-step reasoning outperforms fixed multi-step reasoning, which in turn outperforms single-step reasoning on two distinct MRC datasets (SQuAD and MS MARCO).",
"In this work, we derive an alternative multi-step reasoning neural network for MRC. During training, we fix the number of reasoning steps, but perform stochastic dropout on the answer module (final layer predictions). During decoding, we generate answers based on the average of predictions in all steps, rather than the final step. We call this a stochastic answer network (SAN) because the stochastic dropout is applied to the answer module; albeit simple, this technique significantly improves the robustness and overall accuracy of the model. Intuitively this works because while the model successively refines its prediction over multiple steps, each step is still trained to generate the same answer; we are performing a kind of stochastic ensemble over the model's successive prediction refinements. Stochastic prediction dropout is illustrated in Figure 1 ."
],
[
"The machine reading comprehension (MRC) task as defined here involves a question $Q=\\lbrace q_0, q_1, ..., q_{m-1}\\rbrace $ and a passage $P=\\lbrace p_0, p_1, ..., p_{n-1}\\rbrace $ and aims to find an answer span $A=\\lbrace a_{start}, a_{end}\\rbrace $ in $P$ . We assume that the answer exists in the passage $P$ as a contiguous text string. Here, $m$ and $n$ denote the number of tokens in $Q$ and $P$ , respectively. The learning algorithm for reading comprehension is to learn a function $f(Q, P) \\rightarrow A$ . The training data is a set of the query, passage and answer tuples $P=\\lbrace p_0, p_1, ..., p_{n-1}\\rbrace $0 .",
"We now describe our model from the ground up. The main contribution of this work is the answer module, but in order to understand what goes into this module, we will start by describing how $Q$ and $P$ are processed by the lower layers. Note the lower layers also have some novel variations that are not used in previous work. As shown in Figure 2 , our model contains four different layers to capture different concept of representations. The detailed description of our model is provided as follows.",
"Lexicon Encoding Layer. The purpose of the first layer is to extract information from $Q$ and $P$ at the word level and normalize for lexical variants. A typical technique to obtain lexicon embedding is concatenation of its word embedding with other linguistic embedding such as those derived from Part-Of-Speech (POS) tags. For word embeddings, we use the pre-trained 300-dimensional GloVe vectors BIBREF5 for the both $Q$ and $P$ . Following chen2017reading, we use three additional types of linguistic features for each token $p_i$ in the passage $P$ :",
"In summary, each token $p_i$ in the passage is represented as a 600-dimensional vector and each token $q_j$ is represented as a 300-dimensional vector.",
"Due to different dimensions for the passages and questions, in the next layer two different bidirectional LSTM (BiLSTM) BIBREF6 may be required to encode the contextual information. This, however, introduces a large number of parameters. To prevent this, we employ an idea inspired by BIBREF7 : use two separate two-layer position-wise Feed-Forward Networks (FFN), $FFN(x)=W_2 ReLU(W_1 x +b_1) + b_2$ , to map both the passage and question lexical encodings into the same number of dimensions. Note that this FFN has fewer parameters compared to a BiLSTM. Thus, we obtain the final lexicon embeddings for the tokens in $Q$ as a matrix $E^q \\in \\mathbb {R}^{d \\times m}$ and tokens in $P$ as $E^p\\in \\mathbb {R}^{d \\times n}$ .",
"Contextual Encoding Layer. Both passage and question use a shared two-layers BiLSTM as the contextual encoding layer, which projects the lexicon embeddings to contextual embeddings. We concatenate a pre-trained 600-dimensional CoVe vectors BIBREF8 trained on German-English machine translation dataset, with the aforementioned lexicon embeddings as the final input of the contextual encoding layer, and also with the output of the first contextual encoding layer as the input of its second encoding layer. To reduce the parameter size, we use a maxout layer BIBREF9 at each BiLSTM layer to shrink its dimension. By a concatenation of the outputs of two BiLSTM layers, we obtain $H^q\\in \\mathbb {R}^{2d \\times m}$ as representation of $Q$ and $H^p\\in \\mathbb {R}^{2d \\times n}$ as representation of $P$ , where $d$ is the hidden size of the BiLSTM.",
"Memory Generation Layer. In the memory generation layer, We construct the working memory, a summary of information from both $Q$ and $P$ . First, a dot-product attention is adopted like in BIBREF7 to measure the similarity between the tokens in $Q$ and $P$ . Instead of using a scalar to normalize the scores as in BIBREF7 , we use one layer network to transform the contextual information of both $Q$ and $P$ :",
"$$C=dropout(f_{attention}(\\hat{H}^q, \\hat{H}^p)) \\in \\mathbb {R}^{m \\times n}\\\\$$ (Eq. 8) ",
" $C$ is an attention matrix. Note that $\\hat{H^q}$ and $\\hat{H^p}$ is transformed from $H^q$ and $H^p$ by one layer neural network $ReLU(W_3x)$ , respectively. Next, we gather all the information on passages by a simple concatenation of its contextual information $H^p$ and its question-aware representation $H^q \\cdot C$ : ",
"$$U^p = concat(H^p, H^qC) \\in \\mathbb {R}^{4d \\times n}$$ (Eq. 9) ",
"Typically, a passage may contain hundred of tokens, making it hard to learn the long dependencies within it. Inspired by BIBREF10 , we apply a self-attended layer to rearrange the information $U^p$ as: ",
"$$\\hat{U}^p = U^p drop_{diag}(f_{attention}(U^p, U^p)).$$ (Eq. 10) ",
"In other words, we first obtain an $n \\times n$ attention matrix with $U^p$ onto itself, apply dropout, then multiply this matrix with $U^p$ to obtain an updated $\\hat{U}^p$ . Instead of using a penalization term as in BIBREF10 , we dropout the diagonal of the similarity matrix forcing each token in the passage to align to other tokens rather than itself.",
"At last, the working memory is generated by using another BiLSTM based on all the information gathered: ",
"$$M=BiLSTM([U^p; \\hat{U}^p])$$ (Eq. 11) ",
"where the semicolon mark $;$ indicates the vector/matrix concatenation operator.",
"Answer module. There is a Chinese proverb that says: “wisdom of masses exceeds that of any individual.\" Unlike other multi-step reasoning models, which only uses a single output either at the last step or some dynamically determined final step, our answer module employs all the outputs of multiple step reasoning. Intuitively, by applying dropout, it avoids a “step bias problem\" (where models places too much emphasis one particular step's predictions) and forces the model to produce good predictions at every individual step. Further, during decoding, we reuse wisdom of masses instead of individual to achieve a better result. We call this method “stochastic prediction dropout\" because dropout is being applied to the final predictive distributions.",
"Formally, our answer module will compute over $T$ memory steps and output the answer span. This module is a memory network and has some similarities to other multi-step reasoning networks: namely, it maintains a state vector, one state per step. At the beginning, the initial state $s_0$ is the summary of the $Q$ : $s_0=\\sum _j \\alpha _j H^q_{j}$ , where $\\alpha _j = \\frac{exp(w_4 \\cdot H^q_j)}{\\sum _{j^{\\prime }}exp(w_4 \\cdot H^q_{j^{\\prime }})}$ . At time step $t$ in the range of $\\lbrace 1, 2, ..., T-1\\rbrace $ , the state is defined by $s_t = GRU(s_{t-1}, x_t)$ . Here, $x_t$ is computed from the previous state $s_{t-1}$ and memory $s_0$0 : $s_0$1 and $s_0$2 . Finally, a bilinear function is used to find the begin and end point of answer spans at each reasoning step $s_0$3 .",
"$$P_t^{begin} = softmax(s_tW_6M)$$ (Eq. 12) ",
"$$P_t^{end} = softmax([s_t; \\sum _j P_{t,j}^{begin}M_j]W_7M).$$ (Eq. 13) ",
"From a pair of begin and end points, the answer string can be extracted from the passage. However, rather than output the results (start/end points) from the final step (which is fixed at $T-1$ as in Memory Networks or dynamically determined as in ReasoNet), we utilize all of the $T$ outputs by averaging the scores: ",
"$$P^{begin} = avg([P_0^{begin}, P_1^{begin}, ..., P_{T-1}^{begin}])$$ (Eq. 14) ",
"$$P^{end} = avg([P_0^{end}, P_1^{end}, ..., P_{T-1}^{end}])$$ (Eq. 15) ",
"Each $P_t^{begin}$ or $P_t^{end}$ is a multinomial distribution over $\\lbrace 1,\\ldots ,n\\rbrace $ , so the average distribution is straightforward to compute.",
"During training, we apply stochastic dropout to before the above averaging operation. For example, as illustrated in Figure 1 , we randomly delete several steps' predictions in Equations 14 and 15 so that $P^{begin}$ might be $avg([P_1^{begin}, P_3^{begin}])$ and $P^{end}$ might be $avg([P_0^{end}, P_3^{end}, P_{4}^{end}])$ . The use of averaged predictions and dropout during training improves robustness.",
"Our stochastic prediction dropout is similar in motivation to the dropout introduced by BIBREF11 . The difference is that theirs is dropout at the intermediate node-level, whereas ours is dropout at the final layer-level. Dropout at the node-level prevents correlation between features. Dropout at the final layer level, where randomness is introduced to the averaging of predictions, prevents our model from relying exclusively on a particular step to generate correct output. We used a dropout rate of 0.4 in experiments."
],
[
" Dataset: We evaluate on the Stanford Question Answering Dataset (SQuAD) BIBREF0 . This contains about 23K passages and 100K questions. The passages come from approximately 500 Wikipedia articles and the questions and answers are obtained by crowdsourcing. The crowdsourced workers are asked to read a passage (a paragraph), come up with questions, then mark the answer span. All results are on the official development set, unless otherwise noted.",
"Two evaluation metrics are used: Exact Match (EM), which measures the percentage of span predictions that matched any one of the ground truth answer exactly, and Macro-averaged F1 score, which measures the average overlap between the prediction and the ground truth answer. Implementation details: The spaCy tool is used to tokenize the both passages and questions, and generate lemma, part-of-speech and named entity tags. We use 2-layer BiLSTM with $d=128$ hidden units for both passage and question encoding. The mini-batch size is set to 32 and Adamax BIBREF12 is used as our optimizer. The learning rate is set to 0.002 at first and decreased by half after every 10 epochs. We set the dropout rate for all the hidden units of LSTM, and the answer module output layer to 0.4. To prevent degenerate output, we ensure that at least one step in the answer module is active during training."
],
[
"The main experimental question we would like to answer is whether the stochastic dropout and averaging in the answer module is an effective technique for multi-step reasoning. To do so, we fixed all lower layers and compared different architectures for the answer module:",
"The main results in terms of EM and F1 are shown in Table 1 . We observe that SAN achieves 76.235 EM and 84.056 F1, outperforming all other models. Standard 1-step model only achieves 75.139 EM and dynamic steps (via ReasoNet) achieves only 75.355 EM. SAN also outperforms a 5-step memory net with averaging, which implies averaging predictions is not the only thing that led to SAN's superior results; indeed, stochastic prediction dropout is an effective technique.",
"The K-best oracle results is shown in Figure 3 . The K-best spans are computed by ordering the spans according the their probabilities $P^{begin} \\times P^{end}$ . We limit K in the range 1 to 4 and then pick the span with the best EM or F1 as oracle. SAN also outperforms the other models in terms of K-best oracle scores. Impressively, these models achieve human performance at $K=2$ for EM and $K=3$ for F1.",
"Finally, we compare our results with other top models in Table 2 . Note that all the results in Table 2 are taken from the published papers. We see that SAN is very competitive in both single and ensemble settings (ranked in second) despite its simplicity. Note that the best-performing model BIBREF14 used a large-scale language model as an extra contextual embedding, which gave a significant improvement (+4.3% dev F1). We expect significant improvements if we add this to SAN in future work."
],
[
"We are interested in whether the proposed model is sensitive to different random initial conditions. Table 3 shows the development set scores of SAN trained from initialization with different random seeds. We observe that the SAN results are consistently strong regardless of the 10 different initializations. For example, the mean EM score is 76.131 and the lowest EM score is 75.922, both of which still outperform the 75.355 EM of the Dynamic step ReasoNet in Table 1 .",
"We are also interested in how sensitive are the results to the number of reasoning steps, which is a fixed hyper-parameter. Since we are using dropout, a natural question is whether we can extend the number of steps to an extremely large number. Table 4 shows the development set scores for $T=1$ to $T=10$ . We observe that there is a gradual improvement as we increase $T=1$ to $T=5$ , but after 5 steps the improvements have saturated. In fact, the EM/F1 scores drop slightly, but considering that the random initialization results in Table 3 show a standard deviation of 0.142 and a spread of 0.426 (for EM), we believe that the $T=10$ result does not statistically differ from the $T=5$ result. In summary, we think it is useful to perform some approximate hyper-parameter tuning for the number of steps, but it is not necessary to find the exact optimal value.",
"Finally, we test SAN on two Adversarial SQuAD datasets, AddSent and AddOneSent BIBREF22 , where the passages contain auto-generated adversarial distracting sentences to fool computer systems that are developed to answer questions about the passages. For example, AddSent is constructed by adding sentences that look similar to the question, but do not actually contradict the correct answer. AddOneSent is constructed by appending a random human-approved sentence to the passage.",
"We evaluate the single SAN model (i.e., the one presented in Table 2 ) on both AddSent and AddOneSent. The results in Table 5 show that SAN achieves the new state-of-the-art performance and SAN's superior result is mainly attributed to the multi-step answer module, which leads to significant improvement in F1 score over the Standard 1-step answer module, i.e., +1.2 on AddSent and +0.7 on AddOneSent."
],
[
"For practical deployment scenarios, prediction speed at test time is an important criterion. Therefore, one question is whether SAN can train with, e.g. $T=5$ steps but test with $T=1$ steps. Table 6 shows the results of a SAN trained on $T=5$ steps, but tested with different number of steps. As expected, the results are best when $T$ matches during training and test; however, it is important to note that small numbers of steps $T=1$ and $T=2$ nevertheless achieve strong results. For example, prediction at $T=1$ achieves 75.58, which outperforms a standard 1-step model (75.14 EM) as in Table 1 that has approximate equivalent prediction time."
],
[
"The average training time per epoch is comparable: our implementation running on a GTX Titan X is 22 minutes for 5-step memory net, 30 minutes for ReasoNet, and 24 minutes for SAN. The learning curve is shown in Figure 4 . We observe that all systems improve at approximately the same rate up to 10 or 15 epochs. However, SAN continues to improve afterwards as other models start to saturate. This observation is consistent with previous works using dropout BIBREF11 . We believe that while training time per epoch is similar between SAN and other models, it is recommended to train SAN for more epochs in order to achieve gains in EM/F1."
],
[
"To see whether SAN performs well on a particular type of question, we divided the development set by questions type based on their respective Wh-word, such as “who\" and “where\". The score breakdown by F1 is shown in Figure 5 . We observe that SAN seems to outperform other models uniformly across all types. The only exception is the Why questions, but there is too little data to derive strong conclusions."
],
[
"MS MARCO BIBREF27 is a large scale real-word RC dataset which contains 100,100 (100K) queries collected from anonymized user logs from the Bing search engine. The characteristic of MS MARCO is that all the questions are real user queries and passages are extracted from real web documents. For each query, approximate 10 passages are extracted from public web documents. The answers are generated by humans. The data is partitioned into a 82,430 training, a 10,047 development and 9,650 test tuples. The evaluation metrics are BLEU BIBREF28 and ROUGE-L BIBREF29 due to its free-form text answer style. To apply the same RC model, we search for a span in MS MARCO's passages that maximizes the ROUGE-L score with the raw free-form answer. It has an upper bound of 93.45 BLEU and 93.82 ROUGE-L on the development set.",
"The MS MARCO dataset contains multiple passages per query. Our model as shown in Figure 2 is developed to generate answer from a single passage. Thus, we need to extend it to handle multiple passages. Following BIBREF13 , we take two steps to generate an answer to a query $Q$ from $J$ passages, $P^1, ..., P^J$ . First, we run SAN on every ( $P^j, Q$ ) pair, generating $J$ candidate answer spans, one from each passage. Then, we multiply the SAN score of each candidate answer span with its relevance score $r(P^j, Q)$ assigned by a passage ranker, and output the span with the maximum score as the answer. In our experiments, we use the passage ranker described in BIBREF30 . The ranker is trained on the same MS MARCO training data, and achieves 37.1 p@1 on the development set.",
"The results in Table 7 show that SAN outperforms V-Net BIBREF31 and becomes the new state of the art."
],
[
"The recent big progress on MRC is largely due to the availability of the large-scale datasets BIBREF0 , BIBREF27 , BIBREF32 , BIBREF1 , since it is possible to train large end-to-end neural network models. In spite of the variety of model structures and attenion types BIBREF33 , BIBREF34 , BIBREF35 , BIBREF21 , BIBREF13 , BIBREF19 , a typical neural network MRC model first maps the symbolic representation of the documents and questions into a neural space, then search answers on top of it. We categorize these models into two groups based on the difference of the answer module: single-step and multi-step reasoning. The key difference between the two is what strategies are applied to search the final answers in the neural space.",
"A single-step model matches the question and document only once and produce the final answers. It is simple yet efficient and can be trained using the classical back-propagation algorithm, thus it is adopted by most systems BIBREF34 , BIBREF21 , BIBREF19 , BIBREF18 , BIBREF36 , BIBREF37 , BIBREF17 . However, since humans often solve question answering tasks by re-reading and re-digesting the document multiple times before reaching the final answers (this may be based on the complexity of the questions/documents), it is natural to devise an iterative way to find answers as multi-step reasoning.",
"Pioneered by BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , who used a predetermined fixed number of reasoning steps, Shen et al shen2016reasonet, shen2017empirical showed that multi-step reasoning outperforms single-step ones and dynamic multi-step reasoning further outperforms the fixed multi-step ones on two distinct MRC datasets (SQuAD and MS MARCO). But these models have to be trained using reinforcement learning methods, e.g., policy gradient, which are tricky to implement due to the instability issue. Our model is different in that we fix the number of reasoning steps, but perform stochastic dropout to prevent step bias. Further, our model can also be trained by using the back-propagation algorithm, which is simple and yet efficient."
],
[
"We introduce Stochastic Answer Networks (SAN), a simple yet robust model for machine reading comprehension. The use of stochastic dropout in training and averaging in test at the answer module leads to robust improvements on SQuAD, outperforming both fixed step memory networks and dynamic step ReasoNet. We further empirically analyze the properties of SAN in detail. The model achieves results competitive with the state-of-the-art on the SQuAD leaderboard, as well as on the Adversarial SQuAD and MS MARCO datasets. Due to the strong connection between the proposed model with memory networks and ReasoNet, we would like to delve into the theoretical link between these models and its training algorithms. Further, we also would like to explore SAN on other tasks, such as text classification and natural language inference for its generalization in the future."
],
[
"We thank Pengcheng He, Yu Wang and Xinying Song for help to set up dockers. We also thank Pranav Samir Rajpurkar for help on SQuAD evaluations, and the anonymous reviewers for valuable discussions and comments. "
]
],
"section_name": [
"Introduction",
"Proposed model: SAN",
"Experiment Setup",
"Results",
"How robust are the results?",
"Is it possible to use different numbers of steps in test vs. train?",
"How does the training time compare?",
"How does SAN perform by question type?",
"Experiments results on MS MARCO",
"Related Work",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"f864dde04f962f4441ab60ffb4226800eaadfa59"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Test performance on SQuAD. Results are sorted by Test F1.",
"Finally, we compare our results with other top models in Table 2 . Note that all the results in Table 2 are taken from the published papers. We see that SAN is very competitive in both single and ensemble settings (ranked in second) despite its simplicity. Note that the best-performing model BIBREF14 used a large-scale language model as an extra contextual embedding, which gave a significant improvement (+4.3% dev F1). We expect significant improvements if we add this to SAN in future work.",
"The main experimental question we would like to answer is whether the stochastic dropout and averaging in the answer module is an effective technique for multi-step reasoning. To do so, we fixed all lower layers and compared different architectures for the answer module:",
"FLOAT SELECTED: Table 1: Main results—Comparison of different answer module architectures. Note that SAN performs best in both Exact Match and F1 metrics."
],
"extractive_spans": [],
"free_form_answer": "Compared to baselines SAN (Table 1) shows improvement of 1.096% on EM and 0.689% F1. Compared to other published SQuAD results (Table 2) SAN is ranked second. ",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Test performance on SQuAD. Results are sorted by Test F1.",
"We see that SAN is very competitive in both single and ensemble settings (ranked in second) despite its simplicity.",
"The main experimental question we would like to answer is whether the stochastic dropout and averaging in the answer module is an effective technique for multi-step reasoning. To do so, we fixed all lower layers and compared different architectures for the answer module",
"FLOAT SELECTED: Table 1: Main results—Comparison of different answer module architectures. Note that SAN performs best in both Exact Match and F1 metrics."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"74eea9f3f4f790836045fcc75d0b3f5156901499"
]
}
],
"nlp_background": [
"five"
],
"paper_read": [
"yes"
],
"question": [
"How much performance improvements they achieve on SQuAD?"
],
"question_id": [
"39a450ac15688199575798e72a2cc016ef4316b5"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"Machine Reading"
],
"topic_background": [
"research"
]
} | {
"caption": [
"Figure 1: Illustration of “stochastic prediction dropout” in the answer module during training. At each reasoning step t, the model combines memory (bottom row) with hidden states st−1 to generate a prediction (multinomial distribution). Here, there are three steps and three predictions, but one prediction is dropped and the final result is an average of the remaining distributions.",
"Figure 2: Architecture of the SAN for Reading Comprehension: The first layer is a lexicon encoding layer that maps words to their embeddings independently for the question (left) and the passage (right): this is a concatenation of word embeddings, POS embeddings, etc. followed by a position-wise FFN. The next layer is a context encoding layer, where a BiLSTM is used on the top of the lexicon embedding layer to obtain the context representation for both question and passage. In order to reduce the parameters, a maxout layer is applied on the output of BiLSTM. The third layer is the working memory: First we compute an alignment matrix between the question and passage using an attention mechanism, and use this to derive a question-aware passage representation. Then we concatenate this with the context representation of passage and the word embedding, and employ a self attention layer to re-arrange the information gathered. Finally, we use another LSTM to generate a working memory for the passage. At last, the fourth layer is the answer module, which is a GRU that outputs predictions at each state st.",
"Table 1: Main results—Comparison of different answer module architectures. Note that SAN performs best in both Exact Match and F1 metrics.",
"Table 2: Test performance on SQuAD. Results are sorted by Test F1.",
"Table 3: Robustness of SAN (5-step) on different random seeds for initialization: best and worst scores are boldfaced. Note that our official submit is trained on seed 1.",
"Figure 3: K-Best Oracle results",
"Table 4: Effect of number of steps: best and worst results are boldfaced.",
"Figure 5: Score breakdown by question type.",
"Table 5: Test performance on the adversarial SQuAD dataset in F1 score.",
"Table 6: Prediction on different steps T . Note that the SAN model is trained using 5 steps.",
"Figure 4: Learning curve measured on Dev set.",
"Table 7: MS MARCO devset results."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"6-Table1-1.png",
"6-Table2-1.png",
"7-Table3-1.png",
"7-Figure3-1.png",
"7-Table4-1.png",
"8-Figure5-1.png",
"8-Table5-1.png",
"8-Table6-1.png",
"8-Figure4-1.png",
"9-Table7-1.png"
]
} | [
"How much performance improvements they achieve on SQuAD?"
] | [
[
"1712.03556-6-Table1-1.png",
"1712.03556-6-Table2-1.png",
"1712.03556-Results-3",
"1712.03556-Results-0"
]
] | [
"Compared to baselines SAN (Table 1) shows improvement of 1.096% on EM and 0.689% F1. Compared to other published SQuAD results (Table 2) SAN is ranked second. "
] | 875 |
1612.09113 | Deep Semi-Supervised Learning with Linguistically Motivated Sequence Labeling Task Hierarchies | In this paper we present a novel Neural Network algorithm for conducting semisupervised learning for sequence labeling tasks arranged in a linguistically motivated hierarchy. This relationship is exploited to regularise the representations of supervised tasks by backpropagating the error of the unsupervised task through the supervised tasks. We introduce a neural network where lower layers are supervised by downstream tasks and the final layer task is an auxiliary unsupervised task. The architecture shows improvements of up to two percentage points F β=1 for Chunking compared to a plausible baseline. | {
"paragraphs": [
[
"It is natural to think of NLP tasks existing in a hierarchy, with each task building upon the previous tasks. For example, Part of Speech (POS) is known to be an extremely strong feature for Noun Phrase Chunking, and downstream tasks such as greedy Language Modeling (LM) can make use of information about the syntactic and semantic structure recovered from junior tasks in making predictions.",
"Conversely, information about downstream tasks should also provide information that aids generalisation for junior downstream tasks, a form of semi-supervised learning. Arguably, there is a two-way relationship between each pair of tasks.",
"Following work such as sogaard2016deep, that exploits such hierarchies in a fully supervised setting, we represent this hierarchical relationship within the structure of a multi-task Recurrent Neural Network (RNN), where junior tasks in the hierarchy are supervised on inner layers and the parameters are jointly optimised during training. Joint optimisation within a hierarchical network acts as a form of regularisation in two ways: first, it forces the network to learn general representations within the parameters of the shared hidden layers BIBREF0 ; second, there is a penalty on the supervised junior layers for forming a representation and making predictions that are inconsistent with senior tasks. Intuitively, we can see how this can be beneficial - when humans receive new information from one task that is inconsistent with with our internal representation of a junior task we update both representations to maintain a coherent view of the world.",
"By incorporating an unsupervised auxiliary task (e.g. plank2016multilingual) as the most senior layer we can use this structure for semi-supervised learning - the error on the unsupervised tasks penalises junior tasks when their representations and predictions are not consistent. It is the aim of this paper to demonstrate that organising a network in such a way can improve performance. To that end, although we do not achieve state of the art results, we see a small but consistent performance improvement against a baseline. A diagram of our model can be seen in Figure 1 .",
"Our Contributions:"
],
[
"When we speak and understand language we are arguably performing many different linguistic tasks at once. At the top level we might be trying to formulate the best possible sequence of words given all of the contextual and prior information, but this requires us to do lower-level tasks like understanding the syntactic and semantic roles of the words we choose in a specific context.",
"This paper seeks to examine the POS tagging, Chunking and Language Modeling hierarchy and demonstrate that, by developing an algorithm that both exploits this structure and optimises all three jointly, we can improve performance."
],
[
"In the original introductory paper to Noun Phrase Chunking, abney1991parsing, Chunking is motivated by describing a three-phase process - first, you read the words and assign a Part of Speech tag, you then use a ‘Chunker’ to group these words together into chunks depending on the context and the Parts of Speech, and finally you build a parse tree on top of the chunks.",
"The parallels between this linguistic description of parsing and our architecture are clear; first, we build a prediction for POS, we then use this prediction to assist in parsing by Chunk, which we then use for greedy Language Modeling. In this hierarchy, we consider Language Modeling as auxiliary - designed to improve performance on POS and Chunking, and so therefore results are not presented for this task."
],
[
"In our model we represent linguistically motivated hierarchies in a multi-task Bi-Directional Recurrent Neural Network where junior tasks in the hierarchy are supervised at lower layers.This architecture builds upon sogaard2016deep, but is adapted in two ways: first, we add an unsupervised sequence labeling task (Language Modeling), second, we add a low-dimensional embedding layer between tasks in the hierarchy to learn dense representations of label tags. In addition to sogaard2016deep.",
"Work such as mirowski-vlachos:2015:ACL-IJCNLP in which incorporating syntactic dependencies improves performance, demonstrates the benefits of incorporating junior tasks in prediction.",
"Our neural network has one hidden layer, after which each successive task is supervised on the next layer. In addition, we add skip connections from the hidden layer to the senior supervised layers to allow layers to ignore information from junior tasks.",
"A diagram of our network can be seen in Figure 1 ."
],
[
"Our model has 3 sources of error signals - one for each task. Since each task is categorical we use the discrete cross entropy to calculate the loss for each task: $\nH(p, q) = - \\sum _{i}^{n_{labels}} p(label_i) \\ log \\ q(label_i)\n$ ",
"Where $n_{labels}$ is the number of labels in the task, $q(label_i)$ is the probability of label $i$ under the predicted distribution, and $p(label_i)$ is the probability of label $i$ in the true distribution (in this case, a one-hot vector).",
"During training with fully supervised data (POS, Chunk and Language Modeling), we optimise the mean cross entropy: $\nLoss(x,y) = \\frac{1}{n} \\sum _{i}^{n} H(y, f_{task_i}(x))\n$ ",
"Where $f_{task_i}(x)$ is the predicted distribution on task number $i$ from our model.",
"When labels are missing, we drop the associated cross entropy terms from the loss, and omit the cross entropy calculation from the forward pass."
],
[
"Our network is a Bi-Directional Recurrent Neural Network (Bi-RNN) (schuster1997bidirectional) with Gated Recurrent Units (GRUs) (cho2014properties, chung2014empirical).",
"In a Bi-Directional RNN we run left-to-right through the sentence, and then we run right-to-left. This gives us two hidden states at time step t - one from the left-to-right pass, and one from the right-to-left pass. These are then combined to provide a probability distribution for the tag token conditioned on all of the other words in the sentence."
],
[
"During training we alternate batches of data with POS and Chunk and Language Model labels with batches of just Language Modeling according to some probability $ 0 < \\gamma < 1$ .",
"We train our model using the ADAM (kingma2014adam) optimiser for 100 epochs, where one epoch corresponds to one pass through the labelled data. We train in batch sizes of $32\\times 32$ ."
],
[
"We present our experiments on two data sets - CoNLL 2000 Chunking data set (tjong2000introduction) which is derived from the Penn Tree Bank newspaper text (marcus1993building), and the Genia biomedical corpus (kim2003genia), derived from biomedical article abstracts.",
"These two data sets were chosen since they perform differently under the same classifiers BIBREF1 . The unlabelled data for semi-supervised learning for newspaper text is the Penn Tree Bank, and for biomedical text it a custom data set of Pubmed abstracts."
],
[
"We compare the results of our model to a baseline multi-task architecture inspired by yang2016multi. In our baseline model there are no explicit connections between tasks - the only shared parameters are in the hidden layer.",
"We also present results for our hierarchical model where there is no training on unlabelled data (but there is the LM) and confirm previous results that arranging tasks in a hierarchy improves performance. Results for both models can be seen for POS in Table 2 and for Chunk in Table 1 ."
],
[
"Experiments showing the effects of our semi-supervised learning regime on models initialised both with and without pre-trained word embeddings can be seen in Tables 3 and 4 .",
"In models without pre-trained word embeddings we see a significant improvement associated with the semi-supervised regime.",
"However, we observe that for models with pre-trained word embeddings, the positive impact of semi-supervised learning is less significant. This is likely due to the fact some of the regularities learned using the language model are already contained within the embedding. In fact, the training schedule of SENNA is similar to that of neural language modelling (collobert2011natural).",
"Two other points are worthy of mention in the experiments with 100 % of the training data. First, the impact of semi-supervised learning on biomedical data is significantly less than on newspaper data. This is likely due to the smaller overlap between vocabularies in the training set and vocabularies in the test set. Second, the benefits for POS are smaller than they are for Chunking - this is likely due to the POS weights being more heavily regularised by receiving gradients from both the Chunking and Language Modeling loss.",
"Finally, we run experiments with only a fraction of the training data to see whether our semi-supervised approach makes our models more robust (Tables 3 and 4 ). Here, we find variable but consistent improvement in the performance of our tasks even at 1 % of the original training data."
],
[
"Our model structure includes an embedding layer between each task. This layer allows us to learn low-dimensional vector representations of labels, and expose regularities in a way similar to e.g. mikolov2013distributed.",
"We demonstrate this in Figure 2 where we present a T-SNE visualisation of our label embeddings for Chunking and observe clusters along the diagonal."
],
[
"In this paper we have demonstrated two things: a way to use hierarchical neural networks to conduct semi-supervised learning and the associated performance improvements, and a way to learn low-dimensional embeddings of labels.",
"Future work would investigate how to address Catastrophic Forgetting BIBREF2 (the problem in Neural Networks of forgetting previous tasks when training on a new task), which leads to the requirement for the mix parameter $\\gamma $ in our algorithm, and prevents such models such as ours from scaling to larger supervised task hierarchies where the training data may be various and disjoint."
]
],
"section_name": [
"Introduction",
"Linguistically Motivated Task Hierarchies",
"Motivating our Choice of Tasks",
"Our Model",
"Supervision of Multiple Tasks",
"Bi-Directional RNNs",
"Implementation Details",
"Data Sets",
"Baseline Results",
"Semi-Supervised Experiments",
"Label Embeddings",
"Conclusions & Further Work"
]
} | {
"answers": [
{
"annotation_id": [
"813f65e527ab10955d4d3e362a314627ff36fbc0"
],
"answer": [
{
"evidence": [
"We compare the results of our model to a baseline multi-task architecture inspired by yang2016multi. In our baseline model there are no explicit connections between tasks - the only shared parameters are in the hidden layer."
],
"extractive_spans": [],
"free_form_answer": "The baseline is a multi-task architecture inspired by another paper.",
"highlighted_evidence": [
"We compare the results of our model to a baseline multi-task architecture inspired by yang2016multi. In our baseline model there are no explicit connections between tasks - the only shared parameters are in the hidden layer."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5d0eb97e8e840e171f73b7642c2c89dd3984157b"
]
},
{
"annotation_id": [
"cf031cd9d5a91a82b430d8812ac5fc1cd9a9edd8"
],
"answer": [
{
"evidence": [
"In our model we represent linguistically motivated hierarchies in a multi-task Bi-Directional Recurrent Neural Network where junior tasks in the hierarchy are supervised at lower layers.This architecture builds upon sogaard2016deep, but is adapted in two ways: first, we add an unsupervised sequence labeling task (Language Modeling), second, we add a low-dimensional embedding layer between tasks in the hierarchy to learn dense representations of label tags. In addition to sogaard2016deep."
],
"extractive_spans": [
"Language Modeling"
],
"free_form_answer": "",
"highlighted_evidence": [
"This architecture builds upon sogaard2016deep, but is adapted in two ways: first, we add an unsupervised sequence labeling task (Language Modeling), second, we add a low-dimensional embedding layer between tasks in the hierarchy to learn dense representations of label tags."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"4857c606a55a83454e8d81ffe17e05cf8bc4b75f"
]
},
{
"annotation_id": [
"820f01e350893152edd87c1ec5fe980edcfd02ef"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Figure 1: Our Hierarchical Network. In this network, junior tasks are supervised in lower layers, with an unsupervised task (Language Modeling) at the most senior layer."
],
"extractive_spans": [
"two"
],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Figure 1: Our Hierarchical Network. In this network, junior tasks are supervised in lower layers, with an unsupervised task (Language Modeling) at the most senior layer."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"4857c606a55a83454e8d81ffe17e05cf8bc4b75f"
]
},
{
"annotation_id": [
"28275838d529dac91976389175ef274e3b00b5f7"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Figure 1: Our Hierarchical Network. In this network, junior tasks are supervised in lower layers, with an unsupervised task (Language Modeling) at the most senior layer.",
"In our model we represent linguistically motivated hierarchies in a multi-task Bi-Directional Recurrent Neural Network where junior tasks in the hierarchy are supervised at lower layers.This architecture builds upon sogaard2016deep, but is adapted in two ways: first, we add an unsupervised sequence labeling task (Language Modeling), second, we add a low-dimensional embedding layer between tasks in the hierarchy to learn dense representations of label tags. In addition to sogaard2016deep.",
"Our neural network has one hidden layer, after which each successive task is supervised on the next layer. In addition, we add skip connections from the hidden layer to the senior supervised layers to allow layers to ignore information from junior tasks."
],
"extractive_spans": [],
"free_form_answer": "The network architecture has a multi-task Bi-Directional Recurrent Neural Network, with an unsupervised sequence labeling task and a low-dimensional embedding layer between tasks. There is a hidden layer after each successive task with skip connections to the senior supervised layers.",
"highlighted_evidence": [
"FLOAT SELECTED: Figure 1: Our Hierarchical Network. In this network, junior tasks are supervised in lower layers, with an unsupervised task (Language Modeling) at the most senior layer.",
"In our model we represent linguistically motivated hierarchies in a multi-task Bi-Directional Recurrent Neural Network where junior tasks in the hierarchy are supervised at lower layers.This architecture builds upon sogaard2016deep, but is adapted in two ways: first, we add an unsupervised sequence labeling task (Language Modeling), second, we add a low-dimensional embedding layer between tasks in the hierarchy to learn dense representations of label tags. In addition to sogaard2016deep.",
"Our neural network has one hidden layer, after which each successive task is supervised on the next layer. In addition, we add skip connections from the hidden layer to the senior supervised layers to allow layers to ignore information from junior tasks."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5d0eb97e8e840e171f73b7642c2c89dd3984157b"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"What is the baseline?",
"What is the unsupervised task in the final layer?",
"How many supervised tasks are used?",
"What is the network architecture?"
],
"question_id": [
"85e45b37408bb353c6068ba62c18e516d4f67fe9",
"f4e1d2276d3fc781b686d2bb44eead73e06fbf3f",
"bf2ebc9bbd4cbdf8922c051f406effc97fd16e54",
"c13fe4064df0cfebd0538f29cb13e917fc5c3be0"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Our Hierarchical Network. In this network, junior tasks are supervised in lower layers, with an unsupervised task (Language Modeling) at the most senior layer.",
"Table 2: Hierarchy POS Results",
"Table 1: Hierarchy Chunking Results",
"Figure 2: T-SNE for Chunk labels. The orange spots represent labels at the beginning of chunks (‘b’), whereas red spots represent labels at the end of chunks (‘i’). We can clearly see clusters along the diagonal.",
"Table 3: Chunking Unlabelled Data Results",
"Table 4: POS Unlabelled Data Results",
"Table 5: Chunk Semi-Supervised Results",
"Table 6: POS Semi-Supervised Results"
],
"file": [
"2-Figure1-1.png",
"3-Table2-1.png",
"3-Table1-1.png",
"4-Figure2-1.png",
"4-Table3-1.png",
"4-Table4-1.png",
"4-Table5-1.png",
"4-Table6-1.png"
]
} | [
"What is the baseline?",
"What is the network architecture?"
] | [
[
"1612.09113-Baseline Results-0"
],
[
"1612.09113-2-Figure1-1.png",
"1612.09113-Our Model-2",
"1612.09113-Our Model-0"
]
] | [
"The baseline is a multi-task architecture inspired by another paper.",
"The network architecture has a multi-task Bi-Directional Recurrent Neural Network, with an unsupervised sequence labeling task and a low-dimensional embedding layer between tasks. There is a hidden layer after each successive task with skip connections to the senior supervised layers."
] | 880 |
1702.03274 | Hybrid Code Networks: practical and efficient end-to-end dialog control with supervised and reinforcement learning | End-to-end learning of recurrent neural networks (RNNs) is an attractive solution for dialog systems; however, current techniques are data-intensive and require thousands of dialogs to learn simple behaviors. We introduce Hybrid Code Networks (HCNs), which combine an RNN with domain-specific knowledge encoded as software and system action templates. Compared to existing end-to-end approaches, HCNs considerably reduce the amount of training data required, while retaining the key benefit of inferring a latent representation of dialog state. In addition, HCNs can be optimized with supervised learning, reinforcement learning, or a mixture of both. HCNs attain state-of-the-art performance on the bAbI dialog dataset, and outperform two commercially deployed customer-facing dialog systems. | {
"paragraphs": [
[
"Task-oriented dialog systems help a user to accomplish some goal using natural language, such as making a restaurant reservation, getting technical support, or placing a phonecall. Historically, these dialog systems have been built as a pipeline, with modules for language understanding, state tracking, action selection, and language generation. However, dependencies between modules introduce considerable complexity – for example, it is often unclear how to define the dialog state and what history to maintain, yet action selection relies exclusively on the state for input. Moreover, training each module requires specialized labels.",
"Recently, end-to-end approaches have trained recurrent neural networks (RNNs) directly on text transcripts of dialogs. A key benefit is that the RNN infers a latent representation of state, obviating the need for state labels. However, end-to-end methods lack a general mechanism for injecting domain knowledge and constraints. For example, simple operations like sorting a list of database results or updating a dictionary of entities can expressed in a few lines of software, yet may take thousands of dialogs to learn. Moreover, in some practical settings, programmed constraints are essential – for example, a banking dialog system would require that a user is logged in before they can retrieve account information.",
"This paper presents a model for end-to-end learning, called Hybrid Code Networks (HCNs) which addresses these problems. In addition to learning an RNN, HCNs also allow a developer to express domain knowledge via software and action templates. Experiments show that, compared to existing recurrent end-to-end techniques, HCNs achieve the same performance with considerably less training data, while retaining the key benefit of end-to-end trainability. Moreover, the neural network can be trained with supervised learning or reinforcement learning, by changing the gradient update applied.",
"This paper is organized as follows. Section \"Model description\" describes the model, and Section \"Related work\" compares the model to related work. Section \"Supervised learning evaluation I\" applies HCNs to the bAbI dialog dataset BIBREF0 . Section \"Supervised learning evaluation II\" then applies the method to real customer support domains at our company. Section \"Reinforcement learning illustration\" illustrates how HCNs can be optimized with reinforcement learning, and Section \"Conclusion\" concludes."
],
[
"At a high level, the four components of a Hybrid Code Network are a recurrent neural network; domain-specific software; domain-specific action templates; and a conventional entity extraction module for identifying entity mentions in text. Both the RNN and the developer code maintain state. Each action template can be a textual communicative action or an API call. The HCN model is summarized in Figure 1 .",
"The cycle begins when the user provides an utterance, as text (step 1). The utterance is featurized in several ways. First, a bag of words vector is formed (step 2). Second, an utterance embedding is formed, using a pre-built utterance embedding model (step 3). Third, an entity extraction module identifies entity mentions (step 4) – for example, identifying “Jennifer Jones” as a <name> entity. The text and entity mentions are then passed to “Entity tracking” code provided by the developer (step 5), which grounds and maintains entities – for example, mapping the text “Jennifer Jones” to a specific row in a database. This code can optionally return an “action mask”, indicating actions which are permitted at the current timestep, as a bit vector. For example, if a target phone number has not yet been identified, the API action to place a phone call may be masked. It can also optionally return “context features” which are features the developer thinks will be useful for distinguishing among actions, such as which entities are currently present and which are absent.",
"The feature components from steps 1-5 are concatenated to form a feature vector (step 6). This vector is passed to an RNN, such as a long short-term memory (LSTM) BIBREF1 or gated recurrent unit (GRU) BIBREF2 . The RNN computes a hidden state (vector), which is retained for the next timestep (step 8), and passed to a dense layer with a softmax activation, with output dimension equal to the number of distinct system action templates (step 9). Thus the output of step 9 is a distribution over action templates. Next, the action mask is applied as an element-wise multiplication, and the result is normalized back to a probability distribution (step 10) – this forces non-permitted actions to take on probability zero. From the resulting distribution (step 11), an action is selected (step 12). When RL is active, exploration is required, so in this case an action is sampled from the distribution; when RL is not active, the best action should be chosen, and so the action with the highest probability is always selected.",
"The selected action is next passed to “Entity output” developer code that can substitute in entities (step 13) and produce a fully-formed action – for example, mapping the template “<city>, right?” to “Seattle, right?”. In step 14, control branches depending on the type of the action: if it is an API action, the corresponding API call in the developer code is invoked (step 15) – for example, to render rich content to the user. APIs can act as sensors and return features relevant to the dialog, so these can be added to the feature vector in the next timestep (step 16). If the action is text, it is rendered to the user (step 17), and cycle then repeats. The action taken is provided as a feature to the RNN in the next timestep (step 18)."
],
[
"Broadly there are two lines of work applying machine learning to dialog control. The first decomposes a dialog system into a pipeline, typically including language understanding, dialog state tracking, action selection policy, and language generation BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 . Specifically related to HCNs, past work has implemented the policy as feed-forward neural networks BIBREF12 , trained with supervised learning followed by reinforcement learning BIBREF13 . In these works, the policy has not been recurrent – i.e., the policy depends on the state tracker to summarize observable dialog history into state features, which requires design and specialized labeling. By contrast, HCNs use an RNN which automatically infers a representation of state. For learning efficiency, HCNs use an external light-weight process for tracking entity values, but the policy is not strictly dependent on it: as an illustration, in Section \"Supervised learning evaluation II\" below, we demonstrate an HCN-based dialog system which has no external state tracker. If there is context which is not apparent in the text in the dialog, such as database status, this can be encoded as a context feature to the RNN.",
"The second, more recent line of work applies recurrent neural networks (RNNs) to learn “end-to-end” models, which map from an observable dialog history directly to a sequence of output words BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 . These systems can be applied to task-oriented domains by adding special “API call” actions, enumerating database output as a sequence of tokens BIBREF0 , then learning an RNN using Memory Networks BIBREF27 , gated memory networks BIBREF28 , query reduction networks BIBREF29 , and copy-augmented networks BIBREF30 . In each of these architectures, the RNN learns to manipulate entity values, for example by saving them in a memory. Output is produced by generating a sequence of tokens (or ranking all possible surface forms), which can also draw from this memory. HCNs also use an RNN to accumulate dialog state and choose actions. However, HCNs differ in that they use developer-provided action templates, which can contain entity references, such as “<city>, right?”. This design reduce learning complexity, and also enable the software to limit which actions are available via an action mask, at the expense of developer effort. To further reduce learning complexity in a practical system, entities are tracked separately, outside the the RNN, which also allows them to be substituted into action templates. Also, past end-to-end recurrent models have been trained using supervised learning, whereas we show how HCNs can also be trained with reinforcement learning."
],
[
"In this section we compare HCNs to existing approaches on the public “bAbI dialog” dataset BIBREF0 . This dataset includes two end-to-end dialog learning tasks, in the restaurant domain, called task5 and task6. Task5 consists of synthetic, simulated dialog data, with highly regular user behavior and constrained vocabulary. Dialogs include a database access action which retrieves relevant restaurants from a database, with results included in the dialog transcript. We test on the “OOV” variant of Task5, which includes entity values not observed in the training set. Task6 draws on human-computer dialog data from the second dialog state tracking challenge (DSTC2), where usability subjects (crowd-workers) interacted with several variants of a spoken dialog system BIBREF31 . Since the database from DSTC2 was not provided, database calls have been inferred from the data and inserted into the dialog transcript. Example dialogs are provided in the Appendix Sections \"bAbI Task5 example dialog\" and \"bAbI Task6 example dialog\" .",
"To apply HCNs, we wrote simple domain-specific software, as follows. First, for entity extraction (step 4 in Figure 1 ), we used a simple string match, with a pre-defined list of entity names – i.e., the list of restaurants available in the database. Second, in the context update (step 5), we wrote simple logic for tracking entities: when an entity is recognized in the user input, it is retained by the software, over-writing any previously stored value. For example, if the price “cheap” is recognized in the first turn, it is retained as price=cheap. If “expensive” is then recognized in the third turn, it over-writes “cheap” so the code now holds price=expensive. Third, system actions were templatized: for example, system actions of the form “prezzo is a nice restaurant in the west of town in the moderate price range” all map to the template “<name> is a nice restaurant in the <location> of town in the <price> price range”. This results in 16 templates for Task5 and 58 for Task6. Fourth, when database results are received into the entity state, they are sorted by rating. Finally, an action mask was created which encoded common-sense dependencies. These are implemented as simple if-then rules based on the presence of entity values: for example, only allow an API call if pre-conditions are met; only offer a restaurant if database results have already been received; do not ask for an entity if it is already known; etc.",
"For Task6, we noticed that the system can say that no restaurants match the current query without consulting the database (for an example dialog, see Section \"bAbI Task6 example dialog\" in the Appendix). In a practical system this information would be retrieved from the database and not encoded in the RNN. So, we mined the training data and built a table of search queries known to yield no results. We also added context features that indicated the state of the database – for example, whether there were any restaurants matching the current query. The complete set of context features is given in Appendix Section \"Task5 and Task6 context features\" . Altogether this code consisted of about 250 lines of Python.",
"We then trained an HCN on the training set, employing the domain-specific software described above. We selected an LSTM for the recurrent layer BIBREF1 , with the AdaDelta optimizer BIBREF32 . We used the development set to tune the number of hidden units (128), and the number of epochs (12). Utterance embeddings were formed by averaging word embeddings, using a publicly available 300-dimensional word embedding model trained using word2vec on web data BIBREF33 . The word embeddings were static and not updated during LSTM training. In training, each dialog formed one minibatch, and updates were done on full rollouts (i.e., non-truncated back propagation through time). The training loss was categorical cross-entropy. Further low-level implementation details are in the Appendix Section \"Model implementation details\" .",
"We ran experiments with four variants of our model: with and without the utterance embeddings, and with and without the action mask (Figure 1 , steps 3 and 6 respectively).",
"Following past work, we report average turn accuracy – i.e., for each turn in each dialog, present the (true) history of user and system actions to the network and obtain the network's prediction as a string of characters. The turn is correct if the string matches the reference exactly, and incorrect if not. We also report dialog accuracy, which indicates if all turns in a dialog are correct.",
"We compare to four past end-to-end approaches BIBREF0 , BIBREF28 , BIBREF30 , BIBREF29 . We emphasize that past approaches have applied purely sequence-to-sequence models, or (as a baseline) purely programmed rules BIBREF0 . By contrast, Hybrid Code Networks are a hybrid of hand-coded rules and learned models.",
"Results are shown in Table 1 . Since Task5 is synthetic data generated using rules, it is possible to obtain perfect accuracy using rules (line 1). The addition of domain knowledge greatly simplifies the learning task and enables HCNs to also attain perfect accuracy. On Task6, rules alone fare poorly, whereas HCNs outperform past learned models.",
"We next examined learning curves, training with increasing numbers of dialogs. To guard against bias in the ordering of the training set, we averaged over 5 runs, randomly permuting the order of the training dialogs in each run. Results are in Figure 2 . In Task5, the action mask and utterance embeddings substantially reduce the number of training dialogs required (note the horizontal axis scale is logarithmic). For Task6, the benefits of the utterance embeddings are less clear. An error analysis showed that there are several systematic differences between the training and testing sets. Indeed, DSTC2 intentionally used different dialog policies for the training and test sets, whereas our goal is to mimic the policy in the training set.",
"Nonetheless, these tasks are the best public benchmark we are aware of, and HCNs exceed performance of existing sequence-to-sequence models. In addition, they match performance of past models using an order of magnitude less data (200 vs. 1618 dialogs), which is crucial in practical settings where collecting realistic dialogs for a new domain can be expensive."
],
[
"We now turn to comparing with purely hand-crafted approaches. To do this, we obtained logs from our company's text-based customer support dialog system, which uses a sophisticated rule-based dialog manager. Data from this system is attractive for evaluation because it is used by real customers – not usability subjects – and because its rule-based dialog manager was developed by customer support professionals at our company, and not the authors. This data is not publicly available, but we are unaware of suitable human-computer dialog data in the public domain which uses rules.",
"Customers start using the dialog system by entering a brief description of their problem, such as “I need to update my operating system”. They are then routed to one of several hundred domains, where each domain attempts to resolve a particular problem. In this study, we collected human-computer transcripts for the high-traffic domains “reset password” and “cannot access account”.",
"We labeled the dialog data as follows. First, we enumerated unique system actions observed in the data. Then, for each dialog, starting from the beginning, we examined each system action, and determined whether it was “correct”. Here, correct means that it was the most appropriate action among the set of existing system actions, given the history of that dialog. If multiple actions were arguably appropriate, we broke ties in favor of the existing rule-based dialog manager. Example dialogs are provided in the Appendix Sections \"Forgot password example dialog\" and \"Account access example dialog\" .",
"If a system action was labeled as correct, we left it as-is and continued to the next system action. If the system action was not correct, we replaced it with the correct system action, and discarded the rest of the dialog, since we do not know how the user would have replied to this new system action. The resulting dataset contained a mixture of complete and partial dialogs, containing only correct system actions. We partitioned this set into training and test dialogs. Basic statistics of the data are shown in Table 2 .",
"In this domain, no entities were relevant to the control flow, and there was no obvious mask logic since any question could follow any question. Therefore, we wrote no domain-specific software for this instance of the HCN, and relied purely on the recurrent neural network to drive the conversation. The architecture and training of the RNN was the same as in Section \"Supervised learning evaluation I\" , except that here we did not have enough data for a validation set, so we instead trained until we either achieved 100% accuracy on the training set or reached 200 epochs.",
"To evaluate, we observe that conventional measures like average dialog accuracy unfairly penalize the system used to collect the dialogs – in our case, the rule-based system. If the system used for collection makes an error at turn $t$ , the labeled dialog only includes the sub-dialog up to turn $t$ , and the system being evaluated off-line is only evaluated on that sub-dialog. In other words, in our case, reporting dialog accuracy would favor the HCN because it would be evaluated on fewer turns than the rule-based system. We therefore use a comparative measure that examines which method produces longer continuous sequences of correct system actions, starting from the beginning of the dialog. Specifically, we report $\\Delta P = \\frac{C(\\text{HCN-win}) - C(\\text{rule-win})}{C(\\text{all})}$ , where $C(\\text{HCN-win})$ is the number of test dialogs where the rule-based approach output a wrong action before the HCN; $C(\\text{rule-win})$ is the number of test dialogs where the HCN output a wrong action before the rule-based approach; and $C(\\text{all})$ is the number of dialogs in the test set. When $\\Delta P > 0$ , there are more dialogs in which HCNs produce longer continuous sequences of correct actions starting from the beginning of the dialog. We run all experiments 5 times, each time shuffling the order of the training set. Results are in Figure 3 . HCNs exceed performance of the existing rule-based system after about 30 dialogs.",
"In these domains, we have a further source of knowledge: the rule-based dialog managers themselves can be used to generate example “sunny-day” dialogs, where the user provides purely expected inputs. From each rule-based controller, synthetic dialogs were sampled to cover each expected user response at least once, and added to the set of labeled real dialogs. This resulted in 75 dialogs for the “Forgot password” domain, and 325 for the “Can't access account” domain. Training was repeated as described above. Results are also included in Figure 3 , with the suffix “sampled”. In the “Can't access account” domain, the sampled dialogs yield a large improvement, probably because the flow chart for this domain is large, so the sampled dialogs increase coverage. The gain in the “forgot password” domain is present but smaller.",
"In summary, HCNs can out-perform production-grade rule-based systems with a reasonable number of labeled dialogs, and adding synthetic “sunny-day” dialogs improves performance further. Moreover, unlike existing pipelined approaches to dialog management that rely on an explicit state tracker, this HCN used no explicit state tracker, highlighting an advantage of the model."
],
[
"In the previous sections, supervised learning (SL) was applied to train the LSTM to mimic dialogs provided by the system developer. Once a system operates at scale, interacting with a large number of users, it is desirable for the system to continue to learn autonomously using reinforcement learning (RL). With RL, each turn receives a measurement of goodness called a reward; the agent explores different sequences of actions in different situations, and makes adjustments so as to maximize the expected discounted sum of rewards, which is called the return, denoted $G$ .",
"For optimization, we selected a policy gradient approach BIBREF34 , which has been successfully applied to dialog systems BIBREF35 , robotics BIBREF36 , and the board game Go BIBREF37 . In policy gradient-based RL, a model $\\pi $ is parameterized by $\\mathbf {w}$ and outputs a distribution from which actions are sampled at each timestep. At the end of a trajectory – in our case, dialog – the return $G$ for that trajectory is computed, and the gradients of the probabilities of the actions taken with respect to the model weights are computed. The weights are then adjusted by taking a gradient step proportional to the return: ",
"$$\\mathbf {w} \\leftarrow \\mathbf {w} + \\alpha ( \\sum _t \\triangledown _{\\mathbf {w}} \\log \\pi (a_t|\\mathbf {h_t};\\mathbf {w}) ) ( G - b ) $$ (Eq. 14) ",
"where $\\alpha $ is a learning rate; $a_t$ is the action taken at timestep $t$ ; $\\mathbf {h_t}$ is the dialog history at time $t$ ; $G$ is the return of the dialog; $\\triangledown _{\\mathbf {x}} F$ denotes the Jacobian of $F$ with respect to $\\mathbf {x}$ ; $b$ is a baseline described below; and $a_t$0 is the LSTM – i.e., a stochastic policy which outputs a distribution over $a_t$1 given a dialog history $a_t$2 , parameterized by weights $a_t$3 . The baseline $a_t$4 is an estimate of the average return of the current policy, estimated on the last 100 dialogs using weighted importance sampling. Intuitively, “better” dialogs receive a positive gradient step, making the actions selected more likely; and “worse” dialogs receive a negative gradient step, making the actions selected less likely.",
"SL and RL correspond to different methods of updating weights, so both can be applied to the same network. However, there is no guarantee that the optimal RL policy will agree with the SL training set; therefore, after each RL gradient step, we check whether the updated policy reconstructs the training set. If not, we re-run SL gradient steps on the training set until the model reproduces the training set. Note that this approach allows new training dialogs to be added at any time during RL optimization.",
"We illustrate RL optimization on a simulated dialog task in the name dialing domain. In this system, a contact's name may have synonyms (“Michael” may also be called “Mike”), and a contact may have more than one phone number, such as “work” or “mobile”, which may in turn have synonyms like “cell” for “mobile”. This domain has a database of names and phone numbers taken from the Microsoft personnel directory, 5 entity types – firstname, nickname, lastname, phonenumber, and phonetype – and 14 actions, including 2 API call actions. Simple entity logic was coded, which retains the most recent copy of recognized entities. A simple action mask suppresses impossible actions, such as placing a phonecall before a phone number has been retrieved from the database. Example dialogs are provided in Appendix Section \"Name dialing example dialogs\" .",
"To perform optimization, we created a simulated user. At the start of a dialog, the simulated user randomly selected a name and phone type, including names and phone types not covered by the dialog system. When speaking, the simulated user can use the canonical name or a nickname; usually answers questions but can ignore the system; can provide additional information not requested; and can give up. The simulated user was parameterized by around 10 probabilities, set by hand.",
"We defined the reward as being 1 for successfully completing the task, and 0 otherwise. A discount of $0.95$ was used to incentivize the system to complete dialogs faster rather than slower, yielding return 0 for failed dialogs, and $G = 0.95^{T-1}$ for successful dialogs, where $T$ is the number of system turns in the dialog. Finally, we created a set of 21 labeled dialogs, which will be used for supervised learning.",
"For the RNN in the HCN, we again used an LSTM with AdaDelta, this time with 32 hidden units. RL policy updates are made after each dialog. Since a simulated user was employed, we did not have real user utterances, and instead relied on context features, omitting bag-of-words and utterance embedding features.",
"We first evaluate RL by randomly initializing an LSTM, and begin RL optimization. After 10 RL updates, we freeze the policy, and run 500 dialogs with the user simulation to measure task completion. We repeat all of this for 100 runs, and report average performance. In addition, we also report results by initializing the LSTM using supervised learning on the training set, consisting of 1, 2, 5, or 10 dialogs sampled randomly from the training set, then running RL as described above.",
"Results are in Figure 4 . Although RL alone can find a good policy, pre-training with just a handful of labeled dialogs improves learning speed dramatically. Additional experiments, not shown for space, found that ablating the action mask slowed training, agreeing with BIBREF6 .",
"Finally, we conduct a further experiment where we sample 10 training dialogs, then add one to the training set just before RL dialog 0, 100, 200, ... , 900. Results are shown in Figure 4 . This shows that SL dialogs can be introduced as RL is in progress – i.e., that it is possible to interleave RL and SL. This is an attractive property for practical systems: if a dialog error is spotted by a developer while RL is in progress, it is natural to add a training dialog to the training set."
],
[
"This paper has introduced Hybrid Code Networks for end-to-end learning of task-oriented dialog systems. HCNs support a separation of concerns where procedural knowledge and constraints can be expressed in software, and the control flow is learned. Compared to existing end-to-end approaches, HCNs afford more developer control and require less training data, at the expense of a small amount of developer effort.",
"Results in this paper have explored three different dialog domains. On a public benchmark in the restaurants domain, HCNs exceeded performance of purely learned models. Results in two troubleshooting domains exceeded performance of a commercially deployed rule-based system. Finally, in a name-dialing domain, results from dialog simulation show that HCNs can also be optimized with a mixture of reinforcement and supervised learning.",
"In future work, we plan to extend HCNs by incorporating lines of existing work, such as integrating the entity extraction step into the neural network BIBREF38 , adding richer utterance embeddings BIBREF39 , and supporting text generation BIBREF14 . We will also explore using HCNs with automatic speech recognition (ASR) input, for example by forming features from n-grams of the ASR n-best results BIBREF40 . Of course, we also plan to deploy the model in a live dialog system. More broadly, HCNs are a general model for stateful control, and we would be interested to explore applications beyond dialog systems – for example, in NLP medical settings or human-robot NL interaction tasks, providing domain constraints are important for safety; and in resource-poor settings, providing domain knowledge can amplify limited data."
],
[
"The RNN was specified using Keras version 0.3.3, with back-end computation in Theano version 0.8.0.dev0 BIBREF42 , BIBREF41 . The Keras model specification is given below. The input variable obs includes all features from Figure 1 step 6 except for the previous action (step 18) and the action mask (step 6, top-most vector).",
"# Given:",
"# obs_size, action_size, nb_hidden",
"g = Graph()",
"g.add_input(",
" name='obs',",
" input_shape=(None, obs_size)",
")",
"g.add_input(",
" name='prev_action',",
" input_shape=(None, action_size)",
")",
"g.add_input(",
" name='avail_actions',",
" input_shape=(None, action_size)",
")",
"g.add_node(",
" LSTM(",
" n_hidden,",
" return_sequences=True,",
" activation='tanh',",
" ),",
" name='h1',",
" inputs=[",
" 'obs',",
" 'prev_action',",
" 'avail_actions'",
" ]",
")",
"g.add_node(",
" TimeDistributedDense(",
" action_size,",
" activation='softmax',",
" ),",
" name='h2',",
" input='h1'",
")",
"g.add_node(",
" Activation(",
" activation=normalize,",
" ),",
" name='action',",
" inputs=['h2','avail_actions'],",
" merge_mode='mul',",
" create_output=True",
")",
"g.compile(",
" optimizer=Adadelta(clipnorm=1.),",
" sample_weight_modes={",
" 'action': 'temporal'",
" },",
" loss={",
" 'action':'categorical_crossentropy'",
" }",
")",
"Model sizes are given in Table 3 . Example dialogs are given below for each of the 5 dialog systems. For space and readability, the entity tags that appear in the user and system sides of the dialogs have been removed – for example, Call <name>Joan</name> is shown as Call Joan."
],
[
"good morning",
"hello what can i help you with today",
"i'd like to book a table with italian food",
"i'm on it",
"[silence]",
"where should it be",
"in paris",
"how many people would be in your party",
"for six people please",
"which price range are looking for",
"in a cheap price range please",
"ok let me look into some options for you",
"[silence]",
"api_call italian paris six cheap",
"[database results returned]",
"[silence]",
"what do you think of this option: resto_madrid_cheap_spanish_8stars",
"no this does not work for me",
"sure let me find an other option for you",
"[silence]",
"what do you think of this option: resto_madrid_cheap_spanish_6stars",
"do you have something else",
"sure let me find an other option for you",
"[silence]",
"what do you think of this option: resto_madrid_cheap_spanish_1stars",
"it's perfect",
"great let me do the reservation",
"may i have the phone number of the restaurant",
"here it is resto_madrid_cheap_spanish_1stars_phone",
"thanks",
"is there anything i can help you with",
"no thank you",
"you're welcome"
],
[
"In the system turn marked [*], the system provided venue information without first making an API call.",
"Hello , welcome to the Cambridge restaurant system . You can ask for restaurants by area , price range or food type . How may I help you ?",
"im looking for canapes food in any price range",
"I am sorry but there is no canapes restaurant that matches your request [*]",
"how about european",
"api_call european R_location R_price",
"[database results returned]",
"[silence]",
"michaelhouse_cafe is a great restaurant serving european food and it is in the expensive price range",
"food an",
"What part of town do you have in mind?",
"any any part of town",
"michaelhouse_cafe serves european food .",
"may i have the address and phone number please",
"The phone number of michaelhouse_cafe is michaelhouse_cafe_phone",
"thank you good bye",
"you are welcome"
],
[
"Context features used in Section \"Supervised learning evaluation I\" are in Table 4 ."
],
[
"Some system actions have been shortened for space.",
"How do I recover my password",
"Okay, you don't need to remember your password, we can reset it. Would you like to try that?",
"Yes",
"To reset your password, please see this URL ... Was that helpful?",
"I don't have a flashdrive",
"Let's connect you to a person who can help you"
],
[
"Some system actions have been shortened for space.",
"cannot sign in to my email",
"Please check caps lock is off, and that your email is spelled correctly. Are you able to access your account now?",
"no",
"If you still can't sign in, please go the the following page and check that the email server is running ok... Is everything up and running?",
"yes",
"You need to reset your password - here's how ... Did this solve your problem?",
"still cannot get in",
"Did you receive the security code?",
"yes",
"Check you entered the right numbers. Did this solve your problem?",
"no",
"Have you seen a message that your profile is temporarily blocked?",
"just your message of update on sept 15",
"Did you turn on two-step verification recently?",
"whats that?",
"Sorry, I can't help. Rephrase your question, or click below to talk to a person."
],
[
"How can I help you?",
"Call Joe",
"There's more than one person named Joe. Can you say their full name?",
"Call Joe Adamson",
"SavePhonetypeavail()",
"Calling Joe Adamson, work",
"PlaceCall()",
"How can I help you?",
"Call Sally on her home",
"Sorry, I don't have a home number for Sally Smith. I only have a work phone. Do you want to call that number?",
"no",
"Oh, sorry about that. Goodbye."
]
],
"section_name": [
"Introduction",
"Model description",
"Related work",
"Supervised learning evaluation I",
"Supervised learning evaluation II",
"Reinforcement learning illustration",
"Conclusion",
"Model implementation details",
"bAbI Task5 example dialog",
"bAbI Task6 example dialog",
"Task5 and Task6 context features",
"Forgot password example dialog",
"Account access example dialog",
"Name dialing example dialogs"
]
} | {
"answers": [
{
"annotation_id": [
"a88f96dc52f0d97a0a3e669957dcf710b9464aba"
],
"answer": [
{
"evidence": [
"Recently, end-to-end approaches have trained recurrent neural networks (RNNs) directly on text transcripts of dialogs. A key benefit is that the RNN infers a latent representation of state, obviating the need for state labels. However, end-to-end methods lack a general mechanism for injecting domain knowledge and constraints. For example, simple operations like sorting a list of database results or updating a dictionary of entities can expressed in a few lines of software, yet may take thousands of dialogs to learn. Moreover, in some practical settings, programmed constraints are essential – for example, a banking dialog system would require that a user is logged in before they can retrieve account information."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"A key benefit is that the RNN infers a latent representation of state, obviating the need for state labels."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"a1f322b6458ba69a438ae47d25f786be94b51055"
]
},
{
"annotation_id": [
"e6e51def9f96c0138632636f0d35475d37f5b50e"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"a1f322b6458ba69a438ae47d25f786be94b51055"
]
},
{
"annotation_id": [
"4465f6d4ccc6560916ee02ab013f3b8f36bfdfc4"
],
"answer": [
{
"evidence": [
"We defined the reward as being 1 for successfully completing the task, and 0 otherwise. A discount of $0.95$ was used to incentivize the system to complete dialogs faster rather than slower, yielding return 0 for failed dialogs, and $G = 0.95^{T-1}$ for successful dialogs, where $T$ is the number of system turns in the dialog. Finally, we created a set of 21 labeled dialogs, which will be used for supervised learning."
],
"extractive_spans": [],
"free_form_answer": "reward 1 for successfully completing the task, with a discount by the number of turns, and reward 0 when fail",
"highlighted_evidence": [
"We defined the reward as being 1 for successfully completing the task, and 0 otherwise. A discount of $0.95$ was used to incentivize the system to complete dialogs faster rather than slower, yielding return 0 for failed dialogs, and $G = 0.95^{T-1}$ for successful dialogs, where $T$ is the number of system turns in the dialog. ",
" 0.95^{T-1}",
"reward 0.95^{T-1} "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a1f322b6458ba69a438ae47d25f786be94b51055"
]
}
],
"nlp_background": [
"",
"",
""
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Does the latent dialogue state heklp their model?",
"Do the authors test on datasets other than bAbl?",
"What is the reward model for the reinforcement learning appraoch?"
],
"question_id": [
"ff814793387c8f3b61f09b88c73c00360a22a60e",
"059acc270062921ad27ee40a77fd50de6f02840a",
"6a9eb407be6a459dc976ffeae17bdd8f71c8791c"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"reinforcement",
"reinforcement",
"reinforcement"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Operational loop. Trapezoids refer to programmatic code provided by the software developer, and shaded boxes are trainable components. Vertical bars under “6” represent concatenated vectors which form the input to the RNN.",
"Table 1: Results on bAbI dialog Task5-OOV and Task6 (Bordes and Weston, 2016). Results for “Rules” taken from Bordes and Weston (2016). Note that, unlike cited past work, HCNs make use of domainspecific procedural knowledge.",
"Figure 2: Training dialog count vs. turn accuracy for bAbI dialog Task5-OOV and Task6. “embed” indicates whether utterance embeddings were included; “mask” indicates whether the action masking code was active.",
"Table 2: Basic statistics of labeled customer support dialogs. Test accuracy refers to whole-dialog accuracy of the existing rule-based system.",
"Figure 3: Training dialogs vs. ∆P , where ∆P is the fraction of test dialogs where HCNs produced longer initial correct sequences of system actions than the rules, minus the fraction where rules produced longer initial correct sequences than the HCNs. “embed” indicates whether utterance embeddings were included; “sampled” indicates whether dialogs sampled from the rule-based controller were included in the training set.",
"Figure 4: Dialog success rate vs. reinforcement learning training dialogs. Curve marked “0” begins with a randomly initialized LSTM. Curves marked “N initial” are pre-trained with N labeled dialogs. Curve marked “10, interleaved” adds one SL training dialog before RL dialog 0, 100, 200, ... 900.",
"Table 3: Dimensions of the 5 HCNs in this paper.",
"Table 4: Binary context features used to convey entity and database state in Section 4."
],
"file": [
"2-Figure1-1.png",
"5-Table1-1.png",
"5-Figure2-1.png",
"6-Table2-1.png",
"7-Figure3-1.png",
"7-Figure4-1.png",
"12-Table3-1.png",
"12-Table4-1.png"
]
} | [
"What is the reward model for the reinforcement learning appraoch?"
] | [
[
"1702.03274-Reinforcement learning illustration-7"
]
] | [
"reward 1 for successfully completing the task, with a discount by the number of turns, and reward 0 when fail"
] | 883 |
1610.03112 | Leveraging Recurrent Neural Networks for Multimodal Recognition of Social Norm Violation in Dialog | Social norms are shared rules that govern and facilitate social interaction. Violating such social norms via teasing and insults may serve to upend power imbalances or, on the contrary reinforce solidarity and rapport in conversation, rapport which is highly situated and context-dependent. In this work, we investigate the task of automatically identifying the phenomena of social norm violation in discourse. Towards this goal, we leverage the power of recurrent neural networks and multimodal information present in the interaction, and propose a predictive model to recognize social norm violation. Using long-term temporal and contextual information, our model achieves an F1 score of 0.705. Implications of our work regarding developing a social-aware agent are discussed. | {
"paragraphs": [
[
"Social norms are informal understandings that govern human behavior. They serve as the basis for our beliefs and expectations about others, and are instantiated in human-human conversation through verbal and nonverbal behaviors BIBREF0 , BIBREF1 . There is considerable body of work on modeling socially normative behavior in intelligent agent-based systems BIBREF2 , BIBREF3 , aiming to facilitate lifelike conversations with human users. Violating such social norms and impoliteness in the conversation, on the other hand, have also been demonstrated to positively affect certain aspects of the social interaction. For instance, BIBREF4 suggests impoliteness may challenge rapport in strangers but it is also an indicator of built relationship among friends. The literature on social psychology BIBREF5 shows that the task of managing interpersonal bond like rapport requires management of face which, in turn, relies on behavioral expectation, which are allied with social norms early in a relationship, and become more interpersonally determined as the relationship proceeds. BIBREF6 advanced the arguments by proposing that with the increasing knowledge of one another, more general norms may be purposely violated in order to accommodate each other's behavior expectation. Moreover, they proposed that such kind of social norm violation in fact reinforce the sense of in-group connectedness. Finally in BIBREF7 , the authors discovered the effect of temporally co-occurring smile and social norm violation that signal high interpersonal rapport. Thus, we believe that recognizing the phenomena of social norm violation in dialog can contribute important insights into understanding the interpersonal dynamics that unfold between the interlocutors.",
"Interesting prior work on quantifying social norm violation has taken a heavily data-driven focus BIBREF8 , BIBREF9 . For instance, BIBREF8 trained a series of bigram language models to quantify the violation of social norms in users' posts on an online community by leveraging cross-entropy value, or the deviation of word sequences predicted by the language model and their usage by the user. However, their models were trained on written-language instead of natural face-face dialog corpus. Another kind of social norm violation was examined by BIBREF10 , who developed a classifier to identify specific types of sarcasm in tweets. They utilized a bootstrapping algorithm to automatically extract lists of positive sentiment phrases and negative situation phrases from given sarcastic tweets, which were in turn leveraged to recognize sarcasm in an SVM classifier. However, no contextual information was considered in this work. BIBREF11 understood the nature of social norm violation in dialog by correlating it with associated observable verbal, vocal and visual cues. By leveraging their findings and statistical machine learning techniques, they built a computational model for automatic recognition. While they preserved short-term temporal contextual information in the model, this study avoided dealing with sparsity of the social norm violation phenomena by under-sampling the negative-class instances to make a balanced dataset.",
"Motivated by theoretical rationale and prior empirical findings concerning the relationship between violation social norm and interpersonal dynamics, in the current work, we take a step towards addressing the above limitations and our contributions are two-fold: (1)We quantitatively evaluate the contribution of long-term temporal contextual information on detecting violation of social norm. (2)We incorporate this understanding to our computational model for automatic recognizing social norm violation by leveraging the power of recurrent neural network on modeling the long-term temporal dependencies."
],
[
"Reciprocal peer tutoring data was collected from 12 American English-speaking dyads (6 friends and 6 strangers; 6 boys and 6 girls), with a mean age of 13 years, who interacted for 5 hourly sessions over as many weeks (a total of 60 sessions, and 5400 minutes of data), tutoring one another in algebra. Each session began with a period of getting to know one another, after which the first tutoring period started, followed by another small social interlude, a second tutoring period with role reversal between the tutor and tutee, and then the final social time.",
"We assessed our automatic recognition of social norm violation against this corpus annotated for those strategies. Inter-rater reliability (IRR) for the social norm violation that computed via Krippendorff's alpha was 0.75. IRR for visual behavior was 0.89 for eye gaze, 0.75 for smile count (how many smiles occur), 0.64 for smile duration and 0.99 for head nod. Table 1 shows statistics of our corpus. Below we discuss the definition of social norm violation.",
"Ground Truth: Social norm violations are behaviors or actions that go against general socially acceptable and stereotypical behaviors. In a first pass, we coded whether a clause was a social norm violation. In a second pass, if a social norm violation, we differentiated: (1) breaking the conversational rules of the experiment (e.g. off-task talk during tutoring session, insulting the experimenter or the experiment, etc); (2) face threatening acts (e.g. criticizing, teasing, or insulting, etc); (3) referring to one's own or the other person's social norm violations or general social norm violations (e.g. referring to the need to get back to focusing on work, or to the other person being verbally annoying etc). Social norms are culturally-specific, and so we judged a social norm violation by the impact it had on the listener (e.g. shock, specific reference to the behavior as a violation, etc.)."
],
[
"In this section, our objective was to build a computational model for detecting social norm violation. Towards this end, we first took each clause, the smallest units that can express a complete proposition, as the prediction unit. Next, inspired from the thorough analysis in BIBREF11 , we extracted verbal and visual features of the speaker that were highly correlated to social norm violation clauses, with rare threshold being set to 20. Verbal features included LIWC features BIBREF12 that helped in categorization of words used during usage of social norm violation, bigrams, part of speech bigrams and word-part of speech pairs from the speaker's clauses. Visual features included head node, smile and eye gaze information of the speaker. In total there were 3782 features per clause."
],
[
"We treated a dialog $D$ as a sequence of clauses $c_0, ... c_T$ , where $T$ was the number of clauses in the $D$ . Each clause $c_i$ was a tuple $([w^i_0, ...w^i_m], e_i)$ , where $[w^i_0, ...w^i_m]$ was the $m$ words in the clause $c_i$ , and $e_i$ was the corresponding meta information such as the relationship of the dyad and nonverbal behavior during the generation of the clause. The handcrafted feature of size 3782 was denoted as $c_0, ... c_T$0 , and could be viewed as a mapping function $c_0, ... c_T$1 . Meanwhile, each clause was associated with a binary label $c_0, ... c_T$2 that indicates the ground truth of whether $c_0, ... c_T$3 is a violation of social norm. Eventually, the goal was to model $c_0, ... c_T$4 , the conditional distribution over whether the latest clause was a violation of social norm, given the entire history of the dialog.",
"We first trained a L2 regularized logistic regression model using the proposed verbal and visual features $f_i$ as inputs (leftmost in Figure 1). This model serves as our baseline.",
"Past empirical results suggest two possible hypotheses of improving the model performance: 1. improvement in clause level representation 2. inclusion of contextual information for prediction. Therefore, we designed Local/Global-Context models to test these hypotheses.",
"The Local-Context recurrent neural network (RNN) models the context inside a clause at the word-level by encoding word embeddings of size 300 in a clause $c_i$ sequentially using a Long-short Term Memory (LSTM) cell of size 300. The mechanism of LSTM is defined as: $\n\\left[\n\\begin{matrix}\ni_t \\\\\nf_t \\\\\no_t \\\\\nj_t \\\\\n\\end{matrix}\n\\right] &=\n\\left[\n\\begin{matrix}\n\\sigma \\\\\n\\sigma \\\\\n\\sigma \\\\\ntanh \\\\\n\\end{matrix}\n\\right] W [h_{t-1}, x_t] \\\\\nc_t &= f_t \\odot c_{t-1} + i_t \\odot j_t\\\\\nh_t &= o_t \\odot tanh(c_t)\n$ ",
" We treated last hidden LSTM output $h^i_m$ as the clause embedding and concatenated that with the corresponding meta information vector $e_i$ . The combined vector was linearly transformed and then fed into a softmax function.",
"Next our Global-Context RNN investigated the influence of clause-level context in detecting social norm violation, by using the LSTM cells to model the long-term temporal dependencies. For a fair comparison, we used the same hand-crafted feature $f_i$ used in the logistic regression model as the representation of clause $c_i$ . As shown in Figure 1 , we first obtained a linear embedding of size 150 $emb_i=W_{e}f_i+b_i$ of $f_i$ . Then $emb_i$ was used as the inputs to LSTM of size 600. The hidden output $h_i$ at each time step was fed into a multilayer perceptron (MLP) with 1 hidden layer of size 100. We applied 50% dropout regularization BIBREF13 at the input/output of LSTM and MLP hidden layer for better generalization. Finally the model was optimized w.r.t to the cross entropy loss. A further challenge was the length of dialog. The average number of clauses in training dialog was 817.8, which made it computationally intractable to backpropagate through the entire sequence. Therefore, truncated backpropagation through time (TBPTT) BIBREF14 was used by unrolling the network for 20 steps. The final state of LSTM of each batch was fetched into the next batch as the initial state."
],
[
"We observed that Global-Context RNN with 2 LSTM layers outperformed other models as showed in Table 2. First, by comparing logistic regression model with our best model, the result indicates the strong predictive power of long-term temporal contextual information on the task of detecting social norm violation in dialog. On the other hand, Local-Context RNN model did not achieve significant improvement on overall performance regarding to logistic regression, which means that our learned clause representation through training process has less competence compared to hand-crafted features inspired from linguistic knowledge. One potential reason for such a result could be insufficient amount of training set in order to learn a generic clause representation."
],
[
"In this work, we began by indicating our interest in quantitatively learning the contribution of long-term temporal contextual information on detecting social norm violation in discourse. We then leveraged the power of recurrent neural network on modeling long-term temporal dependency. Inspired by hand-crafted multimodal features derived from qualitative and quantitative analysis in former empirical studies, we developed a Global-Context RNN model to detect social norm violation in human dialog. This model will play a prime role in building socially-aware agents that have capabilities of understanding interpersonal dynamics that unfold in the interaction, which is in turn, essential to better adapt to the interpersonal relationship felt by their users. Thus, to serve this goal, our future work will build a generative model of social norm violation, which will make an agent act towards more realistic human behavior understanding, reasoning and generation. We begin to model those aspects of human-human interaction that are not only helpful to human-agent collaboration, but also sustain aspects of what we cherish most in being human. "
]
],
"section_name": [
"Introduction and Related Work",
"Data and Annotation",
"Model and Experiment",
"Models",
"Experiment Result",
"Conclusion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"5241af358fc2a3ed7eb077bf23a087b9d9fd06f5"
],
"answer": [
{
"evidence": [
"Interesting prior work on quantifying social norm violation has taken a heavily data-driven focus BIBREF8 , BIBREF9 . For instance, BIBREF8 trained a series of bigram language models to quantify the violation of social norms in users' posts on an online community by leveraging cross-entropy value, or the deviation of word sequences predicted by the language model and their usage by the user. However, their models were trained on written-language instead of natural face-face dialog corpus. Another kind of social norm violation was examined by BIBREF10 , who developed a classifier to identify specific types of sarcasm in tweets. They utilized a bootstrapping algorithm to automatically extract lists of positive sentiment phrases and negative situation phrases from given sarcastic tweets, which were in turn leveraged to recognize sarcasm in an SVM classifier. However, no contextual information was considered in this work. BIBREF11 understood the nature of social norm violation in dialog by correlating it with associated observable verbal, vocal and visual cues. By leveraging their findings and statistical machine learning techniques, they built a computational model for automatic recognition. While they preserved short-term temporal contextual information in the model, this study avoided dealing with sparsity of the social norm violation phenomena by under-sampling the negative-class instances to make a balanced dataset."
],
"extractive_spans": [],
"free_form_answer": "No, there has been previous work on recognizing social norm violation.",
"highlighted_evidence": [
"Interesting prior work on quantifying social norm violation has taken a heavily data-driven focus BIBREF8 , BIBREF9 . "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5d0eb97e8e840e171f73b7642c2c89dd3984157b"
]
}
],
"nlp_background": [
"infinity"
],
"paper_read": [
"no"
],
"question": [
"Does this paper propose a new task that others can try to improve performance on?"
],
"question_id": [
"cacb83e15e160d700db93c3f67c79a11281d20c5"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"social"
],
"topic_background": [
"familiar"
]
} | {
"caption": [
"Table 1: Statistics of the corpus",
"Figure 1: Three proposed computational models.",
"Table 2: Performance comparsion for the 3 evaluated models"
],
"file": [
"2-Table1-1.png",
"3-Figure1-1.png",
"4-Table2-1.png"
]
} | [
"Does this paper propose a new task that others can try to improve performance on?"
] | [
[
"1610.03112-Introduction and Related Work-1"
]
] | [
"No, there has been previous work on recognizing social norm violation."
] | 884 |
1607.03542 | Open-Vocabulary Semantic Parsing with both Distributional Statistics and Formal Knowledge | Traditional semantic parsers map language onto compositional, executable queries in a fixed schema. This mapping allows them to effectively leverage the information contained in large, formal knowledge bases (KBs, e.g., Freebase) to answer questions, but it is also fundamentally limiting---these semantic parsers can only assign meaning to language that falls within the KB's manually-produced schema. Recently proposed methods for open vocabulary semantic parsing overcome this limitation by learning execution models for arbitrary language, essentially using a text corpus as a kind of knowledge base. However, all prior approaches to open vocabulary semantic parsing replace a formal KB with textual information, making no use of the KB in their models. We show how to combine the disparate representations used by these two approaches, presenting for the first time a semantic parser that (1) produces compositional, executable representations of language, (2) can successfully leverage the information contained in both a formal KB and a large corpus, and (3) is not limited to the schema of the underlying KB. We demonstrate significantly improved performance over state-of-the-art baselines on an open-domain natural language question answering task. | {
"paragraphs": [
[
"Semantic parsing is the task of mapping a phrase in natural language onto a formal query in some fixed schema, which can then be executed against a knowledge base (KB) BIBREF0 , BIBREF1 . For example, the phrase “Who is the president of the United States?” might be mapped onto the query $\\lambda (x).$ $\\textsc {/government/president\\_of}$ ( $x$ , $\\textsc {USA}$ ), which, when executed against Freebase BIBREF2 , returns $\\textsc {Barack Obama}$ . By mapping phrases to executable statements, semantic parsers can leverage large, curated sources of knowledge to answer questions BIBREF3 .",
"This benefit comes with an inherent limitation, however—semantic parsers can only produce executable statements within their manually-produced schema. There is no query against Freebase that can answer questions like “Who are the Democratic front-runners in the US election?”, as Freebase does not encode information about front-runners. Semantic parsers trained for Freebase fail on these kinds of questions.",
"To overcome this limitation, recent work has proposed methods for open vocabulary semantic parsing, which replace a formal KB with a probabilistic database learned from a text corpus. In these methods, language is mapped onto queries with predicates derived directly from the text itself BIBREF4 , BIBREF5 . For instance, the question above might be mapped to $\\lambda (x).$ $\\textit {president\\_of}$ ( $x$ , $\\textsc {USA}$ ). This query is not executable against any KB, however, and so open vocabulary semantic parsers must learn execution models for the predicates found in the text. They do this with a distributional approach similar to word embedding methods, giving them broad coverage, but lacking access to the large, curated KBs available to traditional semantic parsers.",
"Prior work in semantic parsing, then, has either had direct access to the information in a knowledge base, or broad coverage over all of natural language using the information in a large corpus, but not both.",
"In this work, we show how to combine these two approaches by incorporating KB information into open vocabulary semantic parsing models. Our key insight is that formal KB queries can be converted into features that can be added to the learned execution models of open vocabulary semantic parsers. This conversion allows open vocabulary models to use the KB fact $\\textsc {/government/president\\_of}$ ( $\\textsc {BarackObama}$ , $\\textsc {USA}$ ) when scoring $\\textit {president\\_of}$ ( $\\textsc {BarackObama}$ , $\\textsc {USA}$ ), without requiring the model to map the language onto a single formal statement. Crucially, this featurization also allows the model to use these KB facts even when they only provide partial information about the language being modeled. For example, knowing that an entity is a $\\textsc {politician}$ is very helpful information for deciding whether that entity is a front-runner. Our approach, outlined in Figure 1 , effectively learns the meaning of a word as a distributional vector plus a weighted combination of Freebase queries, a considerably more expressive representation than those used by prior work.",
"While this combination is the main contribution of our work, we also present some small improvements that allow open vocabulary semantic parsing models to make better use of KB information when it is available: improving the logical forms generated by the semantic parser, and employing a simple technique from related work for generating candidate entities from the KB.",
"We demonstrate our approach on the task of answering open-domain fill-in-the-blank natural language questions. By giving open vocabulary semantic parsers direct access to KB information, we improve mean average precision on this task by over 120%."
],
[
"In this section, we briefly describe the current state-of-the-art model for open vocabulary semantic parsing, introduced by Krishnamurthy and Mitchell krishnamurthy-2015-semparse-open-vocabulary. Instead of mapping text to Freebase queries, as done by a traditional semantic parser, their method parses text to a surface logical form with predicates derived directly from the words in the text (see Figure 1 ). Next, a distribution over denotations for each predicate is learned using a matrix factorization approach similar to that of Riedel et al. riedel-2013-mf-universal-schema. This distribution is concisely represented using a probabilistic database, which also enables efficient probabilistic execution of logical form queries.",
"The matrix factorization has two sets of parameters: each category or relation has a learned $k$ -dimensional embedding $\\theta $ , and each entity or entity pair has a learned $k$ -dimensional embedding $\\phi $ . The probability assigned to a category instance $c(e)$ or relation instance $r(e_1, e_2)$ is given by: $ p(c(e)) &= \\sigma ( \\theta _c^T \\phi _e ) \\\\ p(r(e_1, e_2)) &= \\sigma (\n\\theta _r^T \\phi _{(e_1, e_2)} ) $ ",
"The probability of a predicate instance is the sigmoided inner product of the corresponding predicate and entity embeddings. Predicates with nearby embeddings will have similar distributions over the entities in their denotation. The parameters $\\theta $ and $\\phi $ are learned using a query ranking objective that optimizes them to rank entities observed in the denotation of a logical form above unobserved entities. Given the trained predicate and entity parameters, the system is capable of efficiently computing the marginal probability that an entity is an element of a logical form's denotation using approximate inference algorithms for probabilistic databases.",
"The model presented in this section is purely distributional, with predicate and entity models that draw only on co-occurrence information found in a corpus. In the following sections, we show how to augment this model with information contained in large, curated KBs such as Freebase."
],
[
"Our key insight is that the executable queries used by traditional semantic parsers can be converted into features that provide KB information to the execution models of open vocabulary semantic parsers. Here we show how this is done.",
"Traditional semantic parsers map words onto distributions over executable queries, select one to execute, and return sets of entities or entity pairs from a KB as a result. Instead of executing a single query, we can simply execute all possible queries and use an entity's (or entity pair's) membership in each set as a feature in our predicate models.",
"There are two problems with this approach: (1) the set of all possible queries is intractably large, so we need a mechanism similar to a semantic parser's lexicon to select a small set of queries for each word; and (2) executing hundreds or thousands of queries at runtime for each predicate and entity is not computationally tractable. To solve these problems, we use a graph-based technique called subgraph feature extraction (SFE) BIBREF6 ."
],
[
"SFE is a technique for generating feature matrices over node pairs in graphs with labeled edges. When the graph corresponds to a formal KB such as Freebase, the features generated by SFE are isomorphic to statements in the KB schema BIBREF7 . This means that we can use SFE to generate a feature vector for each entity (or entity pair) which succinctly captures the set of all statements in whose denotations the entity (or entity pair) appears. Using this feature vector as part of the semantic parser's entity models solves problem (2) above, and performing feature selection for each predicate solves problem (1).",
"Some example features extracted by SFE are shown in Figure 2 . For entity pairs, these features include the sequence of edges (or paths) connecting the nodes corresponding to the entity pair. For entities, these features include the set of paths connected to the node, optionally including the node at the end of the path. Note the correspondence between these features and Freebase queries: the path $\\langle $ $\\textsc {designed}$ $\\rightarrow $ $\\textsc {located\\_in}$ $\\rangle $ can be executed as a query against Freebase, returning a set of (architect, location) entity pairs, where the architect designed a structure in the location. ( $\\textsc {Palladio}$ , $\\textsc {Italy}$ ) is one such entity pair, so this pair has a feature value of 1 for this query."
],
[
"The feature vectors produced by SFE contain tens of millions of possible formal statements. Out of these tens of millions of formal statements, only a handful represent relevant Freebase queries for any particular predicate. We therefore select a small number of statements to consider for each learned predicate in the open vocabulary semantic parser.",
"We select features by first summing the entity and entity pair feature vectors seen with each predicate in the training data. For example, the phrase “Italian architect Andrea Palladio” is considered a positive training example for the predicate instances $\\textit {architect}(\\textsc {Palladio})$ and $\\textit {architect\\_N/N}(\\textsc {Italy}, \\textsc {Palladio})$ . We add the feature vectors for $\\textsc {Palladio}$ and ( $\\textsc {Italy}$ , $\\textsc {Palladio}$ ) to the feature counts for the predicates $\\textit {architect}$ and $\\textit {architect\\_N/N}$ , respectively. This gives a set of counts $\\textsc {count}$ ( $\\pi $ ), $\\textsc {count}$ ( $\\textit {architect\\_N/N}(\\textsc {Italy}, \\textsc {Palladio})$0 ), and $\\textit {architect\\_N/N}(\\textsc {Italy}, \\textsc {Palladio})$1 ( $\\textit {architect\\_N/N}(\\textsc {Italy}, \\textsc {Palladio})$2 ), for each predicate $\\textit {architect\\_N/N}(\\textsc {Italy}, \\textsc {Palladio})$3 and feature $\\textit {architect\\_N/N}(\\textsc {Italy}, \\textsc {Palladio})$4 . The features are then ranked by PMI for each predicate by computing $\\textit {architect\\_N/N}(\\textsc {Italy}, \\textsc {Palladio})$5 . After removing low-frequency features, we pick the $\\textit {architect\\_N/N}(\\textsc {Italy}, \\textsc {Palladio})$6 features with the highest PMI values for each predicate to use in our model."
],
[
"Here we present our approach to incorporating KB information into open vocabulary semantic parsers. Having described how we use SFE to generate features corresponding to statements in a formal schema, adding these features to the models described in Section \"Subgraph feature extraction\" is straightforward.",
"We saw in Section \"Subgraph feature extraction\" that open vocabulary semantic parsers learn distributional vectors for each category, relation, entity and entity pair. We augment these vectors with the feature vectors described in Section \"Converting Freebase queries to features\" . Each category and relation receives a weight $\\omega $ for each selected Freebase query, and each entity and entity pair has an associated feature vector $\\psi $ . The truth probability of a category instance $c(e)$ or relation instance $r(e_1, e_2)$ is thus given by: $\np(c(e)) &= \\sigma ( \\theta _c^T \\phi _e + \\omega _c^T \\psi _c(e)) \\\\\np(r(e_1, e_2)) &= \\sigma ( \\theta _r^T \\phi _{(e_1, e_2)} + \\omega _r^T \\psi _r(e_1, e_2) )\n$ ",
"In these equations, $\\theta $ and $\\phi $ are learned predicate and entity embeddings, as described in Section \"Subgraph feature extraction\" . The second term in the sum represents our new features and their learned weights. $\\psi _c(e)$ and $\\psi _r(e_1, e_2)$ are SFE feature vectors for each entity and entity pair; a different set of features is chosen for each predicate $c$ and $r$ , as described in Section \"Making full use of KB information\" . $\\omega _c$ and $\\omega _r$ are learned weights for these features.",
"In our model, there are now three sets of parameters to be learned: (1) $\\theta $ , low-dimensional distributional vectors trained for each predicate; (2) $\\phi $ , low-dimensional distributional vectors trained for each entity and entity pair; and (3) $\\omega $ , weights associated with the selected formal SFE features for each predicate. All of these parameters are optimized jointly, using the same method described in Section \"Subgraph feature extraction\" .",
"Note here that each SFE feature corresponds to a query over the formal schema, defining a set of entities (or entity pairs). The associated feature weight measures the likelihood that an entity in this set is also in the denotation of the surface predicate. Our models include many such features for each surface predicate, effectively mapping each surface predicate onto a weighted combination of Freebase queries."
],
[
"In addition to improving predicate models, as just described, adding KB information to open vocabulary semantic parsers suggests two other simple improvements: (1) using more specific logical forms, and (2) generating candidate entities from the KB."
],
[
"Krishnamurthy and Mitchell krishnamurthy-2015-semparse-open-vocabulary generate logical forms from natural language statements by computing a syntactic CCG parse, then applying a collection of rules to produce logical forms. However, their logical form analyses do not model noun-mediated relations well. For example, given the phrase “Italian architect Andrea Palladio,” their system's logical form would include the relation $\\textit {N/N}(\\textsc {Italy},\n\\textsc {Palladio})$ . Here, the $\\textit {N/N}$ predicate represents a generic noun modifier relation; however, this relation is too vague for the predicate model to accurately learn its denotation. A similar problem occurs with prepositions and possessives, e.g., it is similarly hard to learn the denotation of the predicate $\\textit {of}$ .",
"Our system improves the analysis of noun-mediated relations by simply including the noun in the predicate name. In the architect example above, our system produces the relation $\\textit {architect\\_N/N}$ . It does this by concatenating all intervening noun modifiers between two entity mentions and including them in the predicate name; for example, “Illinois attorney general Lisa Madigan” produces the predicate $\\textit {attorney\\_general\\_N/N}$ . We similarly improve the analyses of prepositions and possessives to include the head noun. For example, “Barack Obama, president of the U.S.” produces the predicate instance $\\textit {president\\_of}(\\textsc {Barack Obama}, \\textsc {U.S.})$ , and “Rome, Italy's capital” produces the predicate $\\textit {^{\\prime }s\\_capital}$ . This process generates more specific predicates that more closely align with the KB facts that we make available to the predicate models."
],
[
"A key benefit of our predicate models is that they are able to assign scores to entity pairs that were never seen in the training data. Distributional models have no learned vectors for these entity pairs and therefore assume $p(r(e_1,e_2)) = 0$ for unseen entity pairs $(e_1,e_2)$ . This limits the recall of these models when applied to question answering, as entity pairs will not have been observed for many correct, but rare entity answers. In contrast, because our models have access to a large KB, the formal component of the model can always give a score to any entity pair in the KB. This allows our model to considerably improve question answering performance on rare entities.",
"It would be computationally intractable to consider all Freebase entities as answers to queries, and so we use a simple candidate entity generation technique to consider only a small set of likely entities for a given query. We first find all entities in the query, and consider as candidates any entity that has either been seen at training time with a query entity or is directly connected to a query entity in Freebase. This candidate entity generation is common practice for recent question answering models over Freebase BIBREF8 , though, for the reasons stated above, it has not been used previously in open vocabulary semantic parsing models."
],
[
"We evaluate our open-vocabulary semantic parser on a fill-in-the-blank natural language query task. Each test example is a natural language phrase containing at least two Freebase entities, one of which is held out. The system must propose a ranked list of Freebase entities to fill in the blank left by the held out entity, and the predicted entities are then judged manually for correctness. We compare our proposed models, which combine distributional and formal elements, with a purely distributional baseline from prior work. All of the data and code used in these experiments is available at http://github.com/allenai/open_vocab_semparse."
],
[
"Much recent work on semantic parsing has been evaluated using the WebQuestions dataset BIBREF3 . This dataset is not suitable for evaluating our model because it was filtered to only questions that are mappable to Freebase queries. In contrast, our focus is on language that is not directly mappable to Freebase. We thus use the dataset introduced by Krishnamurthy and Mitchell krishnamurthy-2015-semparse-open-vocabulary, which consists of the ClueWeb09 web corpus along with Google's FACC entity linking of that corpus to Freebase BIBREF9 . For training data, 3 million webpages from this corpus were processed with a CCG parser to produce logical forms BIBREF10 . This produced 2.1m predicate instances involving 142k entity pairs and 184k entities. After removing infrequently-seen predicates (seen fewer than 6 times), there were 25k categories and 4.2k relations.",
"We also used the test set created by Krishnamurthy and Mitchell, which contains 220 queries generated in the same fashion as the training data from a separate section of ClueWeb. However, as they did not release a development set with their data, we used this set as a development set. For a final evaluation, we generated another, similar test set from a different held out section of ClueWeb, in the same fashion as done by Krishnamurthy and Mitchell. This final test set contains 307 queries."
],
[
"We compare three models in our experiments: (1) the distributional model of Krishnamurthy and Mitchell, described in Section \"Subgraph feature extraction\" , which is the current state-of-the-art method for open vocabulary semantic parsing; (2) a formal model (new to this work), where the distributional parameters $\\theta $ and $\\phi $ in Section \"Combined predicate models\" are fixed at zero; and (3) the combined model described in Section \"Combined predicate models\" (also new to this work). In each of these models, we used vectors of size 300 for all embeddings. Except where noted, all experiments use our modified logical forms (Section \"Evaluation\" ) and our entity proposal mechanism (Section \"Related work\" ). We do not compare against any traditional semantic parsers, as more than half of the questions in our dataset are not answerable by Freebase queries, and so are out of scope for those parsers BIBREF5 ."
],
[
"Given a fill-in-the-blank query such as “Italian architect ”, each system produces a ranked list of 100 candidate entities. To compare the output of the systems, we follow a pooled evaluation protocol commonly used in relation extraction and information retrieval BIBREF11 , BIBREF12 . We take the top 30 predictions from each system and manually annotate whether they are correct, and use those annotations to compute the average precision (AP) and reciprocal rank (RR) of each system on the query. Average precision is defined as $\\frac{1}{m}\\sum ^m_{k=1} \\mathrm {Prec}(k) \\times \\mathrm {Correct}(k)$ , where $\\mathrm {Prec}(k)$ is the precision at rank $k$ , $\\mathrm {Correct}(k)$ is an indicator function for whether the $k$ th answer is correct, and $m$ is number of returned answers (up to 100 in this evaluation). AP is equivalent to calculating the area under a precision-recall curve. Reciprocal rank is computed by first finding the rank $r$ of the first correct prediction made by a system. Reciprocal rank is then $\\frac{1}{r}$ , ranging from 1 (if the first prediction is correct) to 0 (if there is no correct answer returned). In the tables below we report mean average precision (MAP) and mean reciprocal rank (MRR), averaged over all of the queries in the test set. We also report a weighted version of MAP, where the AP of each query is scaled by the number of annotated correct answers to the query (shown as W-MAP in the tables for space considerations)."
],
[
"We first show the effect of the new logical forms introduced in Section \"Evaluation\" . As can be seen in Table 1 , with our improved logical forms, all models are better able to capture the semantics of language. This improvement is most pronounced in the formal models, which have more capacity to get specific features from Freebase with the new logical forms. As our logical forms give all models better performance, the remaining experiments we present all use these logical forms.",
"We next show the improvement gained by using the simple candidate entity generation outlined in Section \"Related work\" . By simply appending the list of connected entities in Freebase to the end of the rankings returned by the distributional model, MAP improves by 40% (see Table 2 ). The connectedness of an entity pair in Freebase is very informative, especially for rare entities that are not seen together during training.",
"Table 3 shows a comparison between the semantic parsing models on the development set. As can be seen, the combined model significantly improves performance over prior work, giving a relative gain in weighted MAP of 29%.",
"Table 4 shows that these improvements are consistent on the final test set, as well. The performance improvement seen by the combined model is actually larger on this set, with gains on our metrics ranging from 50% to 87%.",
"On both of these datasets, the difference in MAP between the combined model and the distributional model is statistically significant (by a paired permutation test, $p < 0.05$ ). The differences between the combined model and the formal model, and between the formal model and the distributional model, are not statistically significant, as each method has certain kinds of queries that it performs well on. Only the combined model is able to consistently outperform the distributional model on all kinds of queries."
],
[
"Our model tends to outperform the distributional model on queries containing predicates with exact or partial correlates in Freebase. For example, our model obtains nearly perfect average precision on the queries “French newspaper ” and “Israeli prime minister ,” both of which can be exactly expressed in Freebase. The top features for $\\textit {newspaper}$ ( $x$ ) all indicate that $x$ has type $\\textsc {newspaper}$ in Freebase, and the top features for $\\textit {newspaper\\_N/N}$ ( $x$ , $y$ ) indicate that $y$ is a newspaper, and that $x$ is either the circulation area of $y$ or the language of $x$0 .",
"The model also performs well on queries with partial Freebase correlates, such as “Microsoft head honcho ”, “The United States, 's closest ally”, and “Patriots linebacker ,” although with somewhat lower average precision. The high weight features in these cases tend to provide useful hints, even though there is no direct correlate; for example, the model learns that “honchos” are people, and that they tend to be CEOs and film producers.",
"There are also some areas where our model can be improved. First, in some cases, the edge sequence features used by the model are not expressive enough to identify the correct relation in Freebase. An example of this problem is the “linebacker” example above, where the features for $\\textit {linebacker\\_N/N}$ can capture which athletes play for which teams, but not the positions of those athletes. Second, our model can under-perform on predicates with no close mapping to Freebase. An example where this problem occurs is the query “ is a NASA mission.” Third, there remains room to further improve the logical forms produced by the semantic parser, specifically for multi-word expressions. One problem occurs with multi-word noun modifiers, e.g., “Vice president Al Gore” is mapped to $\\textit {vice}(\\textsc {Al Gore}) \\wedge \\textit {president}(\\textsc {Al Gore})$ . Another problem is that there is no back-off with multi-word relations. For example, the predicate $\\textit {head\\_honcho\\_N/N}$ was never seen in the training data, so it is replaced with $\\textit {unknown}$ ; however, it would be better to replace it with $\\textit {honcho\\_N/N}$ , which was seen in the training data. Finally, although using connected entities in Freebase as additional candidates during inference is helpful, it often over- or under-generates candidates. A more tailored, per-query search process could improve performance."
],
[
"There is an extensive literature on building semantic parsers to answer questions against a KB BIBREF1 , BIBREF3 , BIBREF13 , BIBREF14 . Some of this work has used surface (or ungrounded) logical forms as an intermediate representation, similar to our work BIBREF15 , BIBREF16 , BIBREF8 , BIBREF17 . The main difference between our work and these techniques is that they map surface logical forms to a single executable Freebase query, while we learn execution models for the surface logical forms directly, using a weighted combination of Freebase queries as part of the model. None of these prior works can assign meaning to language that is not directly representable in the KB schema.",
"Choi, Kwiatkowski and Zettlemoyer choi-2015-semantic-parsing-partial-ontologies presented an information extraction system that performs a semantic parse of open-domain text, recognizing when a predicate cannot be mapped to Freebase. However, while they recognize when a predicate is not mappable to Freebase, they do not attempt to learn execution models for those predicates, nor can they answer questions using those predicates.",
"Yao and Van Durme yao-2014-info-extraction-freebase-qa and Dong et al. dong-2015-freebase-qa-mccnn proposed question answering models that use similar features to those used in this work. However, they did not produce semantic parses of language, instead using methods that are non-compositional and do not permit complex queries.",
"Finally, learning probabilistic databases in an open vocabulary semantic parser has a strong connection with KB completion. In addition to SFE BIBREF6 , our work draws on work on embedding the entities and relations in a KB BIBREF12 , BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 , as well as work on graph-based methods for reasoning with KBs BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 . Our combination of embedding methods with graph-based methods in this paper is suggestive of how one could combine the two in methods for KB completion. Initial work exploring this direction has already been done by Toutanova and Chen toutanova-2015-observed-vs-latent-kbc."
],
[
"Prior work in semantic parsing has either leveraged large knowledge bases to answer questions, or used distributional techniques to gain broad coverage over all of natural language. In this paper, we have shown how to gain both of these benefits by converting the queries generated by traditional semantic parsers into features which are then used in open vocabulary semantic parsing models. We presented a technique to do this conversion in a way that is scalable using graph-based feature extraction methods. Our combined model achieved relative gains of over 50% in mean average precision and mean reciprocal rank versus a purely distributional approach. We also introduced a better mapping from surface text to logical forms, and a simple method for using a KB to find candidate entities during inference. Taken together, the methods introduced in this paper improved mean average precision on our task from .163 to .370, a 127% relative improvement over prior work.",
"This work suggests a new direction for semantic parsing research. Existing semantic parsers map language to a single KB query, an approach that successfully leverages a KB's predicate instances, but is fundamentally limited by its schema. In contrast, our approach maps language to a weighted combination of queries plus a distributional component; this approach is capable of representing a much broader class of concepts while still using the KB when it is helpful. Furthermore, it is capable of using the KB even when the meaning of the language cannot be exactly represented by a KB predicate, which is a common occurrence. We believe that this kind of approach could significantly expand the applicability of semantic parsing techniques to more complex domains where the assumptions of traditional techniques are too limiting. We are actively exploring applying these techniques to science question answering BIBREF26 , for example, where existing KBs provide only partial coverage of the questions."
]
],
"section_name": [
"Introduction",
"Open vocabulary semantic parsing",
"Converting Freebase queries to features",
"Subgraph feature extraction",
"Feature selection",
"Combined predicate models",
"Making full use of KB information",
"Logical form generation",
"Candidate entity generation",
"Evaluation",
"Data",
"Models",
"Methodology",
"Results",
"Discussion",
"Related work",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"8a4265fcb69262d4160fec857fca628364af4e2a"
],
"answer": [
{
"evidence": [
"Much recent work on semantic parsing has been evaluated using the WebQuestions dataset BIBREF3 . This dataset is not suitable for evaluating our model because it was filtered to only questions that are mappable to Freebase queries. In contrast, our focus is on language that is not directly mappable to Freebase. We thus use the dataset introduced by Krishnamurthy and Mitchell krishnamurthy-2015-semparse-open-vocabulary, which consists of the ClueWeb09 web corpus along with Google's FACC entity linking of that corpus to Freebase BIBREF9 . For training data, 3 million webpages from this corpus were processed with a CCG parser to produce logical forms BIBREF10 . This produced 2.1m predicate instances involving 142k entity pairs and 184k entities. After removing infrequently-seen predicates (seen fewer than 6 times), there were 25k categories and 4.2k relations."
],
"extractive_spans": [
"Freebase"
],
"free_form_answer": "",
"highlighted_evidence": [
"We thus use the dataset introduced by Krishnamurthy and Mitchell krishnamurthy-2015-semparse-open-vocabulary, which consists of the ClueWeb09 web corpus along with Google's FACC entity linking of that corpus to Freebase BIBREF9 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"4857c606a55a83454e8d81ffe17e05cf8bc4b75f"
]
},
{
"annotation_id": [
"57d518e5064cc27b817b22c6e065271b247e04a6"
],
"answer": [
{
"evidence": [
"Much recent work on semantic parsing has been evaluated using the WebQuestions dataset BIBREF3 . This dataset is not suitable for evaluating our model because it was filtered to only questions that are mappable to Freebase queries. In contrast, our focus is on language that is not directly mappable to Freebase. We thus use the dataset introduced by Krishnamurthy and Mitchell krishnamurthy-2015-semparse-open-vocabulary, which consists of the ClueWeb09 web corpus along with Google's FACC entity linking of that corpus to Freebase BIBREF9 . For training data, 3 million webpages from this corpus were processed with a CCG parser to produce logical forms BIBREF10 . This produced 2.1m predicate instances involving 142k entity pairs and 184k entities. After removing infrequently-seen predicates (seen fewer than 6 times), there were 25k categories and 4.2k relations.",
"We also used the test set created by Krishnamurthy and Mitchell, which contains 220 queries generated in the same fashion as the training data from a separate section of ClueWeb. However, as they did not release a development set with their data, we used this set as a development set. For a final evaluation, we generated another, similar test set from a different held out section of ClueWeb, in the same fashion as done by Krishnamurthy and Mitchell. This final test set contains 307 queries."
],
"extractive_spans": [],
"free_form_answer": "3 million webpages processed with a CCG parser for training, 220 queries for development, and 307 queries for testing",
"highlighted_evidence": [
"For training data, 3 million webpages from this corpus were processed with a CCG parser to produce logical forms BIBREF10 .",
"We also used the test set created by Krishnamurthy and Mitchell, which contains 220 queries generated in the same fashion as the training data from a separate section of ClueWeb. However, as they did not release a development set with their data, we used this set as a development set.",
"This final test set contains 307 queries."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"4857c606a55a83454e8d81ffe17e05cf8bc4b75f"
]
},
{
"annotation_id": [
"8799cdf8b20f025429f0112bf85892e3e8d79af2"
],
"answer": [
{
"evidence": [
"We demonstrate our approach on the task of answering open-domain fill-in-the-blank natural language questions. By giving open vocabulary semantic parsers direct access to KB information, we improve mean average precision on this task by over 120%."
],
"extractive_spans": [],
"free_form_answer": "Fill-in-the-blank natural language questions",
"highlighted_evidence": [
"We demonstrate our approach on the task of answering open-domain fill-in-the-blank natural language questions."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5d0eb97e8e840e171f73b7642c2c89dd3984157b"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"What knowledge base do they use?",
"How big is their dataset?",
"What task do they evaluate on?"
],
"question_id": [
"33957fde72f9082a5c11844e7c47c58f8029c4ae",
"1c4cd22d6eaefffd47b93c2124f6779a06d2d9e1",
"2122bd05c03dde098aa17e36773e1ac7b6011969"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"semantic parsing",
"semantic parsing",
"semantic parsing"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Overview of the components of our model. Given an input text, we use a CCG parser and an entity linker to produce a logical form with predicates derived from the text (shown in italics). For each predicate, we learn a distributional vector θ, as well as weights ω associated with a set of selected Freebase queries. For each entity and entity pair, we learn a distributional vector φ, and we extract a binary feature vector ψ from Freebase, indicating whether each entity or entity pair is in the set returned by the selected Freebase queries. These models are combined to assign probabilities to candidate entities.",
"Figure 2: A subset of the Freebase graph, and some example extracted features. The actual Freebase relations and entity identifiers used are modified here to aid readability.",
"Table 1: Improvement in mean average precision when using our logical forms on the development set.",
"Table 2: Improvement to the distributional model when using our candidate entity generation.",
"Table 3: Development set results for our fill-in-the-blank task. The combined model significantly improves MAP over prior work.",
"Table 4: Final test results set for our fill-in-the-blank task. The combined model improves over prior work by 50–87% on our metrics. These improvements over the baseline are after the baseline has been improved by the methods developed in this paper, shown in Table 1 and Table 2. The cumulative effect of the methods presented in this work is an improvement of over 120% in MAP."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"5-Table1-1.png",
"5-Table2-1.png",
"5-Table3-1.png",
"5-Table4-1.png"
]
} | [
"How big is their dataset?",
"What task do they evaluate on?"
] | [
[
"1607.03542-Data-1",
"1607.03542-Data-0"
],
[
"1607.03542-Introduction-6"
]
] | [
"3 million webpages processed with a CCG parser for training, 220 queries for development, and 307 queries for testing",
"Fill-in-the-blank natural language questions"
] | 885 |
1712.02121 | A Novel Embedding Model for Knowledge Base Completion Based on Convolutional Neural Network | In this paper, we propose a novel embedding model, named ConvKB, for knowledge base completion. Our model ConvKB advances state-of-the-art models by employing a convolutional neural network, so that it can capture global relationships and transitional characteristics between entities and relations in knowledge bases. In ConvKB, each triple (head entity, relation, tail entity) is represented as a 3-column matrix where each column vector represents a triple element. This 3-column matrix is then fed to a convolution layer where multiple filters are operated on the matrix to generate different feature maps. These feature maps are then concatenated into a single feature vector representing the input triple. The feature vector is multiplied with a weight vector via a dot product to return a score. This score is then used to predict whether the triple is valid or not. Experiments show that ConvKB achieves better link prediction performance than previous state-of-the-art embedding models on two benchmark datasets WN18RR and FB15k-237. | {
"paragraphs": [
[
"Large-scale knowledge bases (KBs), such as YAGO BIBREF0 , Freebase BIBREF1 and DBpedia BIBREF2 , are usually databases of triples representing the relationships between entities in the form of fact (head entity, relation, tail entity) denoted as (h, r, t), e.g., (Melbourne, cityOf, Australia). These KBs are useful resources in many applications such as semantic searching and ranking BIBREF3 , BIBREF4 , BIBREF5 , question answering BIBREF6 , BIBREF7 and machine reading BIBREF8 . However, the KBs are still incomplete, i.e., missing a lot of valid triples BIBREF9 , BIBREF10 . Therefore, much research work has been devoted towards knowledge base completion or link prediction to predict whether a triple (h, r, t) is valid or not BIBREF11 .",
"Many embedding models have proposed to learn vector or matrix representations for entities and relations, obtaining state-of-the-art (SOTA) link prediction results BIBREF12 . In these embedding models, valid triples obtain lower implausibility scores than invalid triples. Let us take the well-known embedding model TransE BIBREF13 as an example. In TransE, entities and relations are represented by $k$ -dimensional vector embeddings. TransE employs a transitional characteristic to model relationships between entities, in which it assumes that if (h, r, t) is a valid fact, the embedding of head entity $h$ plus the embedding of relation $r$ should be close to the embedding of tail entity $t$ , i.e. $v_h$ + $v_r$ $\\approx $ $v_t$ (here, $v_h$ , $v_r$ and $h$0 are embeddings of $h$1 , $h$2 and $h$3 respectively). That is, a TransE score $h$4 of the valid triple (h, r, t) should be close to 0 and smaller than a score $h$5 of an invalid triple (h', r', t'). The transitional characteristic in TransE also implies the global relationships among same dimensional entries of $h$6 , $h$7 and $h$8 .",
"Other transition-based models extend TransE to additionally use projection vectors or matrices to translate head and tail embeddings into the relation vector space, such as: TransH BIBREF14 , TransR BIBREF15 , TransD BIBREF16 , STransE BIBREF17 and TranSparse BIBREF18 . Furthermore, DISTMULT BIBREF19 and ComplEx BIBREF20 use a tri-linear dot product to compute the score for each triple. Recent research has shown that using relation paths between entities in the KBs could help to get contextual information for improving KB completion performance BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 . See other embedding models for KB completion in BIBREF26 .",
"Recently, convolutional neural networks (CNNs), originally designed for computer vision BIBREF27 , have significantly received research attention in natural language processing BIBREF28 , BIBREF29 . CNN learns non-linear features to capture complex relationships with a remarkably less number of parameters compared to fully connected neural networks. Inspired from the success in computer vision, BIBREF30 proposed ConvE—the first model applying CNN for the KB completion task. In ConvE, only $v_h$ and $v_r$ are reshaped and then concatenated into an input matrix which is fed to the convolution layer. Different filters of the same $3\\times 3$ shape are operated over the input matrix to output feature map tensors. These feature map tensors are then vectorized and mapped into a vector via a linear transformation. Then this vector is computed with $v_t$ via a dot product to return a score for (h, r, t). See a formal definition of the ConvE score function in Table 1 . It is worth noting that ConvE focuses on the local relationships among different dimensional entries in each of $v_h$ or $v_r$ , i.e., ConvE does not observe the global relationships among same dimensional entries of an embedding triple ( $v_h$ , $v_r$ , $v_t$ ), so that ConvE ignores the transitional characteristic in transition-based models, which is one of the most useful intuitions for the task.",
"In this paper, we present ConvKB—an embedding model which proposes a novel use of CNN for the KB completion task. In ConvKB, each entity or relation is associated with an unique $k$ -dimensional embedding. Let $v_h$ , $v_r$ and $v_t$ denote $k$ -dimensional embeddings of $h$ , $r$ and $t$ , respectively. For each triple (h, r, t), the corresponding triple of $k$ -dimensional embeddings ( $v_h$ , $v_h$0 , $v_h$1 ) is represented as a $v_h$2 input matrix. This input matrix is fed to the convolution layer where different filters of the same $v_h$3 shape are used to extract the global relationships among same dimensional entries of the embedding triple. That is, these filters are repeatedly operated over every row of the input matrix to produce different feature maps. The feature maps are concatenated into a single feature vector which is then computed with a weight vector via a dot product to produce a score for the triple (h, r, t). This score is used to infer whether the triple (h, r, t) is valid or not.",
"Our contributions in this paper are as follows:"
],
[
"A knowledge base $\\mathcal {G}$ is a collection of valid factual triples in the form of (head entity, relation, tail entity) denoted as $(h, r, t)$ such that $h, t \\in \\mathcal {E}$ and $r \\in \\mathcal {R}$ where $\\mathcal {E}$ is a set of entities and $\\mathcal {R}$ is a set of relations. Embedding models aim to define a score function $f$ giving an implausibility score for each triple $(h, r, t)$ such that valid triples receive lower scores than invalid triples. Table 1 presents score functions in previous SOTA models.",
"We denote the dimensionality of embeddings by $k$ such that each embedding triple ( $v_h$ , $v_r$ , $v_t$ ) are viewed as a matrix $A = [v_h,v_r,v_t] \\in \\mathbb {R}^{k\\times 3}$ . And $A_{i,:} \\in \\mathbb {R}^{1\\times 3}$ denotes the $i$ -th row of $A$ . Suppose that we use a filter $\\omega \\in \\mathbb {R}^{1\\times 3}$ operated on the convolution layer. $\\omega $ is not only aimed to examine the global relationships between same dimensional entries of the embedding triple ( $v_h$0 , $v_h$1 , $v_h$2 ), but also to generalize the transitional characteristics in the transition-based models. $v_h$3 is repeatedly operated over every row of $v_h$4 to finally generate a feature map $v_h$5 as: ",
"$$v_i = g\\left(\\omega \\cdot {A_{i,:}} + b\\right) \\nonumber $$ (Eq. 4) ",
"where $b \\in \\mathbb {R}$ is a bias term and $g$ is some activation function such as ReLU.",
"Our ConvKB uses different filters $\\in \\mathbb {R}^{1\\times 3}$ to generate different feature maps. Let ${\\Omega }$ and $\\tau $ denote the set of filters and the number of filters, respectively, i.e. $\\tau = |{\\Omega }|$ , resulting in $\\tau $ feature maps. These $\\tau $ feature maps are concatenated into a single vector $\\in \\mathbb {R}^{\\tau k\\times 1}$ which is then computed with a weight vector ${w} \\in \\mathbb {R}^{\\tau k\\times 1}$ via a dot product to give a score for the triple $(h, r, t)$ . Figure 1 illustrates the computation process in ConvKB.",
"Formally, we define the ConvKB score function $f$ as follows: ",
"$$f(h,r,t) = \\mathsf {concat}\\left(g\\left([v_h,v_r,v_t]\\ast {\\Omega }\\right)\\right)\\cdot {w} \\nonumber $$ (Eq. 6) ",
"where ${\\Omega }$ and ${w}$ are shared parameters, independent of $h$ , $r$ and $t$ ; $\\ast $ denotes a convolution operator; and $\\mathsf {concat}$ denotes a concatenation operator.",
"If we only use one filter $\\omega $ (i.e. using $\\tau =1$ ) with a fixed bias term $b=0$ and the activation function $g(x)=|x|$ or $g(x)=x^2$ , and fix $\\omega = [1, 1, -1]$ and ${w} = \\textbf {1}$ during training, ConvKB reduces to the plain TransE model BIBREF13 . So our ConvKB model can be viewed as an extension of TransE to further model global relationships.",
"We use the Adam optimizer BIBREF32 to train ConvKB by minimizing the loss function $\\mathcal {L}$ BIBREF20 with $L_2$ regularization on the weight vector ${w}$ of the model: ",
"$$\\text{in which, } l_{(h,r,t)} = \\left\\lbrace \n\\begin{array}{l}\n1\\;\\text{for } (h,r,t)\\in \\mathcal {G}\\\\\n-1\\;\\text{for } (h,r,t)\\in \\mathcal {G}^{\\prime }\n\\end{array} \\right.$$ (Eq. ) $\n\\mathcal {L} & = \\sum _{\\begin{array}{c}(h,r,t) \\in \\lbrace \\mathcal {G} \\cup \\mathcal {G}^{\\prime }\\rbrace \\end{array}} \\log \\left(1 + \\exp \\left(l_{(h,r,t)} \\cdot f\\left(h,r,t\\right)\\right)\\right) \\nonumber \\\\\n& \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ + \\frac{\\lambda }{2}\\Vert {w}\\Vert ^2_2 \\nonumber \n$ ",
"here $\\mathcal {G}^{\\prime }$ is a collection of invalid triples generated by corrupting valid triples in $\\mathcal {G}$ ."
],
[
"We evaluate ConvKB on two benchmark datasets: WN18RR BIBREF30 and FB15k-237 BIBREF31 . WN18RR and FB15k-237 are correspondingly subsets of two common datasets WN18 and FB15k BIBREF13 . As noted by BIBREF31 , WN18 and FB15k are easy because they contain many reversible relations. So knowing relations are reversible allows us to easily predict the majority of test triples, e.g. state-of-the-art results on both WN18 and FB15k are obtained by using a simple reversal rule as shown in BIBREF30 . Therefore, WN18RR and FB15k-237 are created to not suffer from this reversible relation problem in WN18 and FB15k, for which the knowledge base completion task is more realistic. Table 2 presents the statistics of WN18RR and FB15k-237."
],
[
"In the KB completion or link prediction task BIBREF13 , the purpose is to predict a missing entity given a relation and another entity, i.e, inferring $h$ given $(r, t)$ or inferring $t$ given $(h, r)$ . The results are calculated based on ranking the scores produced by the score function $f$ on test triples.",
"Following BIBREF13 , for each valid test triple $(h, r, t)$ , we replace either $h$ or $t$ by each of other entities in $\\mathcal {E}$ to create a set of corrupted triples. We use the “Filtered” setting protocol BIBREF13 , i.e., not taking any corrupted triples that appear in the KB into accounts. We rank the valid test triple and corrupted triples in ascending order of their scores. We employ three common evaluation metrics: mean rank (MR), mean reciprocal rank (MRR), and Hits@10 (i.e., the proportion of the valid test triples ranking in top 10 predictions). Lower MR, higher MRR or higher Hits@10 indicate better performance."
],
[
"We use the common Bernoulli trick BIBREF14 , BIBREF15 to generate the head or tail entities when sampling invalid triples. We also use entity and relation embeddings produced by TransE to initialize entity and relation embeddings in ConvKB. We employ a TransE implementation available at: https://github.com/datquocnguyen/STransE. We train TransE for 3,000 epochs, using a grid search of hyper-parameters: the dimensionality of embeddings $k \\in \\lbrace 50, 100\\rbrace $ , SGD learning rate $\\in \\lbrace 1e^{-4}, 5e^{-4}, 1e^{-3}, 5e^{-3}\\rbrace $ , $\\mathit {l}_1$ -norm or $\\mathit {l}_2$ -norm, and margin $\\gamma \\in \\lbrace 1, 3, 5, 7\\rbrace $ . The highest Hits@10 scores on the validation set are when using $\\mathit {l}_1$ -norm, learning rate at $5e^{-4}$ , $\\gamma $ = 5 and $k$ = 50 for WN18RR, and using $\\mathit {l}_1$ -norm, learning rate at $\\in \\lbrace 1e^{-4}, 5e^{-4}, 1e^{-3}, 5e^{-3}\\rbrace $0 , $\\in \\lbrace 1e^{-4}, 5e^{-4}, 1e^{-3}, 5e^{-3}\\rbrace $1 = 1 and k = 100 for FB15k-237.",
"To learn our model parameters including entity and relation embeddings, filters $\\omega $ and the weight vector ${w}$ , we use Adam BIBREF32 and select its initial learning rate $\\in \\lbrace 5e^{-6}, 1e^{-5}, 5e^{-5}, 1e^{-4}, 5e^{-4}\\rbrace $ . We use ReLU as the activation function $g$ . We fix the batch size at 256 and set the $L_2$ -regularizer $\\lambda $ at 0.001 in our objective function. The filters $\\omega $ are initialized by a truncated normal distribution or by $[0.1, 0.1, -0.1]$ . We select the number of filters $\\tau \\in \\lbrace 50, 100, 200, 400, 500\\rbrace $ . We run ConvKB up to 200 epochs and use outputs from the last epoch for evaluation. The highest Hits@10 scores on the validation set are obtained when using $k$ = 50, ${w}$0 , the truncated normal distribution for filter initialization, and the initial learning rate at ${w}$1 on WN18RR; and k = 100, ${w}$2 , ${w}$3 for filter initialization, and the initial learning rate at ${w}$4 on FB15k-237."
],
[
"Table 3 compares the experimental results of our ConvKB model with previous published results, using the same experimental setup. Table 3 shows that ConvKB obtains the best MR and highest Hits@10 scores on WN18RR and also the highest MRR and Hits@10 scores on FB15k-237.",
"ConvKB does better than the closely related model TransE on both experimental datasets, especially on FB15k-237 where ConvKB gains significant improvements of $347-257 = 90$ in MR (which is about 26% relative improvement) and $0.396 - 0.294 = 0.102$ in MRR (which is 34+% relative improvement), and also obtains $51.7 - 46.5 = 5.2$ % absolute improvement in Hits@10. Previous work shows that TransE obtains very competitive results BIBREF21 , BIBREF38 , BIBREF20 , BIBREF25 . However, when comparing the CNN-based embedding model ConvE with other models, BIBREF30 did not experiment with TransE. We reconfirm previous findings that TransE in fact is a strong baseline model, e.g., TransE obtains better MR and Hits@10 than ConvE on WN18RR.",
"ConvKB obtains better scores than ConvE on both datasets (except MRR on WN18RR and MR on FB15k-237), thus showing the usefulness of taking transitional characteristics into accounts. In particular, on FB15k-237, ConvKB achieves improvements of $0.394-0.316 = 0.078$ in MRR (which is about 25% relative improvement) and $51.7 - 49.1 = 2.6$ % in Hits@10, while both ConvKB and ConvE produce similar MR scores. ConvKB also obtains 25% relatively higher MRR score than the relation path-based model KB $_{LRN}$ on FB15k-237. In addition, ConvKB gives better Hits@10 than KB $_{LRN}$ , however, KB $_{LRN}$ gives better MR than ConvKB. We plan to extend ConvKB with relation path information to obtain better link prediction performance in future work."
],
[
"In this paper, we propose a novel embedding model ConvKB for the knowledge base completion task. ConvKB applies the convolutional neural network to explore the global relationships among same dimensional entries of the entity and relation embeddings, so that ConvKB generalizes the transitional characteristics in the transition-based embedding models. Experimental results show that our model ConvKB outperforms other state-of-the-art models on two benchmark datasets WN18RR and FB15k-237. Our code is available at: https://github.com/daiquocnguyen/ConvKB.",
"We also plan to extend ConvKB for a new application where we could formulate data in the form of triples. For example, inspired from the work by BIBREF39 for search personalization, we can also apply ConvKB to model user-oriented relationships between submitted queries and documents returned by search engines, i.e. modeling triple representations (query, user, document)."
],
[
"This research was partially supported by the Australian Research Council (ARC) Discovery Grant Project DP160103934."
]
],
"section_name": [
"Introduction",
"Proposed ConvKB model",
"Datasets",
"Evaluation protocol",
"Training protocol",
"Main experimental results",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"b82a3b1c90e24f366e8fba31218480fb7394f394"
],
"answer": [
{
"evidence": [
"Recently, convolutional neural networks (CNNs), originally designed for computer vision BIBREF27 , have significantly received research attention in natural language processing BIBREF28 , BIBREF29 . CNN learns non-linear features to capture complex relationships with a remarkably less number of parameters compared to fully connected neural networks. Inspired from the success in computer vision, BIBREF30 proposed ConvE—the first model applying CNN for the KB completion task. In ConvE, only $v_h$ and $v_r$ are reshaped and then concatenated into an input matrix which is fed to the convolution layer. Different filters of the same $3\\times 3$ shape are operated over the input matrix to output feature map tensors. These feature map tensors are then vectorized and mapped into a vector via a linear transformation. Then this vector is computed with $v_t$ via a dot product to return a score for (h, r, t). See a formal definition of the ConvE score function in Table 1 . It is worth noting that ConvE focuses on the local relationships among different dimensional entries in each of $v_h$ or $v_r$ , i.e., ConvE does not observe the global relationships among same dimensional entries of an embedding triple ( $v_h$ , $v_r$ , $v_t$ ), so that ConvE ignores the transitional characteristic in transition-based models, which is one of the most useful intuitions for the task."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"In ConvE, only $v_h$ and $v_r$ are reshaped and then concatenated into an input matrix which is fed to the convolution layer"
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b"
]
},
{
"annotation_id": [
"9179c93cf7c33404d8ff128154c968c7f520b70e"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Figure 1: Process involved in ConvKB (with the embedding size k = 4, the number of filters τ = 3 and the activation function g = ReLU for illustration purpose).",
"Our ConvKB uses different filters $\\in \\mathbb {R}^{1\\times 3}$ to generate different feature maps. Let ${\\Omega }$ and $\\tau $ denote the set of filters and the number of filters, respectively, i.e. $\\tau = |{\\Omega }|$ , resulting in $\\tau $ feature maps. These $\\tau $ feature maps are concatenated into a single vector $\\in \\mathbb {R}^{\\tau k\\times 1}$ which is then computed with a weight vector ${w} \\in \\mathbb {R}^{\\tau k\\times 1}$ via a dot product to give a score for the triple $(h, r, t)$ . Figure 1 illustrates the computation process in ConvKB."
],
"extractive_spans": [],
"free_form_answer": "3 feature maps for a given tuple",
"highlighted_evidence": [
"FLOAT SELECTED: Figure 1: Process involved in ConvKB (with the embedding size k = 4, the number of filters τ = 3 and the activation function g = ReLU for illustration purpose).",
"Our ConvKB uses different filters $\\in \\mathbb {R}^{1\\times 3}$ to generate different feature maps. Let ${\\Omega }$ and $\\tau $ denote the set of filters and the number of filters, respectively, i.e. $\\tau = |{\\Omega }|$ , resulting in $\\tau $ feature maps. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b"
]
},
{
"annotation_id": [
"d1d09549634542449a7be4189a4d831396c3ff64"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"yes",
"yes",
"yes"
],
"question": [
"Did the authors try stacking multiple convolutional layers?",
"How many feature maps are generated for a given triple?",
"How does the number of parameters compare to other knowledge base completion models?"
],
"question_id": [
"480e10e5a1b9c0ae9f7763b7611eeae9e925096b",
"056fc821d1ec1e8ca5dc958d14ea389857b1a299",
"974868e4e22f14766bcc76dc4927a7f2795dcd5e"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"link prediction",
"link prediction",
"link prediction"
],
"topic_background": [
"research",
"research",
"research"
]
} | {
"caption": [
"Table 1: The score functions in previous SOTA models and in our ConvKB model. ‖v‖p denotes the p-norm of v. 〈vh,vr,vt〉 = ∑ i vhivrivti denotes a tri-linear dot product. g denotes a non-linear function. ∗ denotes a convolution operator. · denotes a dot product. concat denotes a concatenation operator. v̂ denotes a 2D reshaping of v. Ω denotes a set of filters.",
"Table 2: Statistics of the experimental datasets.",
"Figure 1: Process involved in ConvKB (with the embedding size k = 4, the number of filters τ = 3 and the activation function g = ReLU for illustration purpose).",
"Table 3: Experimental results on WN18RR and FB15k-237 test sets. MRR and H@10 denote the mean reciprocal rank and Hits@10 (in %), respectively. [?]: Results are taken from Dettmers et al. (2018) where Hits@10 and MRR are rounded to 2 decimal places on WN18RR. The last 4 rows report results of models that exploit information about relation paths (KBLRN , R-GCN+ and Neural LP) or textual mentions derived from a large external corpus (Node+LinkFeat). The best score is in bold, while the second best score is in underline."
],
"file": [
"2-Table1-1.png",
"3-Table2-1.png",
"3-Figure1-1.png",
"4-Table3-1.png"
]
} | [
"How many feature maps are generated for a given triple?"
] | [
[
"1712.02121-3-Figure1-1.png",
"1712.02121-Proposed ConvKB model-4"
]
] | [
"3 feature maps for a given tuple"
] | 887 |
1912.01214 | Cross-lingual Pre-training Based Transfer for Zero-shot Neural Machine Translation | Transfer learning between different language pairs has shown its effectiveness for Neural Machine Translation (NMT) in low-resource scenarios. However, existing transfer methods involving a common target language are far from success in the extreme scenario of zero-shot translation, due to the language space mismatch problem between transferor (the parent model) and transferee (the child model) on the source side. To address this challenge, we propose an effective transfer learning approach based on cross-lingual pre-training. Our key idea is to make all source languages share the same feature space and thus enable a smooth transition for zero-shot translation. To this end, we introduce one monolingual pre-training method and two bilingual pre-training methods to obtain a universal encoder for different languages. Once the universal encoder is constructed, the parent model built on such an encoder is trained with large-scale annotated data and then directly applied in the zero-shot translation scenario. Experiments on two public datasets show that our approach significantly outperforms a strong pivot-based baseline and various multilingual NMT approaches. | {
"paragraphs": [
[
"Although Neural Machine Translation (NMT) has dominated recent research on translation tasks BIBREF0, BIBREF1, BIBREF2, NMT heavily relies on large-scale parallel data, resulting in poor performance on low-resource or zero-resource language pairs BIBREF3. Translation between these low-resource languages (e.g., Arabic$\\rightarrow $Spanish) is usually accomplished with pivoting through a rich-resource language (such as English), i.e., Arabic (source) sentence is translated to English (pivot) first which is later translated to Spanish (target) BIBREF4, BIBREF5. However, the pivot-based method requires doubled decoding time and suffers from the propagation of translation errors.",
"One common alternative to avoid pivoting in NMT is transfer learning BIBREF6, BIBREF7, BIBREF8, BIBREF9 which leverages a high-resource pivot$\\rightarrow $target model (parent) to initialize a low-resource source$\\rightarrow $target model (child) that is further optimized with a small amount of available parallel data. Although this approach has achieved success in some low-resource language pairs, it still performs very poorly in extremely low-resource or zero-resource translation scenario. Specifically, BIBREF8 reports that without any child model training data, the performance of the parent model on the child test set is miserable.",
"In this work, we argue that the language space mismatch problem, also named domain shift problem BIBREF10, brings about the zero-shot translation failure in transfer learning. It is because transfer learning has no explicit training process to guarantee that the source and pivot languages share the same feature distributions, causing that the child model inherited from the parent model fails in such a situation. For instance, as illustrated in the left of Figure FIGREF1, the points of the sentence pair with the same semantics are not overlapping in source space, resulting in that the shared decoder will generate different translations denoted by different points in target space. Actually, transfer learning for NMT can be viewed as a multi-domain problem where each source language forms a new domain. Minimizing the discrepancy between the feature distributions of different source languages, i.e., different domains, will ensure the smooth transition between the parent and child models, as shown in the right of Figure FIGREF1. One way to achieve this goal is the fine-tuning technique, which forces the model to forget the specific knowledge from parent data and learn new features from child data. However, the domain shift problem still exists, and the demand of parallel child data for fine-tuning heavily hinders transfer learning for NMT towards the zero-resource setting.",
"In this paper, we explore the transfer learning in a common zero-shot scenario where there are a lot of source$\\leftrightarrow $pivot and pivot$\\leftrightarrow $target parallel data but no source$\\leftrightarrow $target parallel data. In this scenario, we propose a simple but effective transfer approach, the key idea of which is to relieve the burden of the domain shift problem by means of cross-lingual pre-training. To this end, we firstly investigate the performance of two existing cross-lingual pre-training methods proposed by BIBREF11 in zero-shot translation scenario. Besides, a novel pre-training method called BRidge Language Modeling (BRLM) is designed to make full use of the source$\\leftrightarrow $pivot bilingual data to obtain a universal encoder for different languages. Once the universal encoder is constructed, we only need to train the pivot$\\rightarrow $target model and then test this model in source$\\rightarrow $target direction directly. The main contributions of this paper are as follows:",
"We propose a new transfer learning approach for NMT which uses the cross-lingual language model pre-training to enable a high performance on zero-shot translation.",
"We propose a novel pre-training method called BRLM, which can effectively alleviates the distance between different source language spaces.",
"Our proposed approach significantly improves zero-shot translation performance, consistently surpassing pivoting and multilingual approaches. Meanwhile, the performance on supervised translation direction remains the same level or even better when using our method."
],
[
"In recent years, zero-shot translation in NMT has attracted widespread attention in academic research. Existing methods are mainly divided into four categories: pivot-based method, transfer learning, multilingual NMT, and unsupervised NMT.",
"Pivot-based Method is a common strategy to obtain a source$\\rightarrow $target model by introducing a pivot language. This approach is further divided into pivoting and pivot-synthetic. While the former firstly translates a source language into the pivot language which is later translated to the target language BIBREF4, BIBREF5, BIBREF12, the latter trains a source$\\rightarrow $target model with pseudo data generated from source-pivot or pivot-target parallel data BIBREF13, BIBREF14. Although the pivot-based methods can achieve not bad performance, it always falls into a computation-expensive and parameter-vast dilemma of quadratic growth in the number of source languages, and suffers from the error propagation problem BIBREF15.",
"Transfer Learning is firstly introduced for NMT by BIBREF6, which leverages a high-resource parent model to initialize the low-resource child model. On this basis, BIBREF7 and BIBREF8 use shared vocabularies for source/target language to improve transfer learning, while BIBREF16 relieve the vocabulary mismatch by mainly using cross-lingual word embedding. Although these methods are successful in the low-resource scene, they have limited effects in zero-shot translation.",
"Multilingual NMT (MNMT) enables training a single model that supports translation from multiple source languages into multiple target languages, even those unseen language pairs BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21. Aside from simpler deployment, MNMT benefits from transfer learning where low-resource language pairs are trained together with high-resource ones. However, BIBREF22 point out that MNMT for zero-shot translation easily fails, and is sensitive to the hyper-parameter setting. Also, MNMT usually performs worse than the pivot-based method in zero-shot translation setting BIBREF23.",
"Unsupervised NMT (UNMT) considers a harder setting, in which only large-scale monolingual corpora are available for training. Recently, many methods have been proposed to improve the performance of UNMT, including using denoising auto-encoder, statistic machine translation (SMT) and unsupervised pre-training BIBREF24, BIBREF25, BIBREF26, BIBREF11. Since UNMT performs well between similar languages (e.g., English-German translation), its performance between distant languages is still far from expectation.",
"Our proposed method belongs to the transfer learning, but it is different from traditional transfer methods which train a parent model as starting point. Before training a parent model, our approach fully leverages cross-lingual pre-training methods to make all source languages share the same feature space and thus enables a smooth transition for zero-shot translation."
],
[
"In this section, we will present a cross-lingual pre-training based transfer approach. This method is designed for a common zero-shot scenario where there are a lot of source$\\leftrightarrow $pivot and pivot$\\leftrightarrow $target bilingual data but no source$\\leftrightarrow $target parallel data, and the whole training process can be summarized as follows step by step:",
"Pre-train a universal encoder with source/pivot monolingual or source$\\leftrightarrow $pivot bilingual data.",
"Train a pivot$\\rightarrow $target parent model built on the pre-trained universal encoder with the available parallel data. During the training process, we freeze several layers of the pre-trained universal encoder to avoid the degeneracy issue BIBREF27.",
"Directly translate source sentences into target sentences with the parent model, which benefits from the availability of the universal encoder.",
"The key difficulty of this method is to ensure the intermediate representations of the universal encoder are language invariant. In the rest of this section, we first present two existing methods yet to be explored in zero-shot translation, and then propose a straightforward but effective cross-lingual pre-training method. In the end, we present the whole training and inference protocol for transfer."
],
[
"Two existing cross-lingual pre-training methods, Masked Language Modeling (MLM) and Translation Language Modeling (TLM), have shown their effectiveness on XNLI cross-lingual classification task BIBREF11, BIBREF28, but these methods have not been well studied on cross-lingual generation tasks in zero-shot condition. We attempt to take advantage of the cross-lingual ability of the two methods for zero-shot translation.",
"Specifically, MLM adopts the Cloze objective of BERT BIBREF29 and predicts the masked words that are randomly selected and replaced with [MASK] token on monolingual corpus. In practice, MLM takes different language monolingual corpora as input to find features shared across different languages. With this method, word pieces shared in all languages have been mapped into a shared space, which makes the sentence representations across different languages close BIBREF30.",
"Since MLM objective is unsupervised and only requires monolingual data, TLM is designed to leverage parallel data when it is available. Actually, TLM is a simple extension of MLM, with the difference that TLM concatenates sentence pair into a whole sentence, and then randomly masks words in both the source and target sentences. In this way, the model can either attend to surrounding words or to the translation sentence, implicitly encouraging the model to align the source and target language representations. Note that although each sentence pair is formed into one sentence, the positions of the target sentence are reset to count form zero."
],
[
"Aside from MLM and TLM, we propose BRidge Language Modeling (BRLM) to further obtain word-level representation alignment between different languages. This method is inspired by the assumption that if the feature spaces of different languages are aligned very well, the masked words in the corrupted sentence can also be guessed by the context of the correspondingly aligned words on the other side. To achieve this goal, BRLM is designed to strengthen the ability to infer words across languages based on alignment information, instead of inferring words within monolingual sentence as in MLM or within the pseudo sentence formed by concatenating sentence pair as in TLM.",
"As illustrated in Figure FIGREF9, BRLM stacks shared encoder over both side sentences separately. In particular, we design two network structures for BRLM, which are divided into Hard Alignment (BRLM-HA) and Soft Alignment (BRLM-SA) according to the way of generating the alignment information. These two structures actually extend MLM into a bilingual scenario, with the difference that BRLM leverages external aligner tool or additional attention layer to explicitly introduce alignment information during model training.",
"Hard Alignment (BRLM-HA). We first use external aligner tool on source$\\leftrightarrow $pivot parallel data to extract the alignment information of sentence pair. During model training, given source$\\leftrightarrow $pivot sentence pair, BRLM-HA randomly masks some words in source sentence and leverages alignment information to obtain the aligned words in pivot sentence for masked words. Based on the processed input, BRLM-HA adopts the Transformer BIBREF1 encoder to gain the hidden states for source and pivot sentences respectively. Then the training objective of BRLM-HA is to predict the masked words by not only the surrounding words in source sentence but also the encoder outputs of the aligned words. Note that this training process is also carried out in a symmetric situation, in which we mask some words in pivot sentence and obtain the aligned words in the source sentence.",
"Soft Alignment (BRLM-SA). Instead of using external aligner tool, BRLM-SA introduces an additional attention layer to learn the alignment information together with model training. In this way, BRLM-SA avoids the effect caused by external wrong alignment information and enables many-to-one soft alignment during model training. Similar with BRLM-HA, the training objective of BRLM-SA is to predict the masked words by not only the surrounding words in source sentence but also the outputs of attention layer. In our implementation, the attention layer is a multi-head attention layer adopted in Transformer, where the queries come from the masked source sentence, the keys and values come from the pivot sentence.",
"In principle, MLM and TLM can learn some implicit alignment information during model training. However, the alignment process in MLM is inefficient since the shared word pieces only account for a small proportion of the whole corpus, resulting in the difficulty of expanding the shared information to align the whole corpus. TLM also lacks effort in alignment between the source and target sentences since TLM concatenates the sentence pair into one sequence, making the explicit alignment between the source and target infeasible. BRLM fully utilizes the alignment information to obtain better word-level representation alignment between different languages, which better relieves the burden of the domain shift problem."
],
[
"We consider the typical zero-shot translation scenario in which a high resource pivot language has parallel data with both source and target languages, while source and target languages has no parallel data between themselves. Our proposed cross-lingual pretraining based transfer approach for source$\\rightarrow $target zero-shot translation is mainly divided into two phrases: the pretraining phase and the transfer phase.",
"In the pretraining phase, we first pretrain MLM on monolingual corpora of both source and pivot languages, and continue to pretrain TLM or the proposed BRLM on the available parallel data between source and pivot languages, in order to build a cross-lingual encoder shared by the source and pivot languages.",
"In the transfer phase, we train pivot$\\rightarrow $target NMT model initialized by the cross-lingually pre-trained encoder, and finally transfer the trained NMT model to source$\\rightarrow $target translation thanks to the shared encoder. Note that during training pivot$\\rightarrow $target NMT model, we freeze several layers of the cross-lingually pre-trained encoder to avoid the degeneracy issue.",
"For the more complicated scenario that either the source side or the target side has multiple languages, the encoder and the decoder are also shared across each side languages for efficient deployment of translation between multiple languages."
],
[
"We evaluate our cross-lingual pre-training based transfer approach against several strong baselines on two public datatsets, Europarl BIBREF31 and MultiUN BIBREF32, which contain multi-parallel evaluation data to assess the zero-shot performance. In all experiments, we use BLEU as the automatic metric for translation evaluation."
],
[
"The statistics of Europarl and MultiUN corpora are summarized in Table TABREF18. For Europarl corpus, we evaluate on French-English-Spanish (Fr-En-Es), German-English-French (De-En-Fr) and Romanian-English-German (Ro-En-De), where English acts as the pivot language, its left side is the source language, and its right side is the target language. We remove the multi-parallel sentences between different training corpora to ensure zero-shot settings. We use the devtest2006 as the validation set and the test2006 as the test set for Fr$\\rightarrow $Es and De$\\rightarrow $Fr. For distant language pair Ro$\\rightarrow $De, we extract 1,000 overlapping sentences from newstest2016 as the test set and the 2,000 overlapping sentences split from the training set as the validation set since there is no official validation and test sets. For vocabulary, we use 60K sub-word tokens based on Byte Pair Encoding (BPE) BIBREF33.",
"For MultiUN corpus, we use four languages: English (En) is set as the pivot language, which has parallel data with other three languages which do not have parallel data between each other. The three languages are Arabic (Ar), Spanish (Es), and Russian (Ru), and mutual translation between themselves constitutes six zero-shot translation direction for evaluation. We use 80K BPE splits as the vocabulary. Note that all sentences are tokenized by the tokenize.perl script, and we lowercase all data to avoid a large vocabulary for the MultiUN corpus."
],
[
"We use traditional transfer learning, pivot-based method and multilingual NMT as our baselines. For the fair comparison, the Transformer-big model with 1024 embedding/hidden units, 4096 feed-forward filter size, 6 layers and 8 heads per layer is adopted for all translation models in our experiments. We set the batch size to 2400 per batch and limit sentence length to 100 BPE tokens. We set the $\\text{attn}\\_\\text{drop}=0$ (a dropout rate on each attention head), which is favorable to the zero-shot translation and has no effect on supervised translation directions BIBREF22. For the model initialization, we use Facebook's cross-lingual pretrained models released by XLM to initialize the encoder part, and the rest parameters are initialized with xavier uniform. We employ the Adam optimizer with $\\text{lr}=0.0001$, $t_{\\text{warm}\\_\\text{up}}=4000$ and $\\text{dropout}=0.1$. At decoding time, we generate greedily with length penalty $\\alpha =1.0$.",
"Regarding MLM, TLM and BRLM, as mentioned in the pre-training phase of transfer protocol, we first pre-train MLM on monolingual data of both source and pivot languages, then leverage the parameters of MLM to initialize TLM and the proposed BRLM, which are continued to be optimized with source-pivot bilingual data. In our experiments, we use MLM+TLM, MLM+BRLM to represent this training process. For the masking strategy during training, following BIBREF29, $15\\%$ of BPE tokens are selected to be masked. Among the selected tokens, $80\\%$ of them are replaced with [MASK] token, $10\\%$ are replaced with a random BPE token, and $10\\%$ unchanged. The prediction accuracy of masked words is used as a stopping criterion in the pre-training stage. Besides, we use fastalign tool BIBREF34 to extract word alignments for BRLM-HA."
],
[
"Table TABREF19 and TABREF26 report zero-shot results on Europarl and Multi-UN evaluation sets, respectively. We compare our approaches with related approaches of pivoting, multilingual NMT (MNMT) BIBREF19, and cross-lingual transfer without pretraining BIBREF16. The results show that our approaches consistently outperform other approaches across languages and datasets, especially surpass pivoting, which is a strong baseline in the zero-shot scenario that multilingual NMT systems often fail to beat BIBREF19, BIBREF20, BIBREF23. Pivoting translates source to pivot then to target in two steps, causing inefficient translation process. Our approaches use one encoder-decoder model to translate between any zero-shot directions, which is more efficient than pivoting. Regarding the comparison between transfer approaches, our cross-lingual pretraining based transfer outperforms transfer method that does not use pretraining by a large margin."
],
[
"Regarding comparison between the baselines in table TABREF19, we find that pivoting is the strongest baseline that has significant advantage over other two baselines. Cross-lingual transfer for languages without shared vocabularies BIBREF16 manifests the worst performance because of not using source$\\leftrightarrow $pivot parallel data, which is utilized as beneficial supervised signal for the other two baselines.",
"Our best approach of MLM+BRLM-SA achieves the significant superior performance to all baselines in the zero-shot directions, improving by 0.9-4.8 BLEU points over the strong pivoting. Meanwhile, in the supervised direction of pivot$\\rightarrow $target, our approaches performs even better than the original supervised Transformer thanks to the shared encoder trained on both large-scale monolingual data and parallel data between multiple languages.",
"MLM alone that does not use source$\\leftrightarrow $pivot parallel data performs much better than the cross-lingual transfer, and achieves comparable results to pivoting. When MLM is combined with TLM or the proposed BRLM, the performance is further improved. MLM+BRLM-SA performs the best, and is better than MLM+BRLM-HA indicating that soft alignment is helpful than hard alignment for the cross-lingual pretraining."
],
[
"Like experimental results on Europarl, MLM+BRLM-SA performs the best among all proposed cross-lingual pretraining based transfer approaches as shown in Table TABREF26. When comparing systems consisting of one encoder-decoder model for all zero-shot translation, our approaches performs significantly better than MNMT BIBREF19.",
"Although it is challenging for one model to translate all zero-shot directions between multiple distant language pairs of MultiUN, MLM+BRLM-SA still achieves better performances on Es $\\rightarrow $ Ar and Es $\\rightarrow $ Ru than strong pivoting$_{\\rm m}$, which uses MNMT to translate source to pivot then to target in two separate steps with each step receiving supervised signal of parallel corpora. Our approaches surpass pivoting$_{\\rm m}$ in all zero-shot directions by adding back translation BIBREF33 to generate pseudo parallel sentences for all zero-shot directions based on our pretrained models such as MLM+BRLM-SA, and further training our universal encoder-decoder model with these pseudo data. BIBREF22 gu2019improved introduces back translation into MNMT, while we adopt it in our transfer approaches. Finally, our best MLM+BRLM-SA with back translation outperforms pivoting$_{\\rm m}$ by 2.4 BLEU points averagely, and outperforms MNMT BIBREF22 by 4.6 BLEU points averagely. Again, in supervised translation directions, MLM+BRLM-SA with back translation also achieves better performance than the original supervised Transformer."
],
[
"We first evaluate the representational invariance across languages for all cross-lingual pre-training methods. Following BIBREF23, we adopt max-pooling operation to collect the sentence representation of each encoder layer for all source-pivot sentence pairs in the Europarl validation sets. Then we calculate the cosine similarity for each sentence pair and average all cosine scores. As shown in Figure FIGREF27, we can observe that, MLM+BRLM-SA has the most stable and similar cross-lingual representations of sentence pairs on all layers, while it achieves the best performance in zero-shot translation. This demonstrates that better cross-lingual representations can benefit for the process of transfer learning. Besides, MLM+BRLM-HA is not as superior as MLM+BRLM-SA and even worse than MLM+TLM on Fr-En, since MLM+BRLM-HA may suffer from the wrong alignment knowledge from an external aligner tool. We also find an interesting phenomenon that as the number of layers increases, the cosine similarity decreases."
],
[
"We further sample an English-Russian sentence pair from the MultiUN validation sets and visualize the cosine similarity between hidden states of the top encoder layer to further investigate the difference of all cross-lingual pre-training methods. As shown in Figure FIGREF38, the hidden states generated by MLM+BRLM-SA have higher similarity for two aligned words. It indicates that MLM+BRLM-SA can gain better word-level representation alignment between source and pivot languages, which better relieves the burden of the domain shift problem."
],
[
"To freeze parameters is a common strategy to avoid catastrophic forgetting in transfer learning BIBREF27. Table TABREF43 shows the performance of transfer learning with freezing different layers on MultiUN test set, in which En$\\rightarrow $Ru denotes the parent model, Ar$\\rightarrow $Ru and Es$\\rightarrow $Ru are two child models, and all models are based on MLM+BRLM-SA. We can find that updating all parameters during training will cause a notable drop on the zero-shot direction due to the catastrophic forgetting. On the contrary, freezing all the parameters leads to the decline on supervised direction because the language features extracted during pre-training is not sufficient for MT task. Freezing the first four layers of the transformer shows the best performance and keeps the balance between pre-training and fine-tuning."
],
[
"In this paper, we propose a cross-lingual pretraining based transfer approach for the challenging zero-shot translation task, in which source and target languages have no parallel data, while they both have parallel data with a high resource pivot language. With the aim of building the language invariant representation between source and pivot languages for smooth transfer of the parent model of pivot$\\rightarrow $target direction to the child model of source$\\rightarrow $target direction, we introduce one monolingual pretraining method and two bilingual pretraining methods to construct an universal encoder for the source and pivot languages. Experiments on public datasets show that our approaches significantly outperforms several strong baseline systems, and manifest the language invariance characteristics in both sentence level and word level neural representations."
],
[
"We would like to thank the anonymous reviewers for the helpful comments. This work was supported by National Key R&D Program of China (Grant No. 2016YFE0132100), National Natural Science Foundation of China (Grant No. 61525205, 61673289). This work was also partially supported by Alibaba Group through Alibaba Innovative Research Program and the Priority Academic Program Development (PAPD) of Jiangsu Higher Education Institutions."
]
],
"section_name": [
"Introduction",
"Related Work",
"Approach",
"Approach ::: Masked and Translation Language Model Pretraining",
"Approach ::: Bridge Language Model Pretraining",
"Approach ::: Transfer Protocol",
"Experiments ::: Setup",
"Experiments ::: Setup ::: Datasets.",
"Experiments ::: Setup ::: Experimental Details.",
"Experiments ::: Main Results",
"Experiments ::: Main Results ::: Results on Europarl Dataset.",
"Experiments ::: Main Results ::: Results on MultiUN Dataset.",
"Experiments ::: Analysis ::: Sentence Representation.",
"Experiments ::: Analysis ::: Contextualized Word Representation.",
"Experiments ::: Analysis ::: The Effect of Freezing Parameters.",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"819898a2daf67225307aaf59d8048987c06b0c03",
"bbc549d0d6a598c0a93d0dde01d8cc6ffe316adc"
],
"answer": [
{
"evidence": [
"Table TABREF19 and TABREF26 report zero-shot results on Europarl and Multi-UN evaluation sets, respectively. We compare our approaches with related approaches of pivoting, multilingual NMT (MNMT) BIBREF19, and cross-lingual transfer without pretraining BIBREF16. The results show that our approaches consistently outperform other approaches across languages and datasets, especially surpass pivoting, which is a strong baseline in the zero-shot scenario that multilingual NMT systems often fail to beat BIBREF19, BIBREF20, BIBREF23. Pivoting translates source to pivot then to target in two steps, causing inefficient translation process. Our approaches use one encoder-decoder model to translate between any zero-shot directions, which is more efficient than pivoting. Regarding the comparison between transfer approaches, our cross-lingual pretraining based transfer outperforms transfer method that does not use pretraining by a large margin."
],
"extractive_spans": [
"BIBREF19",
"BIBREF20"
],
"free_form_answer": "",
"highlighted_evidence": [
"We compare our approaches with related approaches of pivoting, multilingual NMT (MNMT) BIBREF19, and cross-lingual transfer without pretraining BIBREF16. ",
"The results show that our approaches consistently outperform other approaches across languages and datasets, especially surpass pivoting, which is a strong baseline in the zero-shot scenario that multilingual NMT systems often fail to beat BIBREF19, BIBREF20, BIBREF23."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Table TABREF19 and TABREF26 report zero-shot results on Europarl and Multi-UN evaluation sets, respectively. We compare our approaches with related approaches of pivoting, multilingual NMT (MNMT) BIBREF19, and cross-lingual transfer without pretraining BIBREF16. The results show that our approaches consistently outperform other approaches across languages and datasets, especially surpass pivoting, which is a strong baseline in the zero-shot scenario that multilingual NMT systems often fail to beat BIBREF19, BIBREF20, BIBREF23. Pivoting translates source to pivot then to target in two steps, causing inefficient translation process. Our approaches use one encoder-decoder model to translate between any zero-shot directions, which is more efficient than pivoting. Regarding the comparison between transfer approaches, our cross-lingual pretraining based transfer outperforms transfer method that does not use pretraining by a large margin."
],
"extractive_spans": [
"multilingual NMT (MNMT) BIBREF19"
],
"free_form_answer": "",
"highlighted_evidence": [
"We compare our approaches with related approaches of pivoting, multilingual NMT (MNMT) BIBREF19, and cross-lingual transfer without pretraining BIBREF16."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"291a918eae943cf62b5d5ad4a9b6b24c4e3090f1",
"8ca8aaa07c7ee1d00c02322d47244d4489f151d1"
],
"answer": [
{
"evidence": [
"Table TABREF19 and TABREF26 report zero-shot results on Europarl and Multi-UN evaluation sets, respectively. We compare our approaches with related approaches of pivoting, multilingual NMT (MNMT) BIBREF19, and cross-lingual transfer without pretraining BIBREF16. The results show that our approaches consistently outperform other approaches across languages and datasets, especially surpass pivoting, which is a strong baseline in the zero-shot scenario that multilingual NMT systems often fail to beat BIBREF19, BIBREF20, BIBREF23. Pivoting translates source to pivot then to target in two steps, causing inefficient translation process. Our approaches use one encoder-decoder model to translate between any zero-shot directions, which is more efficient than pivoting. Regarding the comparison between transfer approaches, our cross-lingual pretraining based transfer outperforms transfer method that does not use pretraining by a large margin.",
"Although it is challenging for one model to translate all zero-shot directions between multiple distant language pairs of MultiUN, MLM+BRLM-SA still achieves better performances on Es $\\rightarrow $ Ar and Es $\\rightarrow $ Ru than strong pivoting$_{\\rm m}$, which uses MNMT to translate source to pivot then to target in two separate steps with each step receiving supervised signal of parallel corpora. Our approaches surpass pivoting$_{\\rm m}$ in all zero-shot directions by adding back translation BIBREF33 to generate pseudo parallel sentences for all zero-shot directions based on our pretrained models such as MLM+BRLM-SA, and further training our universal encoder-decoder model with these pseudo data. BIBREF22 gu2019improved introduces back translation into MNMT, while we adopt it in our transfer approaches. Finally, our best MLM+BRLM-SA with back translation outperforms pivoting$_{\\rm m}$ by 2.4 BLEU points averagely, and outperforms MNMT BIBREF22 by 4.6 BLEU points averagely. Again, in supervised translation directions, MLM+BRLM-SA with back translation also achieves better performance than the original supervised Transformer."
],
"extractive_spans": [
"pivoting",
"pivoting$_{\\rm m}$"
],
"free_form_answer": "",
"highlighted_evidence": [
"We compare our approaches with related approaches of pivoting, multilingual NMT (MNMT) BIBREF19, and cross-lingual transfer without pretraining BIBREF16.",
"Although it is challenging for one model to translate all zero-shot directions between multiple distant language pairs of MultiUN, MLM+BRLM-SA still achieves better performances on Es $\\rightarrow $ Ar and Es $\\rightarrow $ Ru than strong pivoting$_{\\rm m}$, which uses MNMT to translate source to pivot then to target in two separate steps with each step receiving supervised signal of parallel corpora. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We use traditional transfer learning, pivot-based method and multilingual NMT as our baselines. For the fair comparison, the Transformer-big model with 1024 embedding/hidden units, 4096 feed-forward filter size, 6 layers and 8 heads per layer is adopted for all translation models in our experiments. We set the batch size to 2400 per batch and limit sentence length to 100 BPE tokens. We set the $\\text{attn}\\_\\text{drop}=0$ (a dropout rate on each attention head), which is favorable to the zero-shot translation and has no effect on supervised translation directions BIBREF22. For the model initialization, we use Facebook's cross-lingual pretrained models released by XLM to initialize the encoder part, and the rest parameters are initialized with xavier uniform. We employ the Adam optimizer with $\\text{lr}=0.0001$, $t_{\\text{warm}\\_\\text{up}}=4000$ and $\\text{dropout}=0.1$. At decoding time, we generate greedily with length penalty $\\alpha =1.0$.",
"Pivot-based Method is a common strategy to obtain a source$\\rightarrow $target model by introducing a pivot language. This approach is further divided into pivoting and pivot-synthetic. While the former firstly translates a source language into the pivot language which is later translated to the target language BIBREF4, BIBREF5, BIBREF12, the latter trains a source$\\rightarrow $target model with pseudo data generated from source-pivot or pivot-target parallel data BIBREF13, BIBREF14. Although the pivot-based methods can achieve not bad performance, it always falls into a computation-expensive and parameter-vast dilemma of quadratic growth in the number of source languages, and suffers from the error propagation problem BIBREF15."
],
"extractive_spans": [
"firstly translates a source language into the pivot language which is later translated to the target language"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use traditional transfer learning, pivot-based method and multilingual NMT as our baselines.",
"Pivot-based Method is a common strategy to obtain a source$\\rightarrow $target model by introducing a pivot language. This approach is further divided into pivoting and pivot-synthetic. While the former firstly translates a source language into the pivot language which is later translated to the target language BIBREF4, BIBREF5, BIBREF12, the latter trains a source$\\rightarrow $target model with pseudo data generated from source-pivot or pivot-target parallel data BIBREF13, BIBREF14."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"ce1cbd643169ab70cc3b807401798b400472868f",
"f5d30fb867823556668418ecbbafde77fa834f2f"
],
"answer": [
{
"evidence": [
"We evaluate our cross-lingual pre-training based transfer approach against several strong baselines on two public datatsets, Europarl BIBREF31 and MultiUN BIBREF32, which contain multi-parallel evaluation data to assess the zero-shot performance. In all experiments, we use BLEU as the automatic metric for translation evaluation."
],
"extractive_spans": [
"Europarl",
"MultiUN"
],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluate our cross-lingual pre-training based transfer approach against several strong baselines on two public datatsets, Europarl BIBREF31 and MultiUN BIBREF32, which contain multi-parallel evaluation data to assess the zero-shot performance."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We evaluate our cross-lingual pre-training based transfer approach against several strong baselines on two public datatsets, Europarl BIBREF31 and MultiUN BIBREF32, which contain multi-parallel evaluation data to assess the zero-shot performance. In all experiments, we use BLEU as the automatic metric for translation evaluation."
],
"extractive_spans": [
"Europarl BIBREF31",
"MultiUN BIBREF32"
],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluate our cross-lingual pre-training based transfer approach against several strong baselines on two public datatsets, Europarl BIBREF31 and MultiUN BIBREF32, which contain multi-parallel evaluation data to assess the zero-shot performance."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"36d46e79bc56de8706a4a7001d5e74fc25ecf15c",
"81063a32b1f4c7450eb6c0fbbcb649870b47e344"
],
"answer": [
{
"evidence": [
"For MultiUN corpus, we use four languages: English (En) is set as the pivot language, which has parallel data with other three languages which do not have parallel data between each other. The three languages are Arabic (Ar), Spanish (Es), and Russian (Ru), and mutual translation between themselves constitutes six zero-shot translation direction for evaluation. We use 80K BPE splits as the vocabulary. Note that all sentences are tokenized by the tokenize.perl script, and we lowercase all data to avoid a large vocabulary for the MultiUN corpus.",
"The statistics of Europarl and MultiUN corpora are summarized in Table TABREF18. For Europarl corpus, we evaluate on French-English-Spanish (Fr-En-Es), German-English-French (De-En-Fr) and Romanian-English-German (Ro-En-De), where English acts as the pivot language, its left side is the source language, and its right side is the target language. We remove the multi-parallel sentences between different training corpora to ensure zero-shot settings. We use the devtest2006 as the validation set and the test2006 as the test set for Fr$\\rightarrow $Es and De$\\rightarrow $Fr. For distant language pair Ro$\\rightarrow $De, we extract 1,000 overlapping sentences from newstest2016 as the test set and the 2,000 overlapping sentences split from the training set as the validation set since there is no official validation and test sets. For vocabulary, we use 60K sub-word tokens based on Byte Pair Encoding (BPE) BIBREF33.",
"FLOAT SELECTED: Table 1: Data Statistics."
],
"extractive_spans": [],
"free_form_answer": "De-En, En-Fr, Fr-En, En-Es, Ro-En, En-De, Ar-En, En-Ru",
"highlighted_evidence": [
"For MultiUN corpus, we use four languages: English (En) is set as the pivot language, which has parallel data with other three languages which do not have parallel data between each other. The three languages are Arabic (Ar), Spanish (Es), and Russian (Ru), and mutual translation between themselves constitutes six zero-shot translation direction for evaluation. ",
"The statistics of Europarl and MultiUN corpora are summarized in Table TABREF18. For Europarl corpus, we evaluate on French-English-Spanish (Fr-En-Es), German-English-French (De-En-Fr) and Romanian-English-German (Ro-En-De), where English acts as the pivot language, its left side is the source language, and its right side is the target language. ",
"FLOAT SELECTED: Table 1: Data Statistics."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The statistics of Europarl and MultiUN corpora are summarized in Table TABREF18. For Europarl corpus, we evaluate on French-English-Spanish (Fr-En-Es), German-English-French (De-En-Fr) and Romanian-English-German (Ro-En-De), where English acts as the pivot language, its left side is the source language, and its right side is the target language. We remove the multi-parallel sentences between different training corpora to ensure zero-shot settings. We use the devtest2006 as the validation set and the test2006 as the test set for Fr$\\rightarrow $Es and De$\\rightarrow $Fr. For distant language pair Ro$\\rightarrow $De, we extract 1,000 overlapping sentences from newstest2016 as the test set and the 2,000 overlapping sentences split from the training set as the validation set since there is no official validation and test sets. For vocabulary, we use 60K sub-word tokens based on Byte Pair Encoding (BPE) BIBREF33.",
"For MultiUN corpus, we use four languages: English (En) is set as the pivot language, which has parallel data with other three languages which do not have parallel data between each other. The three languages are Arabic (Ar), Spanish (Es), and Russian (Ru), and mutual translation between themselves constitutes six zero-shot translation direction for evaluation. We use 80K BPE splits as the vocabulary. Note that all sentences are tokenized by the tokenize.perl script, and we lowercase all data to avoid a large vocabulary for the MultiUN corpus."
],
"extractive_spans": [
"French-English-Spanish (Fr-En-Es), German-English-French (De-En-Fr) and Romanian-English-German (Ro-En-De)",
"Arabic (Ar), Spanish (Es), and Russian (Ru), and mutual translation between themselves constitutes six zero-shot translation"
],
"free_form_answer": "",
"highlighted_evidence": [
"For Europarl corpus, we evaluate on French-English-Spanish (Fr-En-Es), German-English-French (De-En-Fr) and Romanian-English-German (Ro-En-De), where English acts as the pivot language, its left side is the source language, and its right side is the target language. We remove the multi-parallel sentences between different training corpora to ensure zero-shot settings. We use the devtest2006 as the validation set and the test2006 as the test set for Fr$\\rightarrow $Es and De$\\rightarrow $Fr. For distant language pair Ro$\\rightarrow $De, we extract 1,000 overlapping sentences from newstest2016 as the test set and the 2,000 overlapping sentences split from the training set as the validation set since there is no official validation and test sets.",
"For MultiUN corpus, we use four languages: English (En) is set as the pivot language, which has parallel data with other three languages which do not have parallel data between each other. The three languages are Arabic (Ar), Spanish (Es), and Russian (Ru), and mutual translation between themselves constitutes six zero-shot translation direction for evaluation."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"which multilingual approaches do they compare with?",
"what are the pivot-based baselines?",
"which datasets did they experiment with?",
"what language pairs are explored?"
],
"question_id": [
"b6f15fb6279b82e34a5bf4828b7b5ddabfdf1d54",
"f5e6f43454332e0521a778db0b769481e23e7682",
"9a05a5f4351db75da371f7ac12eb0b03607c4b87",
"5eda469a8a77f028d0c5f1acd296111085614537"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: The circle and triangle dots represent source sentences in different language l1 and l2, and the square dots means target sentences in language l3. A sample of translation pairs is connected by the dashed line. We would like to force each of the translation pairs has the same latent representation as the right part of the figure so as to transfer l1 → l3 model directly to l2 → l3 model.",
"Figure 2: The overview of BRidge Language Modeling (BRLM). The BRLM extends MLM (Lample and Conneau 2019) to pairs of parallel sentences and leverages explicit alignment information obtained by external aligner tool or additional attention layer to encourage word representation alignment across different languages.",
"Table 1: Data Statistics.",
"Table 2: Results on Europarl test sets. Three pivot settings are conducted in our experiments. In each setting, the left column presents the zero-shot performances (source→target), and the right column denotes the performances in the supervised parent model direction (pivot→target).",
"Table 3: Results on MultiUN test sets. The six zero-shot translation directions are evaluated. The column “A-ZST\" reports averaged BLEU of zero-shot translation, while the column “A-ST\" reports averaged BLEU of supervised pivot→target direction.",
"Figure 3: Cosine similarity between sentence representation of each encoder layer across all source-pivot sentence pairs in the Europarl validation set.",
"Figure 4: Cosine similarity visualization at word level given an English-Russian sentence pair from the MultiUN validation sets. Brighter indicates higher similarity.",
"Table 4: BLEU score of freezing different layers. The number in Freezing Layers column denotes that the number of encoder layers will not be updated."
],
"file": [
"1-Figure1-1.png",
"3-Figure2-1.png",
"4-Table1-1.png",
"5-Table2-1.png",
"6-Table3-1.png",
"6-Figure3-1.png",
"7-Figure4-1.png",
"7-Table4-1.png"
]
} | [
"what language pairs are explored?"
] | [
[
"1912.01214-Experiments ::: Setup ::: Datasets.-0",
"1912.01214-Experiments ::: Setup ::: Datasets.-1",
"1912.01214-4-Table1-1.png"
]
] | [
"De-En, En-Fr, Fr-En, En-Es, Ro-En, En-De, Ar-En, En-Ru"
] | 0 |
1609.00425 | Identifying Dogmatism in Social Media: Signals and Models | We explore linguistic and behavioral features of dogmatism in social media and construct statistical models that can identify dogmatic comments. Our model is based on a corpus of Reddit posts, collected across a diverse set of conversational topics and annotated via paid crowdsourcing. We operationalize key aspects of dogmatism described by existing psychology theories (such as over-confidence), finding they have predictive power. We also find evidence for new signals of dogmatism, such as the tendency of dogmatic posts to refrain from signaling cognitive processes. When we use our predictive model to analyze millions of other Reddit posts, we find evidence that suggests dogmatism is a deeper personality trait, present for dogmatic users across many different domains, and that users who engage on dogmatic comments tend to show increases in dogmatic posts themselves. | {
"paragraphs": [
[
"“I'm supposed to trust the opinion of a MS minion? The people that produced Windows ME, Vista and 8? They don't even understand people, yet they think they can predict the behavior of new, self-guiding AI?” –anonymous",
"“I think an AI would make it easier for Patients to confide their information because by nature, a robot cannot judge them. Win-win? :D”' –anonymous",
"Dogmatism describes the tendency to lay down opinions as incontrovertibly true, without respect for conflicting evidence or the opinions of others BIBREF0 . Which user is more dogmatic in the examples above? This question is simple for humans. Phrases like “they think” and “they don't even understand,” suggest an intractability of opinion, while “I think” and “win-win?” suggest the opposite. Can we train computers to draw similar distinctions? Work in psychology has called out many aspects of dogmatism that can be modeled computationally via natural language, such as over-confidence and strong emotions BIBREF1 .",
"We present a statistical model of dogmatism that addresses two complementary goals. First, we validate psychological theories by examining the predictive power of feature sets that guide the model's predictions. For example, do linguistic signals of certainty help to predict a post is dogmatic, as theory would suggest? Second, we apply our model to answer four questions:",
"R1: What kinds of topics (e.g., guns, LGBT) attract the highest levels of dogmatism?",
"R2: How do dogmatic beliefs cluster?",
"R3: How does dogmatism influence a conversation on social media? R4: How do other user behaviors (e.g., frequency and breadth of posts) relate to dogmatism?",
"We train a predictive model to classify dogmatic posts from Reddit, one of the most popular discussion communities on the web. Posts on Reddit capture discussion and debate across a diverse set of domains and topics – users talk about everything from climate change and abortion, to world news and relationship advice, to the future of artificial intelligence. As a prerequisite to training our model, we have created a corpus of 5,000 Reddit posts annotated with levels of dogmatism, which we are releasing to share with other researchers.",
"Using the model, we operationalize key domain-independent aspects of psychological theories of dogmatism drawn from the literature. We find these features have predictive power that largely supports the underlying theory. For example, posts that use less confident language tend to be less dogmatic. We also discover evidence for new attributes of dogmatism. For example, dogmatic posts tend not to verbalize cognition, through terms such as “I think,” “possibly,” or “might be.”",
"Our model is trained on only 5,000 annotated posts, but once trained, we use it to analyze millions of other Reddit posts to answer our research questions. We find a diverse set of topics are colored by dogmatic language (e.g., people are dogmatic about religion, but also about LGBT issues). Further, we find some evidence for dogmatism as a deeper personality trait – people who are strongly dogmatic about one topic are more likely to express dogmatic views about others as well. Finally, in conversation, we discover that one user's dogmatism tends to bring out dogmatism in their conversational partner, forming a vicious cycle."
],
[
"Posts on Reddit capture debate and discussion across a diverse set of topics, making them a natural starting point for untangling domain-independent linguistic features of dogmatism.",
"Data collection. Subreddits are sub-communities on Reddit oriented around specific interests or topics, such as technology or politics. Sampling from Reddit as a whole would bias the model towards the most commonly discussed content. But by sampling posts from individual subreddits, we can control the kinds of posts we use to train our model. To collect a diverse training dataset, we have randomly sampled 1000 posts each from the subreddits politics, business, science, and AskReddit, and 1000 additional posts from the Reddit frontpage. All posts in our sample appeared between January 2007 and March 2015, and to control for length effects, contain between 300 and 400 characters. This results in a total training dataset of 5000 posts.",
"Dogmatism annotations. Building a useful computational model requires labeled training data. We labeled the Reddit dataset using crowdworkers on Amazon Mechanical Turk (AMT), creating the first public corpus annotated with levels of dogmatism. We asked crowdworkers to rate levels of dogmatism on a 5-point Likert scale, as supported by similar annotation tasks in prior work BIBREF2 . Concretely, we gave crowdworkers the following task: ",
"Given a comment, imagine you hold a well-informed, different opinion from the commenter in question. We'd like you to tell us how likely that commenter would be to engage you in a constructive conversation about your disagreement, where you each are able to explore the other's beliefs. The options are:",
"(5): It's unlikely you'll be able to engage in any substantive conversation. When you respectfully express your disagreement, they are likely to ignore you or insult you or otherwise lower the level of discourse.",
"(4): They are deeply rooted in their opinion, but you are able to exchange your views without the conversation degenerating too much.",
"(3): It's not likely you'll be able to change their mind, but you're easily able to talk and understand each other's point of view.",
"(2): They may have a clear opinion about the subject, but would likely be open to discussing alternative viewpoints.",
"(1): They are not set in their opinion, and it's possible you might change their mind. If the comment does not convey an opinion of any kind, you may also select this option.",
"To ensure quality work, we restricted the task to Masters workers and provided examples corresponding to each point on the scale. Including examples in a task has been shown to significantly increase the agreement and quality of crowdwork BIBREF3 . For instance, here is an example of a highly dogmatic (5) comment: ",
"I won't be happy until I see the executive suite of BofA, Wells, and all the others, frog-marched into waiting squad cars. It's ALREADY BEEN ESTABLISHED that...",
"And a minimally dogmatic (1) comment: ",
"I agree. I would like to compile a playlist for us trance yogi's, even if you just would like to experiment with it. Is there any preference on which platform to use?",
"Each comment has been annotated by three independent workers on AMT, which is enough to produce reliable results in most labeling tasks BIBREF4 . To compute an aggregate measure of dogmatism for each comment, we summed the scores of all three workers. We show the resulting distribution of annotations in Figure 1 .",
"Inter-annotator agreement. To evaluate the reliability of annotations we compute Krippendorff's $\\alpha $ , a measure of agreement designed for variable levels of measurement such as a Likert scale BIBREF5 . An $\\alpha $ of 0 indicates agreement indistinguishable from chance, while an $\\alpha $ of 1 indicates perfect agreement. Across all annotations we find $\\alpha =0.44$ . While workers agree much more than chance, clearly dogmatism is also subjective. In fact, when we examine only the middle two quartiles of the dogmatism annotations, we find agreement is no better than chance. Alternatively, when we measure agreement only among the top and bottom quartiles of annotations, we find agreement of $\\alpha =0.69$ . This suggests comments with scores that are only slightly dogmatic are unreliable and often subject to human disagreement. For this reason, we use only the top and bottom quartiles of comments when training our model."
],
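A minimal sketch, assuming a pandas DataFrame with hypothetical columns score_1..score_3, of the aggregation and filtering described above: the three Likert ratings per comment are summed and only the top and bottom quartiles are kept, which is the reliable subset the authors use for training.

```python
# Minimal sketch (assumed column names) of score aggregation and quartile filtering.
import pandas as pd

def select_reliable_comments(df: pd.DataFrame) -> pd.DataFrame:
    # df has columns: "text", "score_1", "score_2", "score_3" (1-5 Likert each)
    df = df.copy()
    df["dogmatism"] = df[["score_1", "score_2", "score_3"]].sum(axis=1)  # range 3..15
    lo, hi = df["dogmatism"].quantile([0.25, 0.75])
    reliable = df[(df["dogmatism"] <= lo) | (df["dogmatism"] >= hi)].copy()
    reliable["label"] = (reliable["dogmatism"] >= hi).astype(int)  # 1 = dogmatic
    return reliable
```

Agreement statistics such as Krippendorff's α can be computed with, for example, the krippendorff package on PyPI at the ordinal level of measurement, though the authors do not state which implementation they used.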
[
"We now consider strategies for identifying dogmatism based on prior work in psychology. We start with the Linguistic Inquiry and Word Count (LIWC), a lexicon popular in the social sciences BIBREF6 . LIWC provides human validated lists of words that correspond to high-level psychological categories such as certainty or perception. In other studies, LIWC has uncovered linguistic signals relating to politeness BIBREF2 , deception BIBREF7 , or authority in texts BIBREF8 . Here, we examine how dogmatism relates to 17 of LIWC's categories (Table 1 ).",
"To compute the relationships between LIWC categories and dogmatism, we first count the relevant category terms that appear in each annotated Reddit comment, normalized by its word count. We then calculate odds ratios on the aggregate counts of each LIWC category over the top and bottom quartiles of dogmatic comments. As we have discussed, using the top and bottom quartiles of comments provides a more reliable signal of dogmatism. We check for significant differences in categories between dogmatic and non-dogmatic comments using the Mann-Whitney U test and apply Holmes method for correction. All odds we report in this section are significant after correction.",
"Dogmatic statements tend to express a high degree of certainty BIBREF1 . Here we consider LIWC categories that express certainty both positively (certainty) and negatively (tentativeness). For example, the word “always” is certain, while “possibly” is tentative. Conforming to existing theory, certainty is more associated with dogmatic comments (1.52 odds), while tentativeness is more associated with the absence of dogmatism (0.88 odds).",
"Terms used to verbalize cognition can act as a hedge that often characterizes non-dogmatic language. LIWC's insight category captures this effect through words such as “think,” “know,” or “believe.” These words add nuance to a statement BIBREF9 , signaling it is the product of someone's mind (“I think you should give this paper a good review”) and not meant to be interpreted as an objective truth. Along these lines, we find the use of terms in the insight category is associated with non-dogmatic comments (0.83 odds).",
"Sensory language, with its focus on description and detail, often signals a lack of any kind of opinion, dogmatic or otherwise. LIWC's perception category captures this idea through words associated with hearing, feeling, or seeing. For example, these words might occur when recounting a personal experience (“I saw his incoming fist”), which even if emotionally charged or negative, is less likely to be dogmatic. We find perception is associated with non-dogmatic comments at 0.77 odds.",
"Drawing comparisons or qualifying something as relative to something else conveys a nuance that is absent from traditionally dogmatic language. The LIWC categories comparison and relativity capture these effects through comparison words such as “than” or “as” and qualifying words such as “during” or “when.” For example, the statement “I hate politicians” is more dogmatic than “I hate politicians when they can't get anything done.' Relativity is associated with non-dogmatic comments at 0.80 odds, but comparison does not reach significance.",
"Pronouns can be surprisingly revealing indicators of language: for example, signaling one's gender or hierarchical status in a conversation BIBREF10 . We find first person singular pronouns are a useful negative signal for dogmatism (0.46 odds), while second person singular pronouns (2.18 odds) and third person plural (1.63 odds) are a useful positive signal. Looking across the corpus, we see I often used with a hedge (“I think” or “I know”), while you and they tend to characterize the beliefs of others, often in a strongly opinionated way (“you are a moron” or “they are keeping us down”). Other pronoun types do not show significant relationships.",
"Like pronouns, verb tense can reveal subtle signals in language use, such as the tendency of medical inpatients to focus on the past BIBREF11 . On social media, comments written in the present tense are more likely to be oriented towards a user's current interaction (“this is all so stupid”), creating opportunities to signal dogmatism. Alternatively, comments in the past tense are more likely to refer to outside experiences (“it was an awful party”), speaking less to a user's stance towards an ongoing discussion. We find present tense is a positive signal for dogmatism (1.11 odds) and past tense is a negative signal (0.69 odds).",
"Dogmatic language can be either positively or negatively charged in sentiment: for example, consider the positive statement “Trump is the SAVIOR of this country!!!” or the negative statement “Are you REALLY that stupid?? Education is the only way out of this horrible mess. It's hard to imagine how anyone could be so deluded.” In diverse communities, where people hold many different kinds of opinions, dogmatic opinions will often tend to come into conflict with one another BIBREF12 , producing a greater likelihood of negative sentiment. Perhaps for this reason, negative emotion (2.09 odds) and swearing (3.80 odds) are useful positive signals of dogmatism, while positive emotion shows no significant relationship.",
"Finally, we find that interrogative language (1.12 odds) and negation (1.35 odds) are two additional positive signals of dogmatism. While interrogative words like “how” or “what” have many benign uses, they disproportionately appear in our data in the form of rhetorical or emotionally charged questions, such as “how can anyone be that dumb?”",
"Many of these linguistic signals are correlated with each other, suggesting that dogmatism is the cumulative effect of many component relationships. For example, consider the relatively non-dogmatic statement: “I think the reviewers are wrong in this instance.” Removing signals of insight, we have: “the reviewers are wrong in this instance,” which is slightly more dogmatic. Then removing relativity, we have: “the reviewers are wrong.” And finally, adding certainty, we have a dogmatic statement: “the reviewers are always wrong.”"
],
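A rough sketch of the per-category statistics described above. The exact odds-ratio formulation is not spelled out in the text, so the simple ratio of aggregate normalized counts used here is an assumption; the Mann-Whitney U test and Holm correction are standard scipy/statsmodels calls.

```python
# Sketch of per-category odds ratio, Mann-Whitney U test, and Holm correction.
import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

def category_odds_and_pvalue(dogmatic_rates, nondogmatic_rates):
    """Each argument: per-comment category counts divided by comment word count."""
    dogmatic_rates = np.asarray(dogmatic_rates, dtype=float)
    nondogmatic_rates = np.asarray(nondogmatic_rates, dtype=float)
    # Assumed odds definition: ratio of aggregate normalized counts across the two groups.
    odds = dogmatic_rates.sum() / max(nondogmatic_rates.sum(), 1e-12)
    _, p_value = mannwhitneyu(dogmatic_rates, nondogmatic_rates, alternative="two-sided")
    return odds, p_value

def holm_correct(p_values, alpha=0.05):
    reject, p_adjusted, _, _ = multipletests(p_values, alpha=alpha, method="holm")
    return reject, p_adjusted
```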
[
"We now show how we can use the linguistic feature sets we have described to build a classifier that predicts dogmatism in comments. A predictive model further validates our feature sets, and also allows us to analyze dogmatism in millions of other Reddit comments in a scalable way, with multiple uses in ongoing, downstream analyses.",
"Prediction task. Our goal is (1) to understand how well we can use the strategies in Section 3 to predict dogmatism, and (2) to test the domain-independence of these strategies. First, we test the performance of our model under cross-validation within the Reddit comment dataset. We then evaluate the Reddit-based model on a held out corpus of New York Times comments annotated using the technique in Section 2. We did not refer to this second dataset during feature construction.",
"For classification, we consider two classes of comments: dogmatic and non-dogmatic. As in the prior analysis, we draw these comments from the top and bottom quartiles of the dogmatism distribution. This means the classes are balanced, with 2,500 total comments in the Reddit training data and 500 total comments in the New York Times testing data.",
"We compare the predictions of logistic regression models based on unigram bag-of-words features (BOW), sentiment signals (SENT), the linguistic features from our earlier analyses (LING), and combinations of these features. BOW and SENT provide baselines for the task. We compute BOW features using term frequency-inverse document frequency (TF-IDF) and category-based features by normalizing counts for each category by the number of words in each document. The BOW classifiers are trained with regularization (L2 penalties of 1.5).",
"Classification results. We present classification accuracy in Table 2 . BOW shows an AUC of 0.853 within Reddit and 0.776 on the held out New York Times comments. The linguistic features boost classification results within Reddit (0.881) and on the held out New York Times comments (0.791). While linguistic signals by themselves provide strong predictive power (0.801 AUC within domain), sentiment signals are much less predictive.",
"These results suggest that linguistic features inspired by prior efforts in psychology are useful for predicting dogmatism in practice and generalize across new domains."
],
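A sketch of the BOW baseline described above: TF-IDF unigram features fed to an L2-regularized logistic regression, scored by cross-validated AUC (the caption of Table 2 mentions 15-fold cross-validation). How the paper's "L2 penalty of 1.5" maps onto scikit-learn's inverse-regularization parameter C is an assumption.

```python
# Sketch of the BOW + logistic regression baseline with cross-validated AUC.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

def bow_auc(texts, labels, folds=15):
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 1)),                       # unigram TF-IDF
        LogisticRegression(penalty="l2", C=1.0 / 1.5, max_iter=1000),  # assumed mapping of the penalty
    )
    scores = cross_val_score(model, texts, labels, cv=folds, scoring="roc_auc")
    return scores.mean()
```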
[
"We now apply our dogmatism classifier to a larger dataset of posts, examining how dogmatic language shapes the Reddit community. Concretely, we apply the BOW+LING model trained on the full Reddit dataset to millions of new unannotated posts, labeling these posts with a probability of dogmatism according to the classifier (0=non-dogmatic, 1=dogmatic). We then use these dogmatism annotations to address four research questions."
],
[
"A natural starting point for analyzing dogmatism on Reddit is to examine how it characterizes the site's sub-communities. For example, we might expect to see that subreddits oriented around topics such as abortion or climate change are more dogmatic, and subreddits about cooking are less so.",
"To answer this question, we randomly sample 1.6 million posts from the entire Reddit community between 2007 and 2015. We then annotate each of these posts with dogmatism using our classifier, and compute the average dogmatism level for each subreddit in the sample with at least 100 posts.",
"We present the results of this analysis in Table 3 . The subreddits with the highest levels of dogmatism tend to be oriented around politics and religion (DebateAChristian or ukpolitics), while those with the lowest levels tend to focus on hobbies (photography or homebrewing). The subreddit with the highest average dogmatism level, cringepics, is a place to make fun of socially awkward messages, often from would-be romantic partners. Dogmatism here tends to take the form of “how could someone be that stupid” and is directed at the subject of the post, as opposed to other members of the community.",
"Similarly, SubredditDrama is a community where people come to talk about fights on the internet or social media. These fights are often then extended in discussion, for example: “If the best you can come up with is that something you did was legal, it's probably time to own up to being an ass.” The presence of this subreddit in our analysis provides a further sanity check that our model is capturing a robust signal of dogmatism."
],
[
"Dogmatism is widely considered to be a domain-specific attitude (for example, oriented towards religion or politics) as opposed to a deeper personality trait BIBREF1 . Here we use Reddit as a lens to examine this idea more closely. Are users who are dogmatic about one topic likely to be dogmatic about others? Do clusters of dogmatism exist around particular topics? To find out, we examine the relationships between subreddits over which individual users are dogmatic. For example, if many users often post dogmatic comments on both the politics and Christianity subreddits, but less often on worldnews, that would suggest politics and Christianity are linked per a boost in likelihood of individuals being dogmatic in both.",
"We sample 1000 Reddit users who posted at least once a year between 2007 and 2015 to construct a corpus of 10 million posts that constitute their entire post history. We then annotate these posts using the classifier and compute the average dogmatism score per subreddit per user. For example, one user might have an average dogmatism level of 0.55 for the politics subreddit and 0.45 for the economics subreddit. Most users do not post in all subreddits, so we track only subreddits for which a user had posted at least 10 times. Any subreddits with an average dogmatism score higher than 0.50 we consider to be a user's dogmatic subreddits. We then count all pairs of these dogmatic subreddits. For example, 45 users have politics and technology among their dogmatic subreddits, so we consider politics and technology as linked 45 times. We compute the mutual information BIBREF13 between these links, which gives us a measure of the subreddits that are most related through dogmatism.",
"We present the results of this analysis in Table 4 , choosing clusters that represent a diverse set of topics. For example, Libertarianism is linked through dogmatism to other political communities like Anarcho_Capitalism, ronpaul, or ukpolitics, as well as other topical subreddits like guns or economy. Similarly, people who are dogmatic in the business subreddit also tend to be dogmatic in subreddits for Bitcoin, socialism, and technology. Notably, when we apply the same mutual information analysis to links defined by subreddits posted in by the same user, we see dramatically different results. For example, the subreddits most linked to science through user posts are UpliftingNews, photoshopbattles, and firstworldanarchist, and millionairemakers.",
"Finally, we see less obvious connections between subreddits that suggest some people may be dogmatic by nature. For example, among the users who are dogmatic on politics, they are also disproportionately dogmatic on unrelated subreddits such as science ( $p<0.001$ ), technology ( $p<0.001$ ), IAmA ( $p<0.001$ ), and AskReddit ( $p<0.05$ ), with p-values computed under a binomial test."
],
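A sketch of one way to score subreddit pairs by how often they co-occur among users' dogmatic subreddits. The paper cites a mutual-information measure without giving the exact formula, so the pointwise mutual information used here is an assumption.

```python
# Sketch: score pairs of "dogmatic subreddits" by pointwise mutual information.
import math
from collections import Counter
from itertools import combinations

def pmi_links(dogmatic_subreddits_per_user):
    """dogmatic_subreddits_per_user: list of sets, one set of subreddit names per user."""
    single = Counter()
    pair = Counter()
    n_users = len(dogmatic_subreddits_per_user)
    for subs in dogmatic_subreddits_per_user:
        single.update(subs)
        pair.update(frozenset(p) for p in combinations(sorted(subs), 2))
    pmi = {}
    for p, count in pair.items():
        a, b = tuple(p)
        pmi[(a, b)] = math.log((count / n_users) / ((single[a] / n_users) * (single[b] / n_users)))
    return sorted(pmi.items(), key=lambda kv: -kv[1])
```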
[
"We have shown dogmatism is captured by many linguistic features, but can we discover other high-level user behaviors that are similarly predictive?",
"To find out, we compute metrics of user behavior using the data sample of 1000 users and 10 million posts described in Section 5.2. Specifically, we calculate (1) activity: a user's total number of posts, (2) breadth: the number of subreddits a user has posted in, (3) focus: the proportion of a user's posts that appear in the subreddit where they are most active, and (4) engagement: the average number of posts a user contributes to each discussion they engage in. We then fit these behavioral features to a linear regression model where we predict each user's average dogmatism level. Positive coefficients in this model are positively predictive of dogmatism, while negative coefficients are negatively predictive. We find this model is significantly predicitive of dogmatism ( $R^2=0.1$ , $p<0.001$ ), with all features reaching statistical significance ( $p<0.001$ ). Activity and focus are positively associated with dogmatism, while breadth and engagement are negatively associated (Table 5 ). Together, these results suggest dogmatic users tend to post frequently and in specific communities, but are not as inclined to continue to engage with a discussion, once it has begun."
],
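A minimal sketch of the behavioral regression described above using statsmodels OLS; the column names are assumptions. The same pattern also covers the conversation-triple analysis in the next subsection, i.e., regressing A2's dogmatism on the dogmatism of A1 and B.

```python
# Sketch of the behavioral linear regression (one row per user, assumed column names).
import pandas as pd
import statsmodels.api as sm

def fit_behavior_model(df: pd.DataFrame):
    # df columns (assumed): activity, breadth, focus, engagement, avg_dogmatism
    X = sm.add_constant(df[["activity", "breadth", "focus", "engagement"]])
    model = sm.OLS(df["avg_dogmatism"], X).fit()
    # model.rsquared, model.params, and model.pvalues mirror the reported statistics
    return model
```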
[
"How does interacting with a dogmatic comment impact a conversation? Are users able to shrug it off? Or do otherwise non-dogmatic users become more dogmatic themselves?",
"To answer this question, we sample 600,000 conversations triples from Reddit. These conversations consist of two people (A and B) talking, with the structure: A1 $\\rightarrow $ B $\\rightarrow $ A2. This allows us to measure the impact of B's dogmatism on A's response, while also controlling for the dogmatism level initially set by A. Concretely, we model the impact of dogmatism on these conversations through a linear regression. This model takes two features, the dogmatism levels of A1 and B, and predicts the dogmatism response of A2. If B's dogmatism has no effect on A's response, the coefficient that corresponds to B will not be significant in the model. Alternatively, if B's dogmatism does have some effect, it will be captured by the model's coefficient.",
"We find the coefficient of the B feature in the model is positively associated with dogmatism ( $p<0.001$ ). In other words, engagement with a dogmatic comment tends to make a user more dogmatic themselves. This effect holds when we run the same model on data subsets consisting only of dogmatic or non-dogmatic users, and also when we conservatively remove all words used by B from A's response (i.e., controlling for quoting effects)."
],
[
"In contrast to the computational models we have presented, dogmatism is usually measured in psychology through survey scales, in which study participants answer questions designed to reveal underlying personality attributes BIBREF1 . Over time, these surveys have been updated BIBREF14 and improved to meet standards of psychometric validity BIBREF15 .",
"These surveys are often used to study the relationship between dogmatism and other psychological phenomena. For example, dogmatic people tend to show an increased tendency for confrontation BIBREF16 or moral conviction and religiosity BIBREF17 , and less likelihood of cognitive flexibility BIBREF18 , even among stereotypically non-dogmatic groups like atheists BIBREF19 . From a behavioral standpoint, dogmatic people solve problems differently, spending less time framing a problem and expressing more certainty in their solution BIBREF20 . Here we similarly examine how user behaviors on Reddit relate to a language model of dogmatism.",
"Ertel sought to capture dogmatism linguistically, though a small lexicon of words that correspond with high-level concepts like certainty and compromise dota. McKenny then used this dictionary to relate dogmatism to argument quality in student essays dogmatism-essays. Our work expands on this approach, applying supervised models based on a broader set of linguistic categories to identify dogmatism in text.",
"Other researchers have studied topics similar to dogmatism, such as signals of cognitive style in right-wing political thought BIBREF21 , the language used by trolls on social media BIBREF22 , or what makes for impartial language on twitter BIBREF23 . A similar flavor of work has examined linguistic models that capture politeness BIBREF2 , deception BIBREF24 , and authority BIBREF8 . We took inspiration from these models when constructing the feature sets in our work.",
"Finally, while we examine what makes an opinion dogmatic, other work has pushed further into the structure of arguments, for example classifying their justifications BIBREF25 , or what makes an argument likely to win BIBREF26 . Our model may allow future researchers to probe these questions more deeply."
],
[
"We have constructed the first corpus of social media posts annotated with dogmatism scores, allowing us to explore linguistic features of dogmatism and build a predictive model that analyzes new content. We apply this model to Reddit, where we discover behavioral predictors of dogmatism and topical patterns in the comments of dogmatic users.",
"Could we use this computational model to help users shed their dogmatic beliefs? Looking forward, our work makes possible new avenues for encouraging pro-social behavior in online communities. "
]
],
"section_name": [
"Introduction",
"Dogmatism data",
"Approaches to Identifying Dogmatism",
"Predicting dogmatism",
"Dogmatism in the Reddit Community ",
"What subreddits have the highest and lowest levels of dogmatism? (R1)",
"How do dogmatic beliefs cluster? (R2)",
"What user behaviors are predictive of dogmatism? (R3)",
"How does dogmatism impact a conversation? (R4)",
"Related Work",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"294cb3efe84c256e1dbc602d1527da260d2306f6",
"500690d7e0ef2e420f5614cef342a008281bc68f"
],
"answer": [
{
"evidence": [
"Data collection. Subreddits are sub-communities on Reddit oriented around specific interests or topics, such as technology or politics. Sampling from Reddit as a whole would bias the model towards the most commonly discussed content. But by sampling posts from individual subreddits, we can control the kinds of posts we use to train our model. To collect a diverse training dataset, we have randomly sampled 1000 posts each from the subreddits politics, business, science, and AskReddit, and 1000 additional posts from the Reddit frontpage. All posts in our sample appeared between January 2007 and March 2015, and to control for length effects, contain between 300 and 400 characters. This results in a total training dataset of 5000 posts."
],
"extractive_spans": [
"politics, business, science, and AskReddit, and 1000 additional posts from the Reddit frontpage. "
],
"free_form_answer": "",
"highlighted_evidence": [
"To collect a diverse training dataset, we have randomly sampled 1000 posts each from the subreddits politics, business, science, and AskReddit, and 1000 additional posts from the Reddit frontpage."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Data collection. Subreddits are sub-communities on Reddit oriented around specific interests or topics, such as technology or politics. Sampling from Reddit as a whole would bias the model towards the most commonly discussed content. But by sampling posts from individual subreddits, we can control the kinds of posts we use to train our model. To collect a diverse training dataset, we have randomly sampled 1000 posts each from the subreddits politics, business, science, and AskReddit, and 1000 additional posts from the Reddit frontpage. All posts in our sample appeared between January 2007 and March 2015, and to control for length effects, contain between 300 and 400 characters. This results in a total training dataset of 5000 posts.",
"We now apply our dogmatism classifier to a larger dataset of posts, examining how dogmatic language shapes the Reddit community. Concretely, we apply the BOW+LING model trained on the full Reddit dataset to millions of new unannotated posts, labeling these posts with a probability of dogmatism according to the classifier (0=non-dogmatic, 1=dogmatic). We then use these dogmatism annotations to address four research questions."
],
"extractive_spans": [],
"free_form_answer": "training data has posts from politics, business, science and other popular topics; the trained model is applied to millions of unannotated posts on all of Reddit",
"highlighted_evidence": [
"To collect a diverse training dataset, we have randomly sampled 1000 posts each from the subreddits politics, business, science, and AskReddit, and 1000 additional posts from the Reddit frontpage.",
"Concretely, we apply the BOW+LING model trained on the full Reddit dataset to millions of new unannotated posts, labeling these posts with a probability of dogmatism according to the classifier (0=non-dogmatic, 1=dogmatic)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a",
"02c984a519661853b8ece0a27ce665d854551f7c"
]
},
{
"annotation_id": [
"a0fdd35b9537e29d4c6fe7dbdd6e810cdf97636a",
"b93c42d01594b0e8799f18b4c9476c78735f4d3a"
],
"answer": [
{
"evidence": [
"We compare the predictions of logistic regression models based on unigram bag-of-words features (BOW), sentiment signals (SENT), the linguistic features from our earlier analyses (LING), and combinations of these features. BOW and SENT provide baselines for the task. We compute BOW features using term frequency-inverse document frequency (TF-IDF) and category-based features by normalizing counts for each category by the number of words in each document. The BOW classifiers are trained with regularization (L2 penalties of 1.5)."
],
"extractive_spans": [
"logistic regression models"
],
"free_form_answer": "",
"highlighted_evidence": [
"We compare the predictions of logistic regression models based on unigram bag-of-words features (BOW), sentiment signals (SENT), the linguistic features from our earlier analyses (LING), and combinations of these features."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We compare the predictions of logistic regression models based on unigram bag-of-words features (BOW), sentiment signals (SENT), the linguistic features from our earlier analyses (LING), and combinations of these features. BOW and SENT provide baselines for the task. We compute BOW features using term frequency-inverse document frequency (TF-IDF) and category-based features by normalizing counts for each category by the number of words in each document. The BOW classifiers are trained with regularization (L2 penalties of 1.5)."
],
"extractive_spans": [
"logistic regression models based on unigram bag-of-words features (BOW), sentiment signals (SENT), the linguistic features from our earlier analyses (LING), and combinations of these features."
],
"free_form_answer": "",
"highlighted_evidence": [
"We compare the predictions of logistic regression models based on unigram bag-of-words features (BOW), sentiment signals (SENT), the linguistic features from our earlier analyses (LING), and combinations of these features."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"45b212ff3348e2473d3e5504ca1200bcf85fcbf5",
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
}
],
"nlp_background": [
"two",
"two"
],
"paper_read": [
"somewhat",
"somewhat"
],
"question": [
"what are the topics pulled from Reddit?",
"What predictive model do they build?"
],
"question_id": [
"b6ae8e10c6a0d34c834f18f66ab730b670fb528c",
"a87a009c242d57c51fc94fe312af5e02070f898b"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"social",
"social"
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: We crowdsourced dogmatism labels for 5000 comments. The distribution is slightly skewed towards higher levels of dogmatism. For example, crowdworkers unanimously labeled 206 comments as highly dogmatic (5× 3 = 15), but only 47 as minimally dogmatic (1× 3 = 3).",
"Table 1: Linguistic features that capture high level psychological categories and their relationship with dogmatic comments. Strategy describes the psychological category. Odds describes the likelihood that a category will appear more often in a dogmatic comment (e.g., dogmatic comments are 2.18 times more likely to mention you-oriented phrases). Example illustrates a comment that matches the category. * indicates significance (p < 0.05) after correction with Holmes method.",
"Table 2: The AUC scores for dogmatism classifiers within and across domains. BOW (bag-of-words) and SENT (sentiment signals) are baselines, and LING uses the linguistic features from Table 1. We compute in-domain accuracy using 15-fold cross-validation on the Reddit dataset, and cross-domain accuracy by training on Reddit and evaluating on comments on articles from the New York Times. Chance AUC is 0.5.",
"Table 3: Subreddits with the highest and lowest dogmatism scores. Politics and religion are common themes among the most dogmatic subreddits, while hobbies (e.g., photography, homebrewing, buildapc) show the least dogmatism.",
"Table 4: Clusters of subreddits that share dogmatic users. For example, users who are dogmatic on the conspiracy subreddit (a place to discuss conspiracy theories) are also likely to be dogmatic on guns or occupywallstreet.",
"Table 5: User behavioral features that are positively and negatively associated with dogmatism. ↑ means the feature is positively predictive with dogmatism, and ↓ means the feature is negatively predictive. For example, the more subreddits a user posts in, the less likely they are to be dogmatic. All features are statistically significant (p < 0.001)."
],
"file": [
"2-Figure1-1.png",
"4-Table1-1.png",
"5-Table2-1.png",
"6-Table3-1.png",
"7-Table4-1.png",
"7-Table5-1.png"
]
} | [
"what are the topics pulled from Reddit?"
] | [
[
"1609.00425-Dogmatism data-1",
"1609.00425-Dogmatism in the Reddit Community -0"
]
] | [
"training data has posts from politics, business, science and other popular topics; the trained model is applied to millions of unannotated posts on all of Reddit"
] | 2 |
1801.05147 | Adversarial Learning for Chinese NER from Crowd Annotations | To quickly obtain new labeled data, we can choose crowdsourcing as an alternative way at lower cost in a short time. But as an exchange, crowd annotations from non-experts may be of lower quality than those from experts. In this paper, we propose an approach to performing crowd annotation learning for Chinese Named Entity Recognition (NER) to make full use of the noisy sequence labels from multiple annotators. Inspired by adversarial learning, our approach uses a common Bi-LSTM and a private Bi-LSTM for representing annotator-generic and -specific information. The annotator-generic information is the common knowledge for entities easily mastered by the crowd. Finally, we build our Chinese NE tagger based on the LSTM-CRF model. In our experiments, we create two data sets for Chinese NER tasks from two domains. The experimental results show that our system achieves better scores than strong baseline systems. | {
"paragraphs": [
[
"There has been significant progress on Named Entity Recognition (NER) in recent years using models based on machine learning algorithms BIBREF0 , BIBREF1 , BIBREF2 . As with other Natural Language Processing (NLP) tasks, building NER systems typically requires a massive amount of labeled training data which are annotated by experts. In real applications, we often need to consider new types of entities in new domains where we do not have existing annotated data. For such new types of entities, however, it is very hard to find experts to annotate the data within short time limits and hiring experts is costly and non-scalable, both in terms of time and money.",
"In order to quickly obtain new training data, we can use crowdsourcing as one alternative way at lower cost in a short time. But as an exchange, crowd annotations from non-experts may be of lower quality than those from experts. It is one biggest challenge to build a powerful NER system on such a low quality annotated data. Although we can obtain high quality annotations for each input sentence by majority voting, it can be a waste of human labors to achieve such a goal, especially for some ambiguous sentences which may require a number of annotations to reach an agreement. Thus majority work directly build models on crowd annotations, trying to model the differences among annotators, for example, some of the annotators may be more trustful BIBREF3 , BIBREF4 .",
"Here we focus mainly on the Chinese NER, which is more difficult than NER for other languages such as English for the lack of morphological variations such as capitalization and in particular the uncertainty in word segmentation. The Chinese NE taggers trained on news domain often perform poor in other domains. Although we can alleviate the problem by using character-level tagging to resolve the problem of poor word segmentation performances BIBREF5 , still there exists a large gap when the target domain changes, especially for the texts of social media. Thus, in order to get a good tagger for new domains and also for the conditions of new entity types, we require large amounts of labeled data. Therefore, crowdsourcing is a reasonable solution for these situations.",
"In this paper, we propose an approach to training a Chinese NER system on the crowd-annotated data. Our goal is to extract additional annotator independent features by adversarial training, alleviating the annotation noises of non-experts. The idea of adversarial training in neural networks has been used successfully in several NLP tasks, such as cross-lingual POS tagging BIBREF6 and cross-domain POS tagging BIBREF7 . They use it to reduce the negative influences of the input divergences among different domains or languages, while we use adversarial training to reduce the negative influences brought by different crowd annotators. To our best knowledge, we are the first to apply adversarial training for crowd annotation learning.",
"In the learning framework, we perform adversarial training between the basic NER and an additional worker discriminator. We have a common Bi-LSTM for representing annotator-generic information and a private Bi-LSTM for representing annotator-specific information. We build another label Bi-LSTM by the crowd-annotated NE label sequence which reflects the mind of the crowd annotators who learn entity definitions by reading the annotation guidebook. The common and private Bi-LSTMs are used for NER, while the common and label Bi-LSTMs are used as inputs for the worker discriminator. The parameters of the common Bi-LSTM are learned by adversarial training, maximizing the worker discriminator loss and meanwhile minimizing the NER loss. Thus the resulting features of the common Bi-LSTM are worker invariant and NER sensitive.",
"For evaluation, we create two Chinese NER datasets in two domains: dialog and e-commerce. We require the crowd annotators to label the types of entities, including person, song, brand, product, and so on. Identifying these entities is useful for chatbot and e-commerce platforms BIBREF8 . Then we conduct experiments on the newly created datasets to verify the effectiveness of the proposed adversarial neural network model. The results show that our system outperforms very strong baseline systems. In summary, we make the following contributions:"
],
[
"Our work is related to three lines of research: Sequence labeling, Adversarial training, and Crowdsourcing.",
"Sequence labeling. NER is widely treated as a sequence labeling problem, by assigning a unique label over each sentential word BIBREF9 . Early studies on sequence labeling often use the models of HMM, MEMM, and CRF BIBREF10 based on manually-crafted discrete features, which can suffer the feature sparsity problem and require heavy feature engineering. Recently, neural network models have been successfully applied to sequence labeling BIBREF1 , BIBREF11 , BIBREF2 . Among these work, the model which uses Bi-LSTM for feature extraction and CRF for decoding has achieved state-of-the-art performances BIBREF11 , BIBREF2 , which is exploited as the baseline model in our work.",
"Adversarial Training. Adversarial Networks have achieved great success in computer vision such as image generation BIBREF12 , BIBREF13 . In the NLP community, the method is mainly exploited under the settings of domain adaption BIBREF14 , BIBREF7 , cross-lingual BIBREF15 , BIBREF6 and multi-task learning BIBREF16 , BIBREF17 . All these settings involve the feature divergences between the training and test examples, and aim to learn invariant features across the divergences by an additional adversarial discriminator, such as domain discriminator. Our work is similar to these work but is applies on crowdsourcing learning, aiming to find invariant features among different crowdsourcing workers.",
"Crowdsourcing. Most NLP tasks require a massive amount of labeled training data which are annotated by experts. However, hiring experts is costly and non-scalable, both in terms of time and money. Instead, crowdsourcing is another solution to obtain labeled data at a lower cost but with relative lower quality than those from experts. BIBREF18 snow2008cheap collected labeled results for several NLP tasks from Amazon Mechanical Turk and demonstrated that non-experts annotations were quite useful for training new systems. In recent years, a series of work have focused on how to use crowdsourcing data efficiently in tasks such as classification BIBREF19 , BIBREF20 , and compare quality of crowd and expert labels BIBREF21 .",
"In sequence labeling tasks, BIBREF22 dredze2009sequence viewed this task as a multi-label problem while BIBREF3 rodrigues2014sequence took workers identities into account by assuming that each sentential word was tagged correctly by one of the crowdsourcing workers and proposed a CRF-based model with multiple annotators. BIBREF4 nguyen2017aggregating introduced a crowd representation in which the crowd vectors were added into the LSTM-CRF model at train time, but ignored them at test time. In this paper, we apply adversarial training on crowd annotations on Chinese NER in new domains, and achieve better performances than previous studies on crowdsourcing learning."
],
[
"We use a neural CRF model as the baseline system BIBREF9 , treating NER as a sequence labeling problem over Chinese characters, which has achieved state-of-the-art performances BIBREF5 . To this end, we explore the BIEO schema to convert NER into sequence labeling, following BIBREF2 lample-EtAl:2016:N16-1, where sentential character is assigned with one unique tag. Concretely, we tag the non-entity character by label “O”, the beginning character of an entity by “B-XX”, the ending character of an entity by “E-XX” and the other character of an entity by “I-XX”, where “XX” denotes the entity type.",
"We build high-level neural features from the input character sequence by a bi-directional LSTM BIBREF2 . The resulting features are combined and then are fed into an output CRF layer for decoding. In summary, the baseline model has three main components. First, we make vector representations for sentential characters $\\mathbf {x}_1\\mathbf {x}_2\\cdots \\mathbf {x}_n$ , transforming the discrete inputs into low-dimensional neural inputs. Second, feature extraction is performed to obtain high-level features $\\mathbf {h}_1^{\\text{ner}}\\mathbf {h}_2^{\\text{ner}}\\cdots \\mathbf {h}_n^{\\text{ner}}$ , by using a bi-directional LSTM (Bi-LSTM) structure together with a linear transformation over $\\mathbf {x}_1\\mathbf {x}_2\\cdots \\mathbf {x}_n$ . Third, we apply a CRF tagging module over $\\mathbf {h}_1^{\\text{ner}}\\mathbf {h}_2^{\\text{ner}}\\cdots \\mathbf {h}_n^{\\text{ner}}$ , obtaining the final output NE labels. The overall framework of the baseline model is shown by the right part of Figure 1 ."
],
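A sketch of the character-level BIEO conversion described above, assuming entities are given as character-offset spans. The paper does not say how single-character entities are tagged, so treating them as a bare "B-XX" here is an assumption.

```python
# Sketch: convert character-offset entity spans to character-level BIEO tags.
def to_bieo(sentence, entities):
    """sentence: str of Chinese characters; entities: list of (start, end, type), end exclusive."""
    tags = ["O"] * len(sentence)
    for start, end, etype in entities:
        if end - start == 1:
            tags[start] = "B-" + etype      # single-character entity (assumed handling)
            continue
        tags[start] = "B-" + etype
        for i in range(start + 1, end - 1):
            tags[i] = "I-" + etype
        tags[end - 1] = "E-" + etype
    return tags

# Example: to_bieo("我爱北京天安门", [(2, 4, "LOC"), (4, 7, "LOC")])
# -> ['O', 'O', 'B-LOC', 'E-LOC', 'B-LOC', 'I-LOC', 'E-LOC']
```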
[
"To represent Chinese characters, we simply exploit a neural embedding layer to map discrete characters into the low-dimensional vector representations. The goal is achieved by a looking-up table $\\mathbf {E}^W$ , which is a model parameter and will be fine-tuned during training. The looking-up table can be initialized either by random or by using a pretrained embeddings from large scale raw corpus. For a given Chinese character sequence $c_1c_2\\cdots c_n$ , we obtain the vector representation of each sentential character by: $ \\mathbf {x}_t = \\text{look-up}(c_t, \\mathbf {E}^W), \\text{~~~} t \\in [1, n]$ ."
],
[
"Based on the vector sequence $\\mathbf {x}_1\\mathbf {x}_2\\cdots \\mathbf {x}_n$ , we extract higher-level features $\\mathbf {h}_1^{\\text{ner}}\\mathbf {h}_2^{\\text{ner}}\\cdots \\mathbf {h}_n^{\\text{ner}}$ by using a bidirectional LSTM module and a simple feed-forward neural layer, which are then used for CRF tagging at the next step.",
"LSTM is a type of recurrent neural network (RNN), which is designed for solving the exploding and diminishing gradients of basic RNNs BIBREF23 . It has been widely used in a number of NLP tasks, including POS-tagging BIBREF11 , BIBREF24 , parsing BIBREF25 and machine translation BIBREF26 , because of its strong capabilities of modeling natural language sentences.",
"By traversing $\\mathbf {x}_1\\mathbf {x}_2\\cdots \\mathbf {x}_n$ by order and reversely, we obtain the output features $\\mathbf {h}_1^{\\text{private}}\\mathbf {h}_2^{\\text{private}}\\cdots \\mathbf {h}_n^{\\text{private}}$ of the bi-LSTM, where $\\mathbf {h}_t^{\\text{private}} = \\overrightarrow{\\mathbf {h}}_t \\oplus \\overleftarrow{\\mathbf {h}}_t $ . Here we refer this Bi-LSTM as private in order to differentiate it with the common Bi-LSTM over the same character inputs which will be introduced in the next section.",
"Further we make an integration of the output vectors of bi-directional LSTM by a linear feed-forward neural layer, resulting in the features $\\mathbf {h}_1^{\\text{ner}}\\mathbf {h}_2^{\\text{ner}}\\cdots \\mathbf {h}_n^{\\text{ner}}$ by equation: ",
"$$\\mathbf {h}_t^{\\text{ner}} = \\mathbf {W} \\mathbf {h}_t^{\\text{private}} + \\mathbf {b},$$ (Eq. 6) ",
"where $\\mathbf {W}$ and $\\mathbf {b}$ are both model parameters."
],
[
"Finally we feed the resulting features $\\mathbf {h}_t^{\\text{ner}}, t\\in [1, n]$ into a CRF layer directly for NER decoding. CRF tagging is one globally normalized model, aiming to find the best output sequence considering the dependencies between successive labels. In the sequence labeling setting for NER, the output label of one position has a strong dependency on the label of the previous position. For example, the label before “I-XX” must be either “B-XX” or “I-XX”, where “XX” should be exactly the same.",
"CRF involves two parts for prediction. First we should compute the scores for each label based $\\mathbf {h}_t^{\\text{ner}}$ , resulting in $\\mathbf {o}_t^{\\text{ner}}$ , whose dimension is the number of output labels. The other part is a transition matrix $\\mathbf {T}$ which defines the scores of two successive labels. $\\mathbf {T}$ is also a model parameter. Based on $\\mathbf {o}_t^{\\text{ner}}$ and $\\mathbf {T}$ , we use the Viterbi algorithm to find the best-scoring label sequence.",
"We can formalize the CRF tagging process as follows: ",
"$$\\begin{split}\n& \\mathbf {o}_t^{\\text{ner}} = \\mathbf {W}^{\\text{ner}} \\mathbf {h}_t^{\\text{ner}}, \\text{~~~~} t \\in [1,n] \\\\\n& \\text{score}(\\mathbf {X}, \\mathbf {y}) = \\sum _{t = 1}^{n}(\\mathbf {o}_{t,y_t} + T_{y_{t-1},y_t}) \\\\\n& \\mathbf {y}^{\\text{ner}} = \\mathop {arg~max}_{\\mathbf {y}}\\big (\\text{score}(\\mathbf {X}, \\mathbf {y}))\\big ), \\\\\n\\end{split}$$ (Eq. 8) ",
"where $\\text{score}(\\cdot )$ is the scoring function for a given output label sequence $\\mathbf {y} = y_1y_2 \\cdots y_n$ based on input $\\mathbf {X}$ , $\\mathbf {y}^{\\text{ner}}$ is the resulting label sequence, $\\mathbf {W}^{\\text{ner}}$ is a model parameter."
],
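A numpy sketch of the sequence score and Viterbi decoding in Eq. 8. Emission scores o have shape (n, L) with o[t, y] the score of label y at position t, and the transition matrix T has shape (L, L) with T[i, j] the score of label j following label i; start and stop transitions are omitted here, which is an assumption rather than something stated in the paper.

```python
# Sketch of CRF sequence scoring and Viterbi decoding.
import numpy as np

def sequence_score(o, T, y):
    score = o[0, y[0]]
    for t in range(1, len(y)):
        score += o[t, y[t]] + T[y[t - 1], y[t]]
    return score

def viterbi(o, T):
    n, L = o.shape
    dp = np.zeros((n, L))                 # best score ending in each label at position t
    back = np.zeros((n, L), dtype=int)    # backpointers
    dp[0] = o[0]
    for t in range(1, n):
        cand = dp[t - 1][:, None] + T + o[t][None, :]   # (prev label, next label)
        back[t] = cand.argmax(axis=0)
        dp[t] = cand.max(axis=0)
    best = [int(dp[-1].argmax())]
    for t in range(n - 1, 0, -1):
        best.append(int(back[t, best[-1]]))
    return best[::-1]
```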
[
"To train model parameters, we exploit a negative log-likelihood objective as the loss function. We apply softmax over all candidate output label sequences, thus the probability of the crowd-annotated label sequence is computed by: ",
"$$p(\\mathbf {\\bar{y}}|\\mathbf {X}) = \\frac{\\exp \\big (\\text{score}(\\mathbf {X}, \\mathbf {\\bar{y}})\\big )}{\\sum _{\\mathbf {y} \\in \\mathbf {Y}_{\\mathbf {X}}} \\exp \\big (\\text{score}(\\mathbf {X}, \\mathbf {y})\\big )},$$ (Eq. 10) ",
"where $\\mathbf {\\bar{y}}$ is the crowd-annotated label sequences and $\\mathbf {Y}_{\\mathbf {X}}$ is all candidate label sequence of input $\\mathbf {X}$ .",
"Based on the above formula, the loss function of our baseline model is: ",
"$$\\text{loss}(\\Theta , \\mathbf {X}, \\mathbf {\\bar{y}}) = -\\log p(\\mathbf {\\bar{y}}|\\mathbf {X}),$$ (Eq. 11) ",
"where $\\Theta $ is the set of all model parameters. We use standard back-propagation method to minimize the loss function of the baseline CRF model."
],
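A companion sketch of the negative log-likelihood above: the log-partition over all candidate label sequences is computed with the forward algorithm in log space, and the gold-sequence score is subtracted. Start transitions are again omitted by assumption, and the shapes follow the Viterbi sketch.

```python
# Sketch of the CRF negative log-likelihood via the forward algorithm (log space).
import numpy as np
from scipy.special import logsumexp

def crf_negative_log_likelihood(o, T, gold):
    n, L = o.shape
    # score of the gold (crowd-annotated) label sequence
    gold_score = o[0, gold[0]]
    for t in range(1, n):
        gold_score += o[t, gold[t]] + T[gold[t - 1], gold[t]]
    # log-partition over all label sequences: forward recursion with logsumexp
    alpha = o[0].copy()
    for t in range(1, n):
        alpha = logsumexp(alpha[:, None] + T, axis=0) + o[t]
    return logsumexp(alpha) - gold_score   # = -log p(gold | X)
```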
[
"Adversarial learning has been an effective mechanism to resolve the problem of the input features between the training and test examples having large divergences BIBREF27 , BIBREF13 . It has been successfully applied on domain adaption BIBREF7 , cross-lingual learning BIBREF15 and multi-task learning BIBREF17 . All settings involve feature shifting between the training and testing.",
"In this paper, our setting is different. We are using the annotations from non-experts, which are noise and can influence the final performances if they are not properly processed. Directly learning based on the resulting corpus may adapt the neural feature extraction into the biased annotations. In this work, we assume that individual workers have their own guidelines in mind after short training. For example, a perfect worker can annotate highly consistently with an expert, while common crowdsourcing workers may be confused and have different understandings on certain contexts. Based on the assumption, we make an adaption for the original adversarial neural network to our setting.",
"Our adaption is very simple. Briefly speaking, the original adversarial learning adds an additional discriminator to classify the type of source inputs, for example, the domain category in the domain adaption setting, while we add a discriminator to classify the annotation workers. Solely the features from the input sentence is not enough for worker classification. The annotation result of the worker is also required. Thus the inputs of our discriminator are different. Here we exploit both the source sentences and the crowd-annotated NE labels as basic inputs for the worker discrimination.",
"In the following, we describe the proposed adversarial learning module, including both the submodels and the training method. As shown by the left part of Figure 1 , the submodel consists of four parts: (1) a common Bi-LSTM over input characters; (2) an additional Bi-LSTM to encode crowd-annotated NE label sequence; (3) a convolutional neural network (CNN) to extract features for worker discriminator; (4) output and prediction."
],
[
"To build the adversarial part, first we create a new bi-directional LSTM, named by the common Bi-LSTM: ",
"$$\\mathbf {h}_1^{\\text{\\tiny common}} \\mathbf {h}_2^{\\text{\\tiny common}} \\cdots \\mathbf {h}_n^{\\text{\\tiny common}} = \\text{Bi-LSTM}(\\mathbf {x}_1\\mathbf {x}_2\\cdots \\mathbf {x}_n).$$ (Eq. 13) ",
"As shown in Figure 1 , this Bi-LSTM is constructed over the same input character representations of the private Bi-LSTM, in order to extract worker independent features.",
"The resulting features of the common Bi-LSTM are used for both NER and the worker discriminator, different with the features of private Bi-LSTM which are used for NER only. As shown in Figure 1 , we concatenate the outputs of the common and private Bi-LSTMs together, and then feed the results into the feed-forward combination layer of the NER part. Thus Formula 6 can be rewritten as: ",
"$$\\mathbf {h}_t^{\\text{ner}} = \\mathbf {W} (\\mathbf {h}_t^{\\text{common}} \\oplus \\mathbf {h}_t^{\\text{private}}) + \\mathbf {b},$$ (Eq. 14) ",
"where $\\mathbf {W}$ is wider than the original combination because the newly-added $\\mathbf {h}_t^{\\text{common}}$ .",
"Noticeably, although the resulting common features are used for the worker discriminator, they actually have no capability to distinguish the workers. Because this part is exploited to maximize the loss of the worker discriminator, it will be interpreted in the later training subsection. These features are invariant among different workers, thus they can have less noises for NER. This is the goal of adversarial learning, and we hope the NER being able to find useful features from these worker independent features."
],
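The text states the max-min behavior of the common Bi-LSTM but not the exact optimization trick, so the gradient reversal layer below is one common way (an assumption, not necessarily the authors' implementation) to let the common features maximize the worker-discriminator loss while the rest of the network minimizes its own loss: it is the identity in the forward pass and negates (and scales) the gradient in the backward pass.

```python
# Sketch of a gradient reversal layer placed between the common Bi-LSTM features
# and the worker discriminator (one possible realization of the max-min objective).
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd=1.0):
        ctx.lambd = lambd
        return x.view_as(x)          # identity in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None   # reversed, scaled gradient

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# usage sketch: worker_logits = discriminator(grad_reverse(h_common), h_label)
```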
[
"In order to incorporate the annotated NE labels to predict the exact worker, we build another bi-directional LSTM (named by label Bi-LSTM) based on the crowd-annotated NE label sequence. This Bi-LSTM is used for worker discriminator only. During the decoding of the testing phase, we will never have this Bi-LSTM, because the worker discriminator is no longer required.",
"Assuming the crowd-annotated NE label sequence annotated by one worker is $\\mathbf {\\bar{y}} = \\bar{y}_1\\bar{y}_2 \\cdots \\bar{y}_n$ , we exploit a looking-up table $\\mathbf {E}^{L}$ to obtain the corresponding sequence of their vector representations $\\mathbf {x^{\\prime }}_1\\mathbf {x^{\\prime }}_2\\cdots \\mathbf {x^{\\prime }}_n$ , similar to the method that maps characters into their neural representations. Concretely, for one NE label $\\bar{y}_t$ ( $t \\in [1, n]$ ), we obtain its neural vector by: $\\mathbf {x^{\\prime }}_t = \\text{look-up}(\\bar{y}_t, \\mathbf {E}^L)$ .",
"Next step we apply bi-directional LSTM over the sequence $\\mathbf {x^{\\prime }}_1\\mathbf {x^{\\prime }}_2\\cdots \\mathbf {x^{\\prime }}_n$ , which can be formalized as: ",
"$$\\mathbf {h}_1^{\\text{label}} \\mathbf {h}_2^{\\text{label}} \\cdots \\mathbf {h}_n^{\\text{label}} = \\text{Bi-LSTM}(\\mathbf {x^{\\prime }}_1\\mathbf {x^{\\prime }}_2\\cdots \\mathbf {x^{\\prime }}_n).$$ (Eq. 16) ",
"The resulting feature sequence is concatenated with the outputs of the common Bi-LSTM, and further be used for worker classification."
],
[
"Following, we add a convolutional neural network (CNN) module based on the concatenated outputs of the common Bi-LSTM and the label Bi-LSTM, to produce the final features for worker discriminator. A convolutional operator with window size 5 is used, and then max pooling strategy is applied over the convolution sequence to obtain the final fixed-dimensional feature vector. The whole process can be described by the following equations: ",
"$$\\begin{split}\n&\\mathbf {h}_t^{\\text{worker}} = \\mathbf {h}_t^{\\text{common}} \\oplus \\mathbf {h}_t^{\\text{label}} \\\\\n&\\mathbf {\\tilde{h}}_t^{\\text{worker}} = \\tanh (\\mathbf {W}^{\\text{cnn}}[\\mathbf {h}_{t-2}^{\\text{worker}}, \\mathbf {h}_{t-1}^{\\text{worker}}, \\cdots , \\mathbf {h}_{t+2}^{\\text{worker}}]) \\\\\n&\\mathbf {h}^{\\text{worker}} = \\text{max-pooling}(\\mathbf {\\tilde{h}}_1^{\\text{worker}}\\mathbf {\\tilde{h}}_2^{\\text{worker}} \\cdots \\mathbf {\\tilde{h}}_n^{\\text{worker}}) \\\\\n\\end{split}$$ (Eq. 18) ",
"where $t \\in [1,n]$ and $\\mathbf {W}^{\\text{cnn}}$ is one model parameter. We exploit zero vector to paddle the out-of-index vectors."
],
[
"After obtaining the final feature vector for the worker discriminator, we use it to compute the output vector, which scores all the annotation workers. The score function is defined by: ",
"$$\\mathbf {o}^{\\text{worker}} = \\mathbf {W}^{\\text{worker}} \\mathbf {h}^{\\text{worker}},$$ (Eq. 20) ",
"where $\\mathbf {W}^{\\text{worker}}$ is one model parameter and the output dimension equals the number of total non-expert annotators. The prediction is to find the worker which is responsible for this annotation."
],
[
"The training objective with adversarial neural network is different from the baseline model, as it includes the extra worker discriminator. Thus the new objective includes two parts, one being the negative log-likelihood from NER which is the same as the baseline, and the other being the negative the log-likelihood from the worker discriminator.",
"In order to obtain the negative log-likelihood of the worker discriminator, we use softmax to compute the probability of the actual worker $\\bar{z}$ as well, which is defined by: ",
"$$p(\\bar{z}|\\mathbf {X}, \\mathbf {\\bar{y}}) = \\frac{\\exp (\\mathbf {o}^{\\text{worker}}_{\\bar{z}})}{\\sum _{z} \\exp (\\mathbf {o}^{\\text{worker}}_z)},$$ (Eq. 22) ",
"where $z$ should enumerate all workers.",
"Based on the above definition of probability, our new objective is defined as follows: ",
"$$\\begin{split}\n\\text{R}(\\Theta , \\Theta ^{\\prime }, \\mathbf {X}, \\mathbf {\\bar{y}}, \\bar{z}) &= \\text{loss}(\\Theta , \\mathbf {X}, \\mathbf {\\bar{y}}) - \\text{loss}(\\Theta , \\Theta ^{\\prime }, \\mathbf {X}) \\\\\n\\text{~~~~~~} &= -\\log p(\\mathbf {\\bar{y}}|\\mathbf {X}) + \\log p(\\bar{z}|\\mathbf {X}, \\mathbf {\\bar{y}}),\n\\end{split}$$ (Eq. 23) ",
"where $\\Theta $ is the set of all model parameters related to NER, and $\\Theta ^{\\prime }$ is the set of the remaining parameters which are only related to the worker discriminator, $\\mathbf {X}$ , $\\mathbf {\\bar{y}}$ and $\\bar{z}$ are the input sentence, the crowd-annotated NE labels and the corresponding annotator for this annotation, respectively. It is worth noting that the parameters of the common Bi-LSTM are included in the set of $\\Theta $ by definition.",
"In particular, our goal is not to simply minimize the new objective. Actually, we aim for a saddle point, finding the parameters $\\Theta $ and $\\Theta ^{\\prime }$ satisfying the following conditions: ",
"$$\\begin{split}\n\\hat{\\Theta } &= \\mathop {arg~min}_{\\Theta }\\text{R}(\\Theta , \\Theta ^{\\prime }, \\mathbf {X}, \\mathbf {\\bar{y}}, \\bar{z}) \\\\\n\\hat{\\Theta }^{\\prime } &= \\mathop {arg~max}_{\\Theta ^{\\prime }}\\text{R}(\\hat{\\Theta }, \\Theta ^{\\prime }, \\mathbf {X}, \\mathbf {\\bar{y}}, \\bar{z}) \\\\\n\\end{split}$$ (Eq. 24) ",
"where the first equation aims to find one $\\Theta $ that minimizes our new objective $\\text{R}(\\cdot )$ , and the second equation aims to find one $\\Theta ^{\\prime }$ maximizing the same objective.",
"Intuitively, the first equation of Formula 24 tries to minimize the NER loss, but at the same time maximize the worker discriminator loss by the shared parameters of the common Bi-LSTM. Thus the resulting features of common Bi-LSTM actually attempt to hurt the worker discriminator, which makes these features worker independent since they are unable to distinguish different workers. The second equation tries to minimize the worker discriminator loss by its own parameter $\\Theta ^{\\prime }$ .",
"We use the standard back-propagation method to train the model parameters, the same as the baseline model. In order to incorporate the term of the argmax part of Formula 24 , we follow the previous work of adversarial training BIBREF13 , BIBREF15 , BIBREF17 , by introducing a gradient reverse layer between the common Bi-LSTM and the CNN module, whose forward does nothing but the backward simply negates the gradients."
],
[
"With the purpose of obtaining evaluation datasets from crowd annotators, we collect the sentences from two domains: Dialog and E-commerce domain. We hire undergraduate students to annotate the sentences. They are required to identify the predefined types of entities in the sentences. Together with the guideline document, the annotators are educated some tips in fifteen minutes and also provided with 20 exemplifying sentences.",
"Labeled Data: DL-PS. In Dialog domain (DL), we collect raw sentences from a chatbot application. And then we randomly select 20K sentences as our pool and hire 43 students to annotate the sentences. We ask the annotators to label two types of entities: Person-Name and Song-Name. The annotators label the sentences independently. In particular, each sentence is assigned to three annotators for this data. Although the setting can be wasteful of labor, we can use the resulting dataset to test several well-known baselines such as majority voting.",
"After annotation, we remove some illegal sentences reported by the annotators. Finally, we have 16,948 sentences annotated by the students. Table 1 shows the information of annotated data. The average Kappa value among the annotators is 0.6033, indicating that the crowd annotators have moderate agreement on identifying entities on this data.",
"In order to evaluate the system performances, we create a set of corpus with gold annotations. Concretely, we randomly select 1,000 sentences from the final dataset and let two experts generate the gold annotations. Among them, we use 300 sentences as the development set and the remaining 700 as the test set. The rest sentences with only student annotations are used as the training set.",
"Labeled data: EC-MT and EC-UQ. In E-commerce domain (EC), we collect raw sentences from two types of texts: one is titles of merchandise entries (EC-MT) and another is user queries (EC-UQ). The annotators label five types of entities: Brand, Product, Model, Material, and Specification. These five types of entities are very important for E-commerce platform, for example building knowledge graph of merchandises. Five students participate the annotations for this domain since the number of sentences is small. We use the similar strategy as DL-PS to annotate the sentences, except that only two annotators are assigned for each sentence, because we aim to test the system performances under very small duplicated annotations.",
"Finally, we obtain 2,337 sentences for EC-MT and 2,300 for EC-UQ. Table 1 shows the information of annotated results. Similarly, we produce the development and test datasets for system evaluation, by randomly selecting 400 sentences and letting two experts to generate the groundtruth annotations. Among them, we use 100 sentences as the development set and the remaining 300 as the test set. The rest sentences with only crowdsourcing annotations are used as the training set.",
"Unlabeled data. The vector representations of characters are basic inputs of our baseline and proposed models, which are obtained by the looking-up table $\\mathbf {E}^W$ . As introduced before, we can use pretrained embeddings from large-scale raw corpus to initialize the table. In order to pretrain the character embeddings, we use one large-scale unlabeled data from the user-generated content in Internet. Totally, we obtain a number of 5M sentences. Finally, we use the tool word2vec to pretrain the character embeddings based on the unlabeled dataset in our experiments."
],
[
"For evaluation, we use the entity-level metrics of Precision (P), Recall (R), and their F1 value in our experiments, treating one tagged entity as correct only when it matches the gold entity exactly.",
"There are several hyper-parameters in the baseline LSTM-CRF and our final models. We set them empirically by the development performances. Concretely, we set the dimension size of the character embeddings by 100, the dimension size of the NE label embeddings by 50, and the dimension sizes of all the other hidden features by 200.",
"We exploit online training with a mini-batch size 128 to learn model parameters. The max-epoch iteration is set by 200, and the best-epoch model is chosen according to the development performances. We use RMSprop BIBREF28 with a learning rate $10^{-3}$ to update model parameters, and use $l_2$ -regularization by a parameter $10^{-5}$ . We adopt the dropout technique to avoid overfitting by a drop value of $0.2$ ."
],
[
"The proposed approach (henceforward referred to as “ALCrowd”) is compared with the following systems:",
"CRF: We use the Crfsuite tool to train a model on the crowdsourcing labeled data. As for the feature settings, we use the supervised version of BIBREF0 zhao2008unsupervised.",
"CRF-VT: We use the same settings of the CRF system, except that the training data is the voted version, whose groundtruths are produced by majority voting at the character level for each annotated sentence.",
"CRF-MA: The CRF model proposed by BIBREF3 rodrigues2014sequence, which uses a prior distributation to model multiple crowdsourcing annotators. We use the source code provided by the authors.",
"LSTM-CRF: Our baseline system trained on the crowdsourcing labeled data.",
"LSTM-CRF-VT: Our baseline system trained on the voted corpus, which is the same as CRF-VT.",
"LSTM-Crowd: The LSTM-CRF model with crowd annotation learning proposed by BIBREF4 nguyen2017aggregating. We use the source code provided by the authors.",
"The first three systems are based on the CRF model using traditional handcrafted features, and the last three systems are based on the neural LSTM-CRF model. Among them, CRF-MA, LSTM-Crowd and our system with adversarial learning (ALCrowd) are based on crowd annotation learning that directly trains the model on the crowd-annotations. Five systems, including CRF, CRF-MA, LSTM-CRF, LSTM-Crowd, and ALCrowd, are trained on the original version of labeled data, while CRF-VT and LSTM-CRF-VT are trained on the voted version. Since CRF-VT, CRF-MA and LSTM-CRF-VT all require ground-truth answers for each training sentence, which are difficult to be produced with only two annotations, we do not apply the three models on the two EC datasets."
],
[
"In this section, we show the model performances of our proposed crowdsourcing learning system (ALCrowd), and meanwhile compare it with the other systems mentioned above. Table 2 shows the experimental results on the DL-PS datasets and Table 3 shows the experiment results on the EC-MT and EC-UQ datasets, respectively.",
"The results of CRF and LSTM-CRF mean that the crowd annotation is an alternative solution with low cost for labeling data that could be used for training a NER system even there are some inconsistencies. Compared with CRF, LSTM-CRF achieves much better performances on all the three data, showing +6.12 F1 improvement on DL-PS, +4.51 on EC-MT, and +9.19 on EC-UQ. This indicates that LSTM-CRF is a very strong baseline system, demonstrating the effectiveness of neural network.",
"Interestingly, when compared with CRF and LSTM-CRF, CRF-VT and LSTM-CRF-VT trained on the voted version perform worse in the DL-PS dataset. This trend is also mentioned in BIBREF4 nguyen2017aggregating. This fact shows that the majority voting method might be unsuitable for our task. There are two possible reasons accounting for the observation. On the one hand, simple character-level voting based on three annotations for each sentence may be still not enough. In the DL-PS dataset, even with only two predefined entity types, one character can have nine NE labels. Thus the majority-voting may be incapable of handling some cases. While the cost by adding more annotations for each sentence would be greatly increased. On the other hand, the lost information produced by majority-voting may be important, at least the ambiguous annotations denote that the input sentence is difficult for NER. The normal CRF and LSTM-CRF models without discard any annotations can differentiate these difficult contexts through learning.",
"Three crowd-annotation learning systems provide better performances than their counterpart systems, (CRF-MA VS CRF) and (LSTM-Crowd/ALCrowd VS LSTM-CRF). Compared with the strong baseline LSTM-CRF, ALCrowd shows its advantage with +1.08 F1 improvements on DL-PS, +1.24 on EC-MT, and +2.38 on EC-UQ, respectively. This indicates that adding the crowd-annotation learning is quite useful for building NER systems. In addition, ALCrowd also outperforms LSTM-Crowd on all the datasets consistently, demonstrating the high effectiveness of ALCrowd in extracting worker independent features. Among all the systems, ALCrowd performs the best, and significantly better than all the other models (the p-value is below $10^{-5}$ by using t-test). The results indicate that with the help of adversarial training, our system can learn a better feature representation from crowd annotation."
],
[
"Impact of Character Embeddings. First, we investigate the effect of the pretrained character embeddings in our proposed crowdsourcing learning model. The comparison results are shown in Figure 2 , where Random refers to the random initialized character embeddings, and Pretrained refers to the embeddings pretrained on the unlabeled data. According to the results, we find that our model with the pretrained embeddings significantly outperforms that using the random embeddings, demonstrating that the pretrained embeddings successfully provide useful information.",
"Case Studies. Second, we present several case studies in order to study the differences between our baseline and the worker adversarial models. We conduct a closed test on the training set, the results of which can be regarded as modifications of the training corpus, since there exist inconsistent annotations for each training sentence among the different workers. Figure 3 shows the two examples from the DL-PS dataset, which compares the outputs of the baseline and our final models, as well as the majority-voting strategy.",
"In the first case, none of the annotations get the correct NER result, but our proposed model can capture it. The result of LSTM-CRF is the same as majority-voting. In the second example, the output of majority-voting is the worst, which can account for the reason why the same model trained on the voted corpus performs so badly, as shown in Table 2 . The model of LSTM-CRF fails to recognize the named entity “Xiexie” because of not trusting the second annotation, treating it as one noise annotation. Our proposed model is able to recognize it, because of its ability of extracting worker independent features."
],
[
"In this paper, we presented an approach to performing crowd annotation learning based on the idea of adversarial training for Chinese Named Entity Recognition (NER). In our approach, we use a common and private Bi-LSTMs for representing annotator-generic and -specific information, and learn a label Bi-LSTM from the crowd-annotated NE label sequences. Finally, the proposed approach adopts a LSTM-CRF model to perform tagging. In our experiments, we create two data sets for Chinese NER tasks in the dialog and e-commerce domains. The experimental results show that the proposed approach outperforms strong baseline systems."
],
[
"This work is supported by the National Natural Science Foundation of China (Grant No. 61572338, 61525205, and 61602160). This work is also partially supported by the joint research project of Alibaba and Soochow University. Wenliang is also partially supported by Collaborative Innovation Center of Novel Software Technology and Industrialization."
]
],
"section_name": [
"Introduction",
"Related Work",
"Baseline: LSTM-CRF",
"Vector Representation of Characters",
"Feature Extraction",
"CRF Tagging",
"Training",
"Worker Adversarial",
"Common Bi-LSTM over Characters",
"Additional Bi-LSTM over Annotated NER Labels",
"CNN",
"Output and Prediction",
"Adversarial Training",
"Data Sets",
"Settings",
"Comparison Systems",
"Main Results",
"Discussion",
"Conclusions",
" Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"29718ce7b55a1aa58565b0181a60b2b5f10cbfed",
"bc59c8637ed0e5ae90fc49220824ea00d782f8c5"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Main results on the DL-PS data.",
"FLOAT SELECTED: Table 3: Main results on the EC-MT and EC-UQ datasets."
],
"extractive_spans": [],
"free_form_answer": "F1 scores of 85.99 on the DL-PS data, 75.15 on the EC-MT data and 71.53 on the EC-UQ data ",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Main results on the DL-PS data.",
"FLOAT SELECTED: Table 3: Main results on the EC-MT and EC-UQ datasets."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 2: Main results on the DL-PS data.",
"FLOAT SELECTED: Table 3: Main results on the EC-MT and EC-UQ datasets."
],
"extractive_spans": [],
"free_form_answer": "F1 of 85.99 on the DL-PS dataset (dialog domain); 75.15 on EC-MT and 71.53 on EC-UQ (e-commerce domain)",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Main results on the DL-PS data.",
"FLOAT SELECTED: Table 3: Main results on the EC-MT and EC-UQ datasets."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5d0eb97e8e840e171f73b7642c2c89dd3984157b",
"416daf57ea25409ce3ae47c8b21992cd60cc07cf"
]
},
{
"annotation_id": [
"98040dc7538524dfe76903d0ee8a92963fa07815",
"add1ec2e6f42a12b8c81a30d24f0579d4994dd05"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"With the purpose of obtaining evaluation datasets from crowd annotators, we collect the sentences from two domains: Dialog and E-commerce domain. We hire undergraduate students to annotate the sentences. They are required to identify the predefined types of entities in the sentences. Together with the guideline document, the annotators are educated some tips in fifteen minutes and also provided with 20 exemplifying sentences."
],
"extractive_spans": [],
"free_form_answer": "They did not use any platform, instead they hired undergraduate students to do the annotation.",
"highlighted_evidence": [
"With the purpose of obtaining evaluation datasets from crowd annotators, we collect the sentences from two domains: Dialog and E-commerce domain. We hire undergraduate students to annotate the sentences. They are required to identify the predefined types of entities in the sentences. Together with the guideline document, the annotators are educated some tips in fifteen minutes and also provided with 20 exemplifying sentences."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"416daf57ea25409ce3ae47c8b21992cd60cc07cf",
"5d0eb97e8e840e171f73b7642c2c89dd3984157b"
]
}
],
"nlp_background": [
"",
""
],
"paper_read": [
"",
""
],
"question": [
"What accuracy does the proposed system achieve?",
"What crowdsourcing platform is used?"
],
"question_id": [
"ef4dba073d24042f24886580ae77add5326f2130",
"2df4a045a9cd7b44874340b6fdf9308d3c55327a"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
""
],
"topic_background": [
"",
""
]
} | {
"caption": [
"Figure 1: The framework of the proposed model, which consists of two parts.",
"Table 1: Statistics of labeled datasets.",
"Table 2: Main results on the DL-PS data.",
"Table 3: Main results on the EC-MT and EC-UQ datasets.",
"Figure 3: Case studies of different systems, where named entities are illustrated by square brackets.",
"Figure 2: Comparisons by using different character embeddings, where the Y-axis shows the F1 values"
],
"file": [
"3-Figure1-1.png",
"5-Table1-1.png",
"6-Table2-1.png",
"6-Table3-1.png",
"7-Figure3-1.png",
"7-Figure2-1.png"
]
} | [
"What accuracy does the proposed system achieve?",
"What crowdsourcing platform is used?"
] | [
[
"1801.05147-6-Table3-1.png",
"1801.05147-6-Table2-1.png"
],
[
"1801.05147-Data Sets-0"
]
] | [
"F1 of 85.99 on the DL-PS dataset (dialog domain); 75.15 on EC-MT and 71.53 on EC-UQ (e-commerce domain)",
"They did not use any platform, instead they hired undergraduate students to do the annotation."
] | 3 |
1811.00383 | Addressing word-order Divergence in Multilingual Neural Machine Translation for extremely Low Resource Languages | Transfer learning approaches for Neural Machine Translation (NMT) train a NMT model on the assisting-target language pair (parent model) which is later fine-tuned for the source-target language pair of interest (child model), with the target language being the same. In many cases, the assisting language has a different word order from the source language. We show that divergent word order adversely limits the benefits from transfer learning when little to no parallel corpus between the source and target language is available. To bridge this divergence, We propose to pre-order the assisting language sentence to match the word order of the source language and train the parent model. Our experiments on many language pairs show that bridging the word order gap leads to significant improvement in the translation quality. | {
"paragraphs": [
[
"Deep Learning approaches have achieved impressive results on various NLP tasks BIBREF0 , BIBREF1 , BIBREF2 and have become the de facto approach for any NLP task. However, these deep learning techniques have found to be less effective for low-resource languages when the available training data is very less BIBREF3 . Recently, several approaches like Multi-task learning BIBREF4 , multilingual learning BIBREF5 , semi-supervised learning BIBREF2 , BIBREF6 and transfer learning BIBREF7 , BIBREF3 have been explored by the deep learning community to overcome data sparsity in low-resource languages. Transfer learning trains a model for a parent task and fine-tunes the learned parent model weights (features) for a related child task BIBREF7 , BIBREF8 . This effectively reduces the requirement on training data for the child task as the model would have learned relevant features from the parent task data thereby, improving the performance on the child task.",
"Transfer learning has also been explored in the multilingual Neural Machine Translation BIBREF3 , BIBREF9 , BIBREF10 . The goal is to improve the NMT performance on the source to target language pair (child task) using an assisting source language (assisting to target translation is the parent task). Here, the parent model is trained on the assisting and target language parallel corpus and the trained weights are used to initialize the child model. The child model can now be fine-tuned on the source-target language pairs, if parallel corpus is available. The divergence between the source and the assisting language can adversely impact the benefits obtained from transfer learning. Multiple studies have shown that transfer learning works best when the languages are related BIBREF3 , BIBREF10 , BIBREF9 . Several studies have tried to address lexical divergence between the source and the target languages BIBREF10 , BIBREF11 , BIBREF12 . However, the effect of word order divergence and its mitigation has not been explored. In a practical setting, it is not uncommon to have source and assisting languages with different word order. For instance, it is possible to find parallel corpora between English and some Indian languages, but very little parallel corpora between Indian languages. Hence, it is natural to use English as an assisting language for inter-Indian language translation.",
"To see how word order divergence can be detrimental, let us consider the case of the standard RNN (Bi-LSTM) encoder-attention-decoder architecture BIBREF13 . The encoder generates contextual representations (annotation vectors) for each source word, which are used by the attention network to match the source words to the current decoder state. The contextual representation is word-order dependent. Hence, if the assisting and the source languages do not have similar word order the generated contextual representations will not be consistent. The attention network (and hence the decoder) sees different contextual representations for similar words in parallel sentences across different languages. This makes it difficult to transfer knowledge learned from the assisting language to the source language.",
"We illustrate this by visualizing the contextual representations generated by the encoder of an English to Hindi NMT system for two versions of the English input: (a) original word order (SVO) (b) word order of the source language (SOV, for Bengali). Figure FIGREF1 shows that the encoder representations obtained are very different. The attention network and the decoder now have to work with very different representations. Note that the plot below does not take into account further lexical and other divergences between source and assisting languages, since we demonstrated word order divergence with the same language on the source side.",
"To address this word order divergence, we propose to pre-order the assisting language sentences to match the word order of the source language. We consider an extremely resource constrained scenario, where we do not have any parallel corpus for the child task. We are limited to a bilingual dictionary for transfer information from the assisting to the source language. From our experiments, we show that there is a significant increase in the translation accuracy for the unseen source-target language pair."
],
[
" BIBREF3 explored transfer learning for NMT on low-resource languages. They studied the influence of language divergence between languages chosen for training the parent and child model, and showed that choosing similar languages for training the parent and child model leads to better improvements from transfer learning. A limitation of BIBREF3 approach is that they ignore the lexical similarity between languages and also the source language embeddings are randomly initialized. BIBREF10 , BIBREF11 , BIBREF12 take advantage of lexical similarity between languages in their work. BIBREF10 proposed to use Byte-Pair Encoding (BPE) to represent the sentences in both the parent and the child language to overcome the above limitation. They show using BPE benefits transfer learning especially when the involved languages are closely-related agglutinative languages. Similarly, BIBREF11 utilize lexical similarity between the source and assisting languages by training a character-level NMT system. BIBREF12 address lexical divergence by using bilingual embeddings and mixture of universal token embeddings. One of the languages' vocabulary, usually English vocabulary is considered as universal tokens and every word in the other languages is represented as a mixture of universal tokens. They show results on extremely low-resource languages."
],
[
"To the best of our knowledge, no work has addressed word order divergence in transfer learning for multilingual NMT. However, some work exists for other NLP tasks that could potentially address word order. For Named Entity Recognition (NER), BIBREF14 use a self-attention layer after the Bi-LSTM layer to address word-order divergence for Named Entity Recognition (NER) task. The approach does not show any significant improvements over multiple languages. A possible reason is that the divergence has to be addressed before/during construction of the contextual embeddings in the Bi-LSTM layer, and the subsequent self-attention layer does not address word-order divergence. BIBREF15 use adversarial training for cross-lingual question-question similarity ranking in community question answering. The adversarial training tries to force the encoder representations of similar sentences from different input languages to have similar representations."
],
[
"Pre-ordering the source language sentences to match the target language word order has been useful in addressing word-order divergence for Phrase-Based SMT BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 . Recently, BIBREF20 proposed a way to measure and reduce the divergence between the source and target languages based on morphological and syntactic properties, also termed as anisomorphism. They demonstrated that by reducing the anisomorphism between the source and target languages, consistent improvements in NMT performance were obtained. The NMT system used additional features like word forms, POS tags and dependency relations in addition to parallel corpora. On the other hand, BIBREF21 observed a drop in performance due to pre-ordering for NMT. Unlike BIBREF20 , the NMT system was trained on pre-ordered sentences and no additional features were provided to the system. Note that all these works address source-target divergence, not divergence between source languages in multilingual NMT."
],
[
"Consider the task of translating from an extremely low-resource language (source) to a target language. The parallel corpus between the two languages if available may be too small to train a NMT model. Similar to existing works BIBREF3 , BIBREF10 , BIBREF12 , we use transfer learning to overcome data sparsity and train a NMT model between the source and the target languages. Specifically, the NMT model (parent model) is trained on the assisting language and target language pairs. We choose English as the assisting language in all our experiments. In our resource-scarce scenario, we have no parallel corpus for the child task. Hence, at test time, the source language sentence is translated using the parent model after performing a word-by-word translation into the assisting language.",
"Since the source language and the assisting language (English) have different word order, we hypothesize that it leads to inconsistencies in the contextual representations generated by the encoder for the two languages. In this paper, we propose to pre-order English sentences (assisting language sentences) to match the word-order of the source language and train the parent model on this pre-ordered corpus. In our experiments, we look at scenarios where the assisting language has SVO word order and the source language has SOV word order.",
"For instance, consider the English sentence Anurag will meet Thakur. One of the pre-ordering rule swaps the position of the noun phrase followed by a transitive verb with the transitive verb. The original and the resulting re-ordered parse tree will be as shown in the Table TABREF5 . Applying this reordering rule to the above sentence Anurag will meet Thakur will yield the reordered sentence Anurag Thakur will meet. Additionally, the Table TABREF5 shows the parse trees for the above sentence with and without pre-ordering.",
"Pre-ordering should also be beneficial for other word order divergence scenarios (e.g., SOV to SVO), but we leave verification of these additional scenarios for future work."
],
[
"In this section, we describe the languages experimented with, datasets used, the network hyper-parameters used in our experiments."
],
[
"We experimented with English INLINEFORM0 Hindi translation as the parent task. English is the assisting source language. Bengali, Gujarati, Marathi, Malayalam and Tamil are the primary source languages, and translation from these to Hindi constitute the child tasks. Hindi, Bengali, Gujarati and Marathi are Indo-Aryan languages, while Malayalam and Tamil are Dravidian languages. All these languages have a canonical SOV word order."
],
[
"For training English-Hindi NMT systems, we use the IITB English-Hindi parallel corpus BIBREF22 ( INLINEFORM0 sentences from the training set) and the ILCI English-Hindi parallel corpus ( INLINEFORM1 sentences). The ILCI (Indian Language Corpora Initiative) multilingual parallel corpus BIBREF23 spans multiple Indian languages from the health and tourism domains. We use the 520-sentence dev-set of the IITB parallel corpus for validation. For each child task, we use INLINEFORM2 sentences from ILCI corpus as the test set."
],
[
"We use OpenNMT-Torch BIBREF24 to train the NMT system. We use the standard sequence-to-sequence architecture with attention BIBREF13 . We use an encoder which contains two layers of bidirectional LSTMs with 500 neurons each. The decoder contains two LSTM layers with 500 neurons each. Input feeding approach BIBREF1 is used where the previous attention hidden state is fed as input to the decoder LSTM. We use a mini-batch of size 50 and use a dropout layer. We begin with an initial learning rate of INLINEFORM0 and decay the learning rate by a factor of INLINEFORM1 when the perplexity on validation set increases. The training is stopped when the learning rate falls below INLINEFORM2 or number of epochs=22. The English input is initialized with pre-trained embeddings trained using fastText BIBREF25 .",
"English vocabulary consists of INLINEFORM0 tokens appearing at least 2 times in the English training corpus. For constructing the Hindi vocabulary we considered only those tokens appearing at least 5 times in the training split resulting in a vocabulary size of INLINEFORM1 tokens. For representing English and other source languages into a common space, we translate each word in the source language into English using a bilingual dictionary (Google Translate word translation in our case). In an end-to-end solution, it would have been ideal to use bilingual embeddings or obtain word-by-word translations via bilingual embeddings BIBREF14 . But, the quality of publicly available bilingual embeddings for English-Indian languages is very low for obtaining good-quality, bilingual representations BIBREF26 , BIBREF27 . We also found that these embeddings were not useful for transfer learning.",
"We use the CFILT-preorder system for reordering English sentences to match the Indian language word order. It contains two re-ordering systems: (1) generic rules that apply to all Indian languages BIBREF17 , and (2) hindi-tuned rules which improve the generic rules by incorporating improvements found through an error analysis of English-Hindi reordering BIBREF28 . These Hindi-tuned rules have been found to improve reordering for many English to Indian language pairs BIBREF29 ."
],
[
"In this section, we describe the results from our experiments on NMT task. We report the results on X-Hindi pair, where X is one of Bengali, Gujarati, Marathi, Tamil, and Malayalam. The results are presented in the Table TABREF6 . We report BLEU scores and LeBLEU scores BIBREF30 . We observe that both the pre-ordering configurations significantly improve the BLEU scores over the baseline scores. We observe larger gains when generic pre-ordering rules are used compared to the Hindi-tuned pre-ordering rules.",
"These results support our hypothesis that word-order divergence can limit the benefits of multilingual translation. Reducing the word order divergence can improve translation in extremely low-resource scenarios.",
"An analysis of the outputs revealed that pre-ordering significantly reducing the number of UNK tokens (placeholder for unknown words) in the test output (Table TABREF14 ). We hypothesize that due to word order divergence between English and Indian languages, the encoder representation generated is not consistent leading to decoder generating unknown words. However, the pre-ordered models generate better contextual representations leading to less number of unknown tokens and better translation which is also reflected in the BLEU scores."
],
[
"In this paper, we show that handling word-order divergence between source and assisting languages is crucial for the success of multilingual NMT in an extremely low-resource setting. We show that pre-ordering the assisting language to match the word order of the source language significantly improves translation quality in an extremely low-resource setting. While the current work focused on Indian languages, we would like to validate the hypothesis on a more diverse set of languages."
]
],
"section_name": [
"Introduction",
"Addressing Lexical Divergence",
"Addressing Word Order Divergence",
"Use of Pre-ordering",
"Proposed Solution",
"Experimental Setup",
"Languages",
"Datasets",
"Network",
"Results",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"6930f7fb4f318817c67a8c48f098211e38824b45",
"b6fd3b68fce94f81a56667e705bba1d2df03f66e"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"We use the CFILT-preorder system for reordering English sentences to match the Indian language word order. It contains two re-ordering systems: (1) generic rules that apply to all Indian languages BIBREF17 , and (2) hindi-tuned rules which improve the generic rules by incorporating improvements found through an error analysis of English-Hindi reordering BIBREF28 . These Hindi-tuned rules have been found to improve reordering for many English to Indian language pairs BIBREF29 ."
],
"extractive_spans": [
"CFILT-preorder system"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use the CFILT-preorder system for reordering English sentences to match the Indian language word order. It contains two re-ordering systems: (1) generic rules that apply to all Indian languages BIBREF17 , and (2) hindi-tuned rules which improve the generic rules by incorporating improvements found through an error analysis of English-Hindi reordering BIBREF28 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"29a07008df2866038a313b6c016f266992633f14",
"612347660b7c64d3e840db48e898705e0d681db8"
],
"answer": [
{
"evidence": [
"We experimented with English INLINEFORM0 Hindi translation as the parent task. English is the assisting source language. Bengali, Gujarati, Marathi, Malayalam and Tamil are the primary source languages, and translation from these to Hindi constitute the child tasks. Hindi, Bengali, Gujarati and Marathi are Indo-Aryan languages, while Malayalam and Tamil are Dravidian languages. All these languages have a canonical SOV word order."
],
"extractive_spans": [],
"free_form_answer": "5",
"highlighted_evidence": [
"Bengali, Gujarati, Marathi, Malayalam and Tamil are the primary source languages, and translation from these to Hindi constitute the child tasks. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Languages",
"We experimented with English INLINEFORM0 Hindi translation as the parent task. English is the assisting source language. Bengali, Gujarati, Marathi, Malayalam and Tamil are the primary source languages, and translation from these to Hindi constitute the child tasks. Hindi, Bengali, Gujarati and Marathi are Indo-Aryan languages, while Malayalam and Tamil are Dravidian languages. All these languages have a canonical SOV word order."
],
"extractive_spans": [
"Bengali, Gujarati, Marathi, Malayalam and Tamil are the primary source languages, and translation from these to Hindi constitute the child tasks."
],
"free_form_answer": "",
"highlighted_evidence": [
"Languages\nWe experimented with English INLINEFORM0 Hindi translation as the parent task. English is the assisting source language. Bengali, Gujarati, Marathi, Malayalam and Tamil are the primary source languages, and translation from these to Hindi constitute the child tasks. Hindi, Bengali, Gujarati and Marathi are Indo-Aryan languages, while Malayalam and Tamil are Dravidian languages. All these languages have a canonical SOV word order."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"7bd52c59a3c57bbd5951607dfe0aaaff88099004",
"990716547d6c5c922485ab72cdef6088f0a40494"
],
"answer": [
{
"evidence": [
"Datasets",
"For training English-Hindi NMT systems, we use the IITB English-Hindi parallel corpus BIBREF22 ( INLINEFORM0 sentences from the training set) and the ILCI English-Hindi parallel corpus ( INLINEFORM1 sentences). The ILCI (Indian Language Corpora Initiative) multilingual parallel corpus BIBREF23 spans multiple Indian languages from the health and tourism domains. We use the 520-sentence dev-set of the IITB parallel corpus for validation. For each child task, we use INLINEFORM2 sentences from ILCI corpus as the test set."
],
"extractive_spans": [
"IITB English-Hindi parallel corpus BIBREF22",
"ILCI English-Hindi parallel corpus"
],
"free_form_answer": "",
"highlighted_evidence": [
"Datasets\nFor training English-Hindi NMT systems, we use the IITB English-Hindi parallel corpus BIBREF22 ( INLINEFORM0 sentences from the training set) and the ILCI English-Hindi parallel corpus ( INLINEFORM1 sentences). The ILCI (Indian Language Corpora Initiative) multilingual parallel corpus BIBREF23 spans multiple Indian languages from the health and tourism domains. We use the 520-sentence dev-set of the IITB parallel corpus for validation. For each child task, we use INLINEFORM2 sentences from ILCI corpus as the test set."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For training English-Hindi NMT systems, we use the IITB English-Hindi parallel corpus BIBREF22 ( INLINEFORM0 sentences from the training set) and the ILCI English-Hindi parallel corpus ( INLINEFORM1 sentences). The ILCI (Indian Language Corpora Initiative) multilingual parallel corpus BIBREF23 spans multiple Indian languages from the health and tourism domains. We use the 520-sentence dev-set of the IITB parallel corpus for validation. For each child task, we use INLINEFORM2 sentences from ILCI corpus as the test set."
],
"extractive_spans": [
"IITB English-Hindi parallel corpus",
"ILCI English-Hindi parallel corpus"
],
"free_form_answer": "",
"highlighted_evidence": [
"For training English-Hindi NMT systems, we use the IITB English-Hindi parallel corpus BIBREF22 ( INLINEFORM0 sentences from the training set) and the ILCI English-Hindi parallel corpus ( INLINEFORM1 sentences). "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"How do they match words before reordering them?",
"On how many language pairs do they show that preordering assisting language sentences helps translation quality?",
"Which dataset(s) do they experiment with?"
],
"question_id": [
"a313e98994fc039a82aa2447c411dda92c65a470",
"37861be6aecd9242c4fdccdfcd06e48f3f1f8f81",
"7e62a53823aba08bc26b2812db016f5ce6159565"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: Example showing transitive verb before and after reordering (Adapted from Chatterjee et al. (2014))",
"Table 2: Transfer learning results for X -Hindi pair, trained on English-Hindi corpus and sentences from X word translated to English.",
"Table 3: Number of UNK tokens generated by each model on the test set.",
"Table 4: Sample Hindi translation generated by the Gujarati-Hindi NMT model. Text in red indicates phrase dropped by the no pre-ordered model.",
"Table 5: Transfer learning results (BLEU) for Indian Language-Hindi pair, fine-tuned with varying number of Indian Language-Hindi parallel sentences. †Indicates statistically significant difference between Pre-ordered and No Pre-ordered results using paired bootstrap resampling (Koehn, 2004) for a p-value less than 0.05. No Transfer Learning model refers to training the model on varying number of Indian Language-Hindi parallel sentences with randomly initialized weights."
],
"file": [
"2-Table1-1.png",
"3-Table2-1.png",
"3-Table3-1.png",
"4-Table4-1.png",
"5-Table5-1.png"
]
} | [
"On how many language pairs do they show that preordering assisting language sentences helps translation quality?"
] | [
[
"1811.00383-Languages-0"
]
] | [
"5"
] | 4 |
1909.09067 | A Corpus for Automatic Readability Assessment and Text Simplification of German | In this paper, we present a corpus for use in automatic readability assessment and automatic text simplification of German. The corpus is compiled from web sources and consists of approximately 211,000 sentences. As a novel contribution, it contains information on text structure, typography, and images, which can be exploited as part of machine learning approaches to readability assessment and text simplification. The focus of this publication is on representing such information as an extension to an existing corpus standard. | {
"paragraphs": [
[
"Simplified language is a variety of standard language characterized by reduced lexical and syntactic complexity, the addition of explanations for difficult concepts, and clearly structured layout. Among the target groups of simplified language commonly mentioned are persons with cognitive impairment or learning disabilities, prelingually deaf persons, functionally illiterate persons, and foreign language learners BIBREF0.",
"Two natural language processing tasks deal with the concept of simplified language: automatic readability assessment and automatic text simplification. Readability assessment refers to the process of determining the level of difficulty of a text, e.g., along readability measures, school grades, or levels of the Common European Framework of Reference for Languages (CEFR) BIBREF1. Readability measures, in their traditional form, take into account only surface features. For example, the Flesch Reading Ease Score BIBREF2 measures the length of words (in syllables) and sentences (in words). While readability has been shown to correlate with such features to some extent BIBREF3, a consensus has emerged according to which they are not sufficient to account for all of the complexity inherent in a text. As [p. 2618]kauchak-et-al-2014 state, “the usability of readability formulas is limited and there is little evidence that the output of these tools directly results in improved understanding by readers”. Recently, more sophisticated models employing (deeper) linguistic features such as lexical, semantic, morphological, morphosyntactic, syntactic, pragmatic, discourse, psycholinguistic, and language model features have been proposed BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8.",
"Automatic text simplification was initiated in the late 1990s BIBREF9, BIBREF10 and since then has been approached by means of rule-based and statistical methods. As part of a rule-based approach, the operations carried out typically include replacing complex lexical and syntactic units by simpler ones. A statistical approach generally conceptualizes the simplification task as one of converting a standard-language into a simplified-language text using machine translation. nisioi-et-al-2017 introduced neural machine translation to automatic text simplification. Research on automatic text simplification is comparatively widespread for languages such as English, Swedish, Spanish, and Brazilian Portuguese. To the authors' knowledge, no productive system exists for German. suter-2015, suter-et-al-2016 presented a prototype of a rule-based system for German.",
"Machine learning approaches to both readability assessment and text simplification rely on data systematically prepared in the form of corpora. Specifically, for automatic text simplification via machine translation, pairs of standard-language/simplified-language texts aligned at the sentence level (i.e., parallel corpora) are needed.",
"The paper at hand introduces a corpus developed for use in automatic readability assessment and automatic text simplification of German. The focus of this publication is on representing information that is valuable for these tasks but that hitherto has largely been ignored in machine learning approaches centering around simplified language, specifically, text structure (e.g., paragraphs, lines), typography (e.g., font type, font style), and image (content, position, and dimensions) information. The importance of considering such information has repeatedly been asserted theoretically BIBREF11, BIBREF12, BIBREF0. The remainder of this paper is structured as follows: Section SECREF2 presents previous corpora used for automatic readability assessment and text simplification. Section SECREF3 describes our corpus, introducing its novel aspects and presenting the primary data (Section SECREF7), the metadata (Section SECREF10), the secondary data (Section SECREF28), the profile (Section SECREF35), and the results of machine learning experiments carried out on the corpus (Section SECREF37)."
],
[
"A number of corpora for use in automatic readability assessment and automatic text simplification exist. The most well-known example is the Parallel Wikipedia Simplification Corpus (PWKP) compiled from parallel articles of the English Wikipedia and Simple English Wikipedia BIBREF13 and consisting of around 108,000 sentence pairs. The corpus profile is shown in Table TABREF2. While the corpus represents the largest dataset involving simplified language to date, its application has been criticized for various reasons BIBREF15, BIBREF14, BIBREF16; among these, the fact that Simple English Wikipedia articles are not necessarily direct translations of articles from the English Wikipedia stands out. hwang-et-al-2015 provided an updated version of the corpus that includes a total of 280,000 full and partial matches between the two Wikipedia versions. Another frequently used data collection for English is the Newsela Corpus BIBREF14 consisting of 1,130 news articles, each simplified into four school grade levels by professional editors. Table TABREF3 shows the profile of the Newsela Corpus. The table obviates that the difference in vocabulary size between the English and the simplified English side of the PWKP Corpus amounts to only 18%, while the corresponding number for the English side and the level representing the highest amount of simplification in the Newsela Corpus (Simple-4) is 50.8%. Vocabulary size as an indicator of lexical richness is generally taken to correlate positively with complexity BIBREF17.",
"gasperin-et-al-2010 compiled the PorSimples Corpus consisting of Brazilian Portuguese texts (2,116 sentences), each with a natural and a strong simplification, resulting in around 4,500 aligned sentences. drndarevic-saggion-2012, bott-et-al-2012, bott-saggion-2012 produced the Simplext Corpus consisting of 200 Spanish/simplified Spanish document pairs, amounting to a total of 1,149 (Spanish)/1,808 (simplified Spanish) sentences (approximately 1,000 aligned sentences).",
"klaper-ebling-volk-2013 created the first parallel corpus for German/simplified German, consisting of 256 parallel texts downloaded from the web (approximately 70,000 tokens)."
],
[
"Section SECREF2 demonstrated that the only corpus containing simplified German available is that of klaper-ebling-volk-2013. Since its creation, a number of legal and political developments have spurred the availability of data in simplified German. Among these developments is the introduction of a set of regulations for accessible information technology (Barrierefreie-Informationstechnik-Verordnung, BITV 2.0) in Germany and the ratification of the United Nations Convention on the Rights of Persons with Disabilities (CRPD) in Switzerland. The paper at hand introduces a corpus that represents an enhancement of the corpus of klaper-ebling-volk-2013 in the following ways:",
"The corpus contains more parallel data.",
"The corpus additionally contains monolingual-only data (simplified German).",
"The corpus newly contains information on text structure, typography, and images.",
"The simplified German side of the parallel data together with the monolingual-only data can be used for automatic readability assessment. The parallel data in the corpus is useful both for deriving rules for a rule-based text simplification system in a data-driven manner and for training a data-driven machine translation system. A data augmentation technique such as back-translation BIBREF18 can be applied to the monolingual-only data to arrive at additional (synthetic) parallel data."
],
[
"The corpus contains PDFs and webpages collected from web sources in Germany, Austria, and Switzerland at the end of 2018/beginning of 2019. The web sources mostly consist of websites of governments, specialised institutions, translation agencies, and non-profit organisations (92 different domains). The documents cover a range of topics, such as politics (e.g., instructions for voting), health (e.g., what to do in case of pregnancy), and culture (e.g., introduction to art museums).",
"For the webpages, a static dump of all documents was created. Following this, the documents were manually checked to verify the language. The main content was subsequently extracted, i.e., HTML markup and boilerplate removed using the Beautiful Soup library for Python. Information on text structure (e.g., paragraphs, lines) and typography (e.g., boldface, italics) was retained. Similarly, image information (content, position, and dimensions of an image) was preserved.",
"For PDFs, the PDFlib Text and Image Extraction Toolkit (TET) was used to extract the plain text and record information on text structure, typography, and images. The toolkit produces output in an XML format (TETML)."
],
[
"Metadata was collected automatically from the HTML (webpages) and TETML (PDFs) files, complemented manually, and recorded in the Open Language Archives Community (OLAC) Standard. OLAC is based on a reduced version of the Dublin Core Metadata Element Set (DCMES). Of the 15 elements of this “Simple Dublin Core” set, the following 12 were actively used along with controlled vocabularies of OLAC and Dublin Core:",
"title: title of the document, with the language specified as the value of an xml:lang attribute and alternatives to the original title (e.g., translations) stored as dcterms:alternative (cf. Figure FIGREF11 for an example)",
"contributor: all person entities linked to the creation of a document, with an olac:code attribute with values from the OLAC role vocabulary used to further specify the role of the contributor, e.g., author, editor, publisher, or translator",
"date: date mentioned in the metadata of the HTML or PDF source or, for news and blog articles, date mentioned in the body of the text, in W3C date and time format",
"description: value of the description in the metadata of an HTML document or list of sections of a PDF document, using the Dublin Core qualifier TableOfContents",
"format: distinction between the Internet Media Types (MIME types) text/html (for webpages) and application/pdf (for PDFs)",
"identifier: URL of the document or International Standard Book Number (ISBN) for books or brochures",
"language: language of the document as value of the attribute olac:code (i.e., de, as conforming to ISO 639), with the CEFR level as optional element content",
"publisher: organization or person that made the document available",
"relation: used to establish a link between documents in German and simplified German for the parallel part of the corpus, using the Dublin Core qualifiers hasVersion (for the German text) and isVersionOf (for the simplified German text)",
"rights: any piece of information about the rights of a document, as far as available in the source",
"source: source document, i.e., HTML for web documents and TETML for PDFs",
"type: nature or genre of the content of the document, which, in accordance with the DCMI Type Vocabulary, is Text in all cases and additionally StillImage in cases where a document also contains images. Additionally, the linguistic type is specified according to the OLAC Linguistic Data Type Vocabulary, as either primary_text (applies to most documents) or lexicon in cases where a document represents an entry of a simplified language vocabulary",
"The elements coverage (to denote the spatial or temporal scope of the content of a resource), creator (to denote the author of a text, see contributor above), and subject (to denote the topic of the document content) were not used.",
"Figure FIGREF11 shows an example of OLAC metadata. The source document described with this metadata record is a PDF structured into chapters, with text corresponding to the CEFR level A2 and images. Metadata in OLAC can be converted into the metadata standard of CLARIN (a European research infrastructure for language resources and technology), the Component MetaData Infrastructure (CMDI). The CMDI standard was chosen since it is the supported metadata version of CLARIN, which is specifically popular in German-speaking countries.",
"Information on the language level of a simplified German text (typically A1, A2, or B1) is particularly valuable, as it allows for conducting automatic readability assessment and graded automatic text simplification experiments on the data. 52 websites and 233 PDFs (amounting to approximately 26,000 sentences) have an explicit language level label."
],
[
"Annotations were added in the Text Corpus Format by WebLicht (TCF) developed as part of CLARIN. TCF supports standoff annotation, which allows for representation of annotations with conflicting hierarchies. TCF does not assign a separate file for each annotation layer; instead, the source text and all annotation layers are stored jointly in a single file. A token layer acts as the key element to which all other annotation layers are linked.",
"The following types of annotations were added: text structure, fonts, images, tokens, parts of speech, morphological units, lemmas, sentences, and dependency parses. TCF does not readily accommodate the incorporation of all of these types of information. We therefore extended the format in the following ways:",
"Information on the font type and font style (e.g., italics, bold print) of a token and its position on the physical page (for PDFs only) was specified as attributes to the token elements of the tokens layer (cf. Figure FIGREF34 for an example)",
"Information on physical page segmentation (for PDFs only), paragraph segmentation, and line segmentation was added as part of a textspan element in the textstructure layer",
"A separate images layer was introduced to hold image elements that take as attributes the x and y coordinates of the images, their dimensions (width and height), and the number of the page on which they occur",
"A separate fonts layer was introduced to preserve detailed information on the font configurations referenced in the tokens layer",
"Linguistic annotation was added automatically using the ParZu dependency parser for German BIBREF19 (for tokens and dependency parses), the NLTK toolkit BIBREF20 (for sentences), the TreeTagger BIBREF21 (for part-of-speech tags and lemmas), and Zmorge BIBREF22 (for morphological units). Figure FIGREF34 shows a sample corpus annotation. Together, the metadata shown in Figure FIGREF11 and the annotations presented in Figure FIGREF34 constitute a complete TCF file."
],
[
"The resulting corpus contains 6,217 documents (5,461 monolingual documents plus 378 documents for each side of the parallel data). Table TABREF36 shows the corpus profile. The monolingual-only documents on average contain fewer sentences than the simplified German side of the parallel data (average document length in sentences 31.64 vs. 55.75). The average sentence length is almost equal (approx. 11 tokens). Hence, the monolingual-only texts are shorter than the simplified German texts in the parallel data. Compared to their German counterparts, the simplified German texts in the parallel data have clearly undergone a process of lexical simplification: The vocabulary is smaller by 51% (33,384 vs. 16,352 types), which is comparable to the rate of reduction reported in Section SECREF2 for the Newsela Corpus (50.8%)."
],
[
"battisti-2019 applied unsupervised machine learning techniques to the simplified German texts of the corpus presented in this paper with the aim of investigating evidence of multiple complexity levels. While the detailed results are beyond the scope of this paper, the author found features based on the structural information that is a unique property of this corpus (e.g., number of images, number of paragraphs, number of lines, number of words of a specific font type, and adherence to a one-sentence-per-line rule) to be predictive of the level of difficulty of a simplified German text. To our knowledge, this is the first study to deliver empirical proof of the relevance of such features."
],
[
"We have introduced a corpus compiled for use in automatic readability assessment and automatic text simplification of German. While such tasks have been addressed for other languages, research on German is still scarce. The features exploited as part of machine learning approaches to readability assessment so far typically include surface and/or (deeper) linguistic features. The corpus presented in this paper additionally contains information on text structure, typography, and images. These features have been shown to be indicative of simple vs. complex texts both theoretically and, using the corpus described in this paper, empirically.",
"Information on text structure, typography, and images can also be leveraged as part of a neural machine translation approach to text simplification. A set of parallel documents used in machine translation additionally requires sentence alignments, which are still missing from our corpus. Hence, as a next step, we will include such information using the Customized Alignment for Text Simplification (CATS) tool BIBREF23."
]
],
"section_name": [
"Introduction",
"Previous Corpora for Automatic Readability Assessment and Automatic Text Simplification",
"Building a Corpus for Automatic Readability Assessment and Automatic Text Simplification of German",
"Building a Corpus for Automatic Readability Assessment and Automatic Text Simplification of German ::: Primary Data",
"Building a Corpus for Automatic Readability Assessment and Automatic Text Simplification of German ::: Metadata",
"Building a Corpus for Automatic Readability Assessment and Automatic Text Simplification of German ::: Secondary Data",
"Building a Corpus for Automatic Readability Assessment and Automatic Text Simplification of German ::: Corpus Profile",
"Building a Corpus for Automatic Readability Assessment and Automatic Text Simplification of German ::: Empirical validation of the corpus",
"Conclusion and Outlook"
]
} | {
"answers": [
{
"annotation_id": [
"711f1ad56049181f6dda2384816a3df8b4e1056a",
"7e5eae60acdde32c732f6562a721a1d9399e1b9a"
],
"answer": [
{
"evidence": [
"The paper at hand introduces a corpus developed for use in automatic readability assessment and automatic text simplification of German. The focus of this publication is on representing information that is valuable for these tasks but that hitherto has largely been ignored in machine learning approaches centering around simplified language, specifically, text structure (e.g., paragraphs, lines), typography (e.g., font type, font style), and image (content, position, and dimensions) information. The importance of considering such information has repeatedly been asserted theoretically BIBREF11, BIBREF12, BIBREF0. The remainder of this paper is structured as follows: Section SECREF2 presents previous corpora used for automatic readability assessment and text simplification. Section SECREF3 describes our corpus, introducing its novel aspects and presenting the primary data (Section SECREF7), the metadata (Section SECREF10), the secondary data (Section SECREF28), the profile (Section SECREF35), and the results of machine learning experiments carried out on the corpus (Section SECREF37).",
"Information on physical page segmentation (for PDFs only), paragraph segmentation, and line segmentation was added as part of a textspan element in the textstructure layer"
],
"extractive_spans": [
"paragraphs",
"lines",
"Information on physical page segmentation (for PDFs only), paragraph segmentation, and line segmentation"
],
"free_form_answer": "",
"highlighted_evidence": [
"The focus of this publication is on representing information that is valuable for these tasks but that hitherto has largely been ignored in machine learning approaches centering around simplified language, specifically, text structure (e.g., paragraphs, lines), typography (e.g., font type, font style), and image (content, position, and dimensions) information.",
"Information on physical page segmentation (for PDFs only), paragraph segmentation, and line segmentation was added as part of a textspan element in the textstructure layer"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The paper at hand introduces a corpus developed for use in automatic readability assessment and automatic text simplification of German. The focus of this publication is on representing information that is valuable for these tasks but that hitherto has largely been ignored in machine learning approaches centering around simplified language, specifically, text structure (e.g., paragraphs, lines), typography (e.g., font type, font style), and image (content, position, and dimensions) information. The importance of considering such information has repeatedly been asserted theoretically BIBREF11, BIBREF12, BIBREF0. The remainder of this paper is structured as follows: Section SECREF2 presents previous corpora used for automatic readability assessment and text simplification. Section SECREF3 describes our corpus, introducing its novel aspects and presenting the primary data (Section SECREF7), the metadata (Section SECREF10), the secondary data (Section SECREF28), the profile (Section SECREF35), and the results of machine learning experiments carried out on the corpus (Section SECREF37).",
"Information on physical page segmentation (for PDFs only), paragraph segmentation, and line segmentation was added as part of a textspan element in the textstructure layer"
],
"extractive_spans": [],
"free_form_answer": "paragraph, lines, textspan element (paragraph segmentation, line segmentation, Information on physical page segmentation(for PDF only))",
"highlighted_evidence": [
"The focus of this publication is on representing information that is valuable for these tasks but that hitherto has largely been ignored in machine learning approaches centering around simplified language, specifically, text structure (e.g., paragraphs, lines), typography (e.g., font type, font style), and image (content, position, and dimensions) information. ",
"Information on physical page segmentation (for PDFs only), paragraph segmentation, and line segmentation was added as part of a textspan element in the textstructure layer"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"2aaab77dd35850fc276681cabbfbd1d5843bfdfc",
"e24b8928bdb338196e68a30eed426e18662f6071"
],
"answer": [
{
"evidence": [
"The paper at hand introduces a corpus developed for use in automatic readability assessment and automatic text simplification of German. The focus of this publication is on representing information that is valuable for these tasks but that hitherto has largely been ignored in machine learning approaches centering around simplified language, specifically, text structure (e.g., paragraphs, lines), typography (e.g., font type, font style), and image (content, position, and dimensions) information. The importance of considering such information has repeatedly been asserted theoretically BIBREF11, BIBREF12, BIBREF0. The remainder of this paper is structured as follows: Section SECREF2 presents previous corpora used for automatic readability assessment and text simplification. Section SECREF3 describes our corpus, introducing its novel aspects and presenting the primary data (Section SECREF7), the metadata (Section SECREF10), the secondary data (Section SECREF28), the profile (Section SECREF35), and the results of machine learning experiments carried out on the corpus (Section SECREF37).",
"Information on the font type and font style (e.g., italics, bold print) of a token and its position on the physical page (for PDFs only) was specified as attributes to the token elements of the tokens layer (cf. Figure FIGREF34 for an example)"
],
"extractive_spans": [
"font type",
"font style",
"Information on the font type and font style (e.g., italics, bold print) of a token and its position on the physical page"
],
"free_form_answer": "",
"highlighted_evidence": [
"The focus of this publication is on representing information that is valuable for these tasks but that hitherto has largely been ignored in machine learning approaches centering around simplified language, specifically, text structure (e.g., paragraphs, lines), typography (e.g., font type, font style), and image (content, position, and dimensions) information.",
"Information on the font type and font style (e.g., italics, bold print) of a token and its position on the physical page (for PDFs only) was specified as attributes to the token elements of the tokens layer (cf. Figure FIGREF34 for an example)"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Information on the font type and font style (e.g., italics, bold print) of a token and its position on the physical page (for PDFs only) was specified as attributes to the token elements of the tokens layer (cf. Figure FIGREF34 for an example)",
"A separate fonts layer was introduced to preserve detailed information on the font configurations referenced in the tokens layer",
"For the webpages, a static dump of all documents was created. Following this, the documents were manually checked to verify the language. The main content was subsequently extracted, i.e., HTML markup and boilerplate removed using the Beautiful Soup library for Python. Information on text structure (e.g., paragraphs, lines) and typography (e.g., boldface, italics) was retained. Similarly, image information (content, position, and dimensions of an image) was preserved."
],
"extractive_spans": [
"font type and font style (e.g., italics, bold print) of a token and its position on the physical page (for PDFs only) was specified as attributes to the token elements of the tokens layer",
"A separate fonts layer was introduced to preserve detailed information on the font configurations referenced in the tokens layer"
],
"free_form_answer": "",
"highlighted_evidence": [
"Information on the font type and font style (e.g., italics, bold print) of a token and its position on the physical page (for PDFs only) was specified as attributes to the token elements of the tokens layer (cf. Figure FIGREF34 for an example)",
"A separate fonts layer was introduced to preserve detailed information on the font configurations referenced in the tokens layer",
"Information on text structure (e.g., paragraphs, lines) and typography (e.g., boldface, italics) was retained."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"two",
"two"
],
"paper_read": [
"no",
"no"
],
"question": [
"Which information about text structure is included in the corpus?",
"Which information about typography is included in the corpus?"
],
"question_id": [
"9eabb54c2408dac24f00f92cf1061258c7ea2e1a",
"3d013f15796ae7fed5272183a166c45f16e24e39"
],
"question_writer": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
],
"search_query": [
"German",
"German"
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Parallel Wikipedia Simplification Corpus (PWKP) (Zhu et al., 2010): Profile (from Xu et al. (2015))",
"Table 2: Newsela Corpus (Xu et al., 2015): Profile",
"Table 3: Corpus profile"
],
"file": [
"3-Table1-1.png",
"3-Table2-1.png",
"8-Table3-1.png"
]
} | [
"Which information about text structure is included in the corpus?"
] | [
[
"1909.09067-Building a Corpus for Automatic Readability Assessment and Automatic Text Simplification of German ::: Secondary Data-3",
"1909.09067-Introduction-4"
]
] | [
"paragraph, lines, textspan element (paragraph segmentation, line segmentation, Information on physical page segmentation(for PDF only))"
] | 5 |
1909.00512 | How Contextual are Contextualized Word Representations? Comparing the Geometry of BERT, ELMo, and GPT-2 Embeddings | Replacing static word embeddings with contextualized word representations has yielded significant improvements on many NLP tasks. However, just how contextual are the contextualized representations produced by models such as ELMo and BERT? Are there infinitely many context-specific representations for each word, or are words essentially assigned one of a finite number of word-sense representations? For one, we find that the contextualized representations of all words are not isotropic in any layer of the contextualizing model. While representations of the same word in different contexts still have a greater cosine similarity than those of two different words, this self-similarity is much lower in upper layers. This suggests that upper layers of contextualizing models produce more context-specific representations, much like how upper layers of LSTMs produce more task-specific representations. In all layers of ELMo, BERT, and GPT-2, on average, less than 5% of the variance in a word's contextualized representations can be explained by a static embedding for that word, providing some justification for the success of contextualized representations. | {
"paragraphs": [
[
"The application of deep learning methods to NLP is made possible by representing words as vectors in a low-dimensional continuous space. Traditionally, these word embeddings were static: each word had a single vector, regardless of context BIBREF0, BIBREF1. This posed several problems, most notably that all senses of a polysemous word had to share the same representation. More recent work, namely deep neural language models such as ELMo BIBREF2 and BERT BIBREF3, have successfully created contextualized word representations, word vectors that are sensitive to the context in which they appear. Replacing static embeddings with contextualized representations has yielded significant improvements on a diverse array of NLP tasks, ranging from question-answering to coreference resolution.",
"The success of contextualized word representations suggests that despite being trained with only a language modelling task, they learn highly transferable and task-agnostic properties of language. In fact, linear probing models trained on frozen contextualized representations can predict linguistic properties of words (e.g., part-of-speech tags) almost as well as state-of-the-art models BIBREF4, BIBREF5. Still, these representations remain poorly understood. For one, just how contextual are these contextualized word representations? Are there infinitely many context-specific representations that BERT and ELMo can assign to each word, or are words essentially assigned one of a finite number of word-sense representations?",
"We answer this question by studying the geometry of the representation space for each layer of ELMo, BERT, and GPT-2. Our analysis yields some surprising findings:",
"In all layers of all three models, the contextualized word representations of all words are not isotropic: they are not uniformly distributed with respect to direction. Instead, they are anisotropic, occupying a narrow cone in the vector space. The anisotropy in GPT-2's last layer is so extreme that two random words will on average have almost perfect cosine similarity! Given that isotropy has both theoretical and empirical benefits for static embeddings BIBREF6, the extent of anisotropy in contextualized representations is surprising.",
"Occurrences of the same word in different contexts have non-identical vector representations. Where vector similarity is defined as cosine similarity, these representations are more dissimilar to each other in upper layers. This suggests that, much like how upper layers of LSTMs produce more task-specific representations BIBREF4, upper layers of contextualizing models produce more context-specific representations.",
"Context-specificity manifests very differently in ELMo, BERT, and GPT-2. In ELMo, representations of words in the same sentence grow more similar to each other as context-specificity increases in upper layers; in BERT, they become more dissimilar to each other in upper layers but are still more similar than randomly sampled words are on average; in GPT-2, however, words in the same sentence are no more similar to each other than two randomly chosen words.",
"After adjusting for the effect of anisotropy, on average, less than 5% of the variance in a word's contextualized representations can be explained by their first principal component. This holds across all layers of all models. This suggests that contextualized representations do not correspond to a finite number of word-sense representations, and even in the best possible scenario, static embeddings would be a poor replacement for contextualized ones. Still, static embeddings created by taking the first principal component of a word's contextualized representations outperform GloVe and FastText embeddings on many word vector benchmarks.",
"These insights help justify why the use of contextualized representations has led to such significant improvements on many NLP tasks."
],
[
"Skip-gram with negative sampling (SGNS) BIBREF0 and GloVe BIBREF1 are among the best known models for generating static word embeddings. Though they learn embeddings iteratively in practice, it has been proven that in theory, they both implicitly factorize a word-context matrix containing a co-occurrence statistic BIBREF7, BIBREF8. Because they create a single representation for each word, a notable problem with static word embeddings is that all senses of a polysemous word must share a single vector."
],
[
"Given the limitations of static word embeddings, recent work has tried to create context-sensitive word representations. ELMo BIBREF2, BERT BIBREF3, and GPT-2 BIBREF9 are deep neural language models that are fine-tuned to create models for a wide range of downstream NLP tasks. Their internal representations of words are called contextualized word representations because they are a function of the entire input sentence. The success of this approach suggests that these representations capture highly transferable and task-agnostic properties of language BIBREF4.",
"ELMo creates contextualized representations of each token by concatenating the internal states of a 2-layer biLSTM trained on a bidirectional language modelling task BIBREF2. In contrast, BERT and GPT-2 are bi-directional and uni-directional transformer-based language models respectively. Each transformer layer of 12-layer BERT (base, cased) and 12-layer GPT-2 creates a contextualized representation of each token by attending to different parts of the input sentence BIBREF3, BIBREF9. BERT – and subsequent iterations on BERT BIBREF10, BIBREF11 – have achieved state-of-the-art performance on various downstream NLP tasks, ranging from question-answering to sentiment analysis."
],
[
"Prior analysis of contextualized word representations has largely been restricted to probing tasks BIBREF12, BIBREF5. This involves training linear models to predict syntactic (e.g., part-of-speech tag) and semantic (e.g., word relation) properties of words. Probing models are based on the premise that if a simple linear model can be trained to accurately predict a linguistic property, then the representations implicitly encode this information to begin with. While these analyses have found that contextualized representations encode semantic and syntactic information, they cannot answer how contextual these representations are, and to what extent they can be replaced with static word embeddings, if at all. Our work in this paper is thus markedly different from most dissections of contextualized representations. It is more similar to BIBREF13, which studied the geometry of static word embedding spaces."
],
[
"The contextualizing models we study in this paper are ELMo, BERT, and GPT-2. We choose the base cased version of BERT because it is most comparable to GPT-2 with respect to number of layers and dimensionality. The models we work with are all pre-trained on their respective language modelling tasks. Although ELMo, BERT, and GPT-2 have 2, 12, and 12 hidden layers respectively, we also include the input layer of each contextualizing model as its 0th layer. This is because the 0th layer is not contextualized, making it a useful baseline against which to compare the contextualization done by subsequent layers."
],
[
"To analyze contextualized word representations, we need input sentences to feed into our pre-trained models. Our input data come from the SemEval Semantic Textual Similarity tasks from years 2012 - 2016 BIBREF14, BIBREF15, BIBREF16, BIBREF17. We use these datasets because they contain sentences in which the same words appear in different contexts. For example, the word `dog' appears in “A panda dog is running on the road.” and “A dog is trying to get bacon off his back.” If a model generated the same representation for `dog' in both these sentences, we could infer that there was no contextualization; conversely, if the two representations were different, we could infer that they were contextualized to some extent. Using these datasets, we map words to the list of sentences they appear in and their index within these sentences. We do not consider words that appear in less than 5 unique contexts in our analysis."
],
[
"We measure how contextual a word representation is using three different metrics: self-similarity, intra-sentence similarity, and maximum explainable variance."
],
[
"Let $w$ be a word that appears in sentences $\\lbrace s_1, ..., s_n \\rbrace $ at indices $\\lbrace i_1, ..., i_n \\rbrace $ respectively, such that $w = s_1[i_1] = ... = s_n[i_n]$. Let $f_{\\ell }(s,i)$ be a function that maps $s[i]$ to its representation in layer $\\ell $ of model $f$. The self similarity of $w$ in layer $\\ell $ is",
"where $\\cos $ denotes the cosine similarity. In other words, the self-similarity of a word $w$ in layer $\\ell $ is the average cosine similarity between its contextualized representations across its $n$ unique contexts. If layer $\\ell $ does not contextualize the representations at all, then $\\textit {SelfSim}_\\ell (w) = 1$ (i.e., the representations are identical across all contexts). The more contextualized the representations are for $w$, the lower we would expect its self-similarity to be."
],
[
"Let $s$ be a sentence that is a sequence $\\left< w_1, ..., w_n \\right>$ of $n$ words. Let $f_{\\ell }(s,i)$ be a function that maps $s[i]$ to its representation in layer $\\ell $ of model $f$. The intra-sentence similarity of $s$ in layer $\\ell $ is",
"Put more simply, the intra-sentence similarity of a sentence is the average cosine similarity between its word representations and the sentence vector, which is just the mean of those word vectors. This measure captures how context-specificity manifests in the vector space. For example, if both $\\textit {IntraSim}_\\ell (s)$ and $\\textit {SelfSim}_\\ell (w)$ are low $\\forall \\ w \\in s$, then the model contextualizes words in that layer by giving each one a context-specific representation that is still distinct from all other word representations in the sentence. If $\\textit {IntraSim}_\\ell (s)$ is high but $\\textit {SelfSim}_\\ell (w)$ is low, this suggests a less nuanced contextualization, where words in a sentence are contextualized simply by making their representations converge in vector space."
],
[
"Let $w$ be a word that appears in sentences $\\lbrace s_1, ..., s_n \\rbrace $ at indices $\\lbrace i_1, ..., i_n \\rbrace $ respectively, such that $w = s_1[i_1] = ... = s_n[i_n]$. Let $f_{\\ell }(s,i)$ be a function that maps $s[i]$ to its representation in layer $\\ell $ of model $f$. Where $[ f_{\\ell }(s_1, i_1) ... f_{\\ell }(s_n, i_n) ]$ is the occurrence matrix of $w$ and $\\sigma _1 ... \\sigma _m$ are the first $m$ singular values of this matrix, the maximum explainable variance is",
"$\\textit {MEV}_\\ell (w)$ is the proportion of variance in $w$'s contextualized representations for a given layer that can be explained by their first principal component. It gives us an upper bound on how well a static embedding could replace a word's contextualized representations. The closer $\\textit {MEV}_\\ell (w)$ is to 0, the poorer a replacement a static embedding would be; if $\\textit {MEV}_\\ell (w) = 1$, then a static embedding would be a perfect replacement for the contextualized representations."
],
[
"It is important to consider isotropy (or the lack thereof) when discussing contextuality. For example, if word vectors were perfectly isotropic (i.e., directionally uniform), then $\\textit {SelfSim}_\\ell (w) = 0.95$ would suggest that $w$'s representations were poorly contextualized. However, consider the scenario where word vectors are so anisotropic that any two words have on average a cosine similarity of 0.99. Then $\\textit {SelfSim}_\\ell (w) = 0.95$ would actually suggest the opposite – that $w$'s representations were well contextualized. This is because representations of $w$ in different contexts would on average be more dissimilar to each other than two randomly chosen words.",
"To adjust for the effect of anisotropy, we use three anisotropic baselines, one for each of our contextuality measures. For self-similarity and intra-sentence similarity, the baseline is the average cosine similarity between the representations of uniformly randomly sampled words from different contexts. The more anisotropic the word representations are in a given layer, the closer this baseline is to 1. For maximum explainable variance (MEV), the baseline is the proportion of variance in uniformly randomly sampled word representations that is explained by their first principal component. The more anisotropic the representations in a given layer, the closer this baseline is to 1: even for a random assortment of words, the principal component would be able to explain a large proportion of the variance.",
"Since contextuality measures are calculated for each layer of a contextualizing model, we calculate separate baselines for each layer as well. We then subtract from each measure its respective baseline to get the anisotropy-adjusted contexuality measure. For example, the anisotropy-adjusted self-similarity is",
"where $\\mathcal {O}$ is the set of all word occurrences and $f_{\\ell }(\\cdot )$ maps a word occurrence to its representation in layer $\\ell $ of model $f$. Unless otherwise stated, references to contextuality measures in the rest of the paper refer to the anisotropy-adjusted measures, where both the raw measure and baseline are estimated with 1K uniformly randomly sampled word representations."
],
[
"If word representations from a particular layer were isotropic (i.e., directionally uniform), then the average cosine similarity between uniformly randomly sampled words would be 0 BIBREF18. The closer this average is to 1, the more anisotropic the representations. The geometric interpretation of anisotropy is that the word representations all occupy a narrow cone in the vector space rather than being uniform in all directions; the greater the anisotropy, the narrower this cone BIBREF13. As seen in Figure FIGREF20, this implies that in almost all layers of BERT, ELMo and GPT-2, the representations of all words occupy a narrow cone in the vector space. The only exception is ELMo's input layer, which produces static character-level embeddings without using contextual or even positional information BIBREF2. It should be noted that not all static embeddings are necessarily isotropic, however; BIBREF13 found that skipgram embeddings, which are also static, are not isotropic."
],
[
"As seen in Figure FIGREF20, for GPT-2, the average cosine similarity between uniformly randomly words is roughly 0.6 in layers 2 through 8 but increases exponentially from layers 8 through 12. In fact, word representations in GPT-2's last layer are so anisotropic that any two words have on average an almost perfect cosine similarity! This pattern holds for BERT and ELMo as well, though there are exceptions: for example, the anisotropy in BERT's penultimate layer is much higher than in its final layer.",
"Isotropy has both theoretical and empirical benefits for static word embeddings. In theory, it allows for stronger “self-normalization” during training BIBREF18, and in practice, subtracting the mean vector from static embeddings leads to improvements on several downstream NLP tasks BIBREF6. Thus the extreme degree of anisotropy seen in contextualized word representations – particularly in higher layers – is surprising. As seen in Figure FIGREF20, for all three models, the contextualized hidden layer representations are almost all more anisotropic than the input layer representations, which do not incorporate context. This suggests that high anisotropy is inherent to, or least a by-product of, the process of contextualization."
],
[
"Recall from Definition 1 that the self-similarity of a word, in a given layer of a given model, is the average cosine similarity between its representations in different contexts, adjusted for anisotropy. If the self-similarity is 1, then the representations are not context-specific at all; if the self-similarity is 0, that the representations are maximally context-specific. In Figure FIGREF24, we plot the average self-similarity of uniformly randomly sampled words in each layer of BERT, ELMo, and GPT-2. For example, the self-similarity is 1.0 in ELMo's input layer because representations in that layer are static character-level embeddings.",
"In all three models, the higher the layer, the lower the self-similarity is on average. In other words, the higher the layer, the more context-specific the contextualized representations. This finding makes intuitive sense. In image classification models, lower layers recognize more generic features such as edges while upper layers recognize more class-specific features BIBREF19. Similarly, upper layers of LSTMs trained on NLP tasks learn more task-specific representations BIBREF4. Therefore, it follows that upper layers of neural language models learn more context-specific representations, so as to predict the next word for a given context more accurately. Of all three models, representations in GPT-2 are the most context-specific, with those in GPT-2's last layer being almost maximally context-specific."
],
[
"Across all layers, stopwords have among the lowest self-similarity of all words, implying that their contextualized representations are among the most context-specific. For example, the words with the lowest average self-similarity across ELMo's layers are `and', `of', `'s', `the', and `to'. This is relatively surprising, given that these words are not polysemous. This finding suggests that the variety of contexts a word appears in, rather than its inherent polysemy, is what drives variation in its contextualized representations. This answers one of the questions we posed in the introduction: ELMo, BERT, and GPT-2 are not simply assigning one of a finite number of word-sense representations to each word; otherwise, there would not be so much variation in the representations of words with so few word senses."
],
[
"As noted earlier, contextualized representations are more context-specific in upper layers of ELMo, BERT, and GPT-2. However, how does this increased context-specificity manifest in the vector space? Do word representations in the same sentence converge to a single point, or do they remain distinct from one another while still being distinct from their representations in other contexts? To answer this question, we can measure a sentence's intra-sentence similarity. Recall from Definition 2 that the intra-sentence similarity of a sentence, in a given layer of a given model, is the average cosine similarity between each of its word representations and their mean, adjusted for anisotropy. In Figure FIGREF25, we plot the average intra-sentence similarity of 500 uniformly randomly sampled sentences."
],
[
"As word representations in a sentence become more context-specific in upper layers, the intra-sentence similarity also rises. This suggests that, in practice, ELMo ends up extending the intuition behind Firth's BIBREF20 distributional hypothesis to the sentence level: that because words in the same sentence share the same context, their contextualized representations should also be similar."
],
[
"As word representations in a sentence become more context-specific in upper layers, they drift away from one another, although there are exceptions (see layer 12 in Figure FIGREF25). However, in all layers, the average similarity between words in the same sentence is still greater than the average similarity between randomly chosen words (i.e., the anisotropy baseline). This suggests a more nuanced contextualization than in ELMo, with BERT recognizing that although the surrounding sentence informs a word's meaning, two words in the same sentence do not necessarily have a similar meaning because they share the same context."
],
[
"On average, the unadjusted intra-sentence similarity is roughly the same as the anisotropic baseline, so as seen in Figure FIGREF25, the anisotropy-adjusted intra-sentence similarity is close to 0 in most layers of GPT-2. In fact, the intra-sentence similarity is highest in the input layer, which does not contextualize words at all. This is in contrast to ELMo and BERT, where the average intra-sentence similarity is above 0.20 for all but one layer.",
"As noted earlier when discussing BERT, this behavior still makes intuitive sense: two words in the same sentence do not necessarily have a similar meaning simply because they share the same context. The success of GPT-2 suggests that unlike anisotropy, which accompanies context-specificity in all three models, a high intra-sentence similarity is not inherent to contextualization. Words in the same sentence can have highly contextualized representations without those representations being any more similar to each other than two random word representations. It is unclear, however, whether these differences in intra-sentence similarity can be traced back to differences in model architecture; we leave this question as future work."
],
[
"Recall from Definition 3 that the maximum explainable variance (MEV) of a word, for a given layer of a given model, is the proportion of variance in its contextualized representations that can be explained by their first principal component. This gives us an upper bound on how well a static embedding could replace a word's contextualized representations. Because contextualized representations are anisotropic (see section SECREF21), much of the variation across all words can be explained by a single vector. We adjust for anisotropy by calculating the proportion of variance explained by the first principal component of uniformly randomly sampled word representations and subtracting this proportion from the raw MEV. In Figure FIGREF29, we plot the average anisotropy-adjusted MEV across uniformly randomly sampled words.",
"In no layer of ELMo, BERT, or GPT-2 can more than 5% of the variance in a word's contextualized representations be explained by a static embedding, on average. Though not visible in Figure FIGREF29, the raw MEV of many words is actually below the anisotropy baseline: i.e., a greater proportion of the variance across all words can be explained by a single vector than can the variance across all representations of a single word. Note that the 5% threshold represents the best-case scenario, and there is no theoretical guarantee that a word vector obtained using GloVe, for example, would be similar to the static embedding that maximizes MEV. This suggests that contextualizing models are not simply assigning one of a finite number of word-sense representations to each word – otherwise, the proportion of variance explained would be much higher. Even the average raw MEV is below 5% for all layers of ELMo and BERT; only for GPT-2 is the raw MEV non-negligible, being around 30% on average for layers 2 to 11 due to extremely high anisotropy."
],
[
"As noted earlier, we can create static embeddings for each word by taking the first principal component (PC) of its contextualized representations in a given layer. In Table TABREF34, we plot the performance of these PC static embeddings on several benchmark tasks. These tasks cover semantic similarity, analogy solving, and concept categorization: SimLex999 BIBREF21, MEN BIBREF22, WS353 BIBREF23, RW BIBREF24, SemEval-2012 BIBREF25, Google analogy solving BIBREF0 MSR analogy solving BIBREF26, BLESS BIBREF27 and AP BIBREF28. We leave out layers 3 - 10 in Table TABREF34 because their performance is between those of Layers 2 and 11.",
"The best-performing PC static embeddings belong to the first layer of BERT, although those from the other layers of BERT and ELMo also outperform GloVe and FastText on most benchmarks. For all three contextualizing models, PC static embeddings created from lower layers are more effective those created from upper layers. Those created using GPT-2 also perform markedly worse than their counterparts from ELMo and BERT. Given that upper layers are much more context-specific than lower layers, and given that GPT-2's representations are more context-specific than ELMo and BERT's (see Figure FIGREF24), this suggests that the PCs of highly context-specific representations are less effective on traditional benchmarks. Those derived from less context-specific representations, such as those from Layer 1 of BERT, are much more effective."
],
[
"Our findings offer some new directions for future work. For one, as noted earlier in the paper, BIBREF6 found that making static embeddings more isotropic – by subtracting their mean from each embedding – leads to surprisingly large improvements in performance on downstream tasks. Given that isotropy has benefits for static embeddings, it may also have benefits for contextualized word representations, although the latter have already yielded significant improvements despite being highly anisotropic. Therefore, adding an anisotropy penalty to the language modelling objective – to encourage the contextualized representations to be more isotropic – may yield even better results.",
"Another direction for future work is generating static word representations from contextualized ones. While the latter offer superior performance, there are often challenges to deploying large models such as BERT in production, both with respect to memory and run-time. In contrast, static representations are much easier to deploy. Our work in section 4.3 suggests that not only it is possible to extract static representations from contextualizing models, but that these extracted vectors often perform much better on a diverse array of tasks compared to traditional static embeddings such as GloVe and FastText. This may be a means of extracting some use from contextualizing models without incurring the full cost of using them in production."
],
[
"In this paper, we investigated how contextual contextualized word representations truly are. For one, we found that upper layers of ELMo, BERT, and GPT-2 produce more context-specific representations than lower layers. This increased context-specificity is always accompanied by increased anisotropy. However, context-specificity also manifests differently across the three models; the anisotropy-adjusted similarity between words in the same sentence is highest in ELMo but almost non-existent in GPT-2. We ultimately found that after adjusting for anisotropy, on average, less than 5% of the variance in a word's contextualized representations could be explained by a static embedding. This means that even in the best-case scenario, in all layers of all models, static word embeddings would be a poor replacement for contextualized ones. These insights help explain some of the remarkable success that contextualized representations have had on a diverse array of NLP tasks."
],
[
"We thank the anonymous reviewers for their insightful comments. We thank the Natural Sciences and Engineering Research Council of Canada (NSERC) for their financial support."
]
],
"section_name": [
"Introduction",
"Related Work ::: Static Word Embeddings",
"Related Work ::: Contextualized Word Representations",
"Related Work ::: Probing Tasks",
"Approach ::: Contextualizing Models",
"Approach ::: Data",
"Approach ::: Measures of Contextuality",
"Approach ::: Measures of Contextuality ::: Definition 1",
"Approach ::: Measures of Contextuality ::: Definition 2",
"Approach ::: Measures of Contextuality ::: Definition 3",
"Approach ::: Adjusting for Anisotropy",
"Findings ::: (An)Isotropy ::: Contextualized representations are anisotropic in all non-input layers.",
"Findings ::: (An)Isotropy ::: Contextualized representations are generally more anisotropic in higher layers.",
"Findings ::: Context-Specificity ::: Contextualized word representations are more context-specific in higher layers.",
"Findings ::: Context-Specificity ::: Stopwords (e.g., `the', `of', `to') have among the most context-specific representations.",
"Findings ::: Context-Specificity ::: Context-specificity manifests very differently in ELMo, BERT, and GPT-2.",
"Findings ::: Context-Specificity ::: In ELMo, words in the same sentence are more similar to one another in upper layers.",
"Findings ::: Context-Specificity ::: In BERT, words in the same sentence are more dissimilar to one another in upper layers.",
"Findings ::: Context-Specificity ::: In GPT-2, word representations in the same sentence are no more similar to each other than randomly sampled words.",
"Findings ::: Static vs. Contextualized ::: On average, less than 5% of the variance in a word's contextualized representations can be explained by a static embedding.",
"Findings ::: Static vs. Contextualized ::: Principal components of contextualized representations in lower layers outperform GloVe and FastText on many benchmarks.",
"Future Work",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"2b51036f1dd82e127006af1a267198c7f06b0e9b",
"794c2b132bdbbdde76b9ae8d2d17e9133bd51592"
],
"answer": [
{
"evidence": [
"We measure how contextual a word representation is using three different metrics: self-similarity, intra-sentence similarity, and maximum explainable variance."
],
"extractive_spans": [],
"free_form_answer": "They measure self-similarity, intra-sentence similarity and maximum explainable variance of the embeddings in the upper layers.",
"highlighted_evidence": [
"We measure how contextual a word representation is using three different metrics: self-similarity, intra-sentence similarity, and maximum explainable variance."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Recall from Definition 1 that the self-similarity of a word, in a given layer of a given model, is the average cosine similarity between its representations in different contexts, adjusted for anisotropy. If the self-similarity is 1, then the representations are not context-specific at all; if the self-similarity is 0, that the representations are maximally context-specific. In Figure FIGREF24, we plot the average self-similarity of uniformly randomly sampled words in each layer of BERT, ELMo, and GPT-2. For example, the self-similarity is 1.0 in ELMo's input layer because representations in that layer are static character-level embeddings.",
"In all three models, the higher the layer, the lower the self-similarity is on average. In other words, the higher the layer, the more context-specific the contextualized representations. This finding makes intuitive sense. In image classification models, lower layers recognize more generic features such as edges while upper layers recognize more class-specific features BIBREF19. Similarly, upper layers of LSTMs trained on NLP tasks learn more task-specific representations BIBREF4. Therefore, it follows that upper layers of neural language models learn more context-specific representations, so as to predict the next word for a given context more accurately. Of all three models, representations in GPT-2 are the most context-specific, with those in GPT-2's last layer being almost maximally context-specific.",
"As seen in Figure FIGREF20, for GPT-2, the average cosine similarity between uniformly randomly words is roughly 0.6 in layers 2 through 8 but increases exponentially from layers 8 through 12. In fact, word representations in GPT-2's last layer are so anisotropic that any two words have on average an almost perfect cosine similarity! This pattern holds for BERT and ELMo as well, though there are exceptions: for example, the anisotropy in BERT's penultimate layer is much higher than in its final layer.",
"As word representations in a sentence become more context-specific in upper layers, they drift away from one another, although there are exceptions (see layer 12 in Figure FIGREF25). However, in all layers, the average similarity between words in the same sentence is still greater than the average similarity between randomly chosen words (i.e., the anisotropy baseline). This suggests a more nuanced contextualization than in ELMo, with BERT recognizing that although the surrounding sentence informs a word's meaning, two words in the same sentence do not necessarily have a similar meaning because they share the same context."
],
"extractive_spans": [],
"free_form_answer": "They plot the average cosine similarity between uniformly random words increases exponentially from layers 8 through 12. \nThey plot the average self-similarity of uniformly randomly sampled words in each layer of BERT, ELMo, and GPT-2 and shown that the higher layer produces more context-specific embeddings.\nThey plot that word representations in a sentence become more context-specific in upper layers, they drift away from one another.",
"highlighted_evidence": [
"Recall from Definition 1 that the self-similarity of a word, in a given layer of a given model, is the average cosine similarity between its representations in different contexts, adjusted for anisotropy. If the self-similarity is 1, then the representations are not context-specific at all; if the self-similarity is 0, that the representations are maximally context-specific. In Figure FIGREF24, we plot the average self-similarity of uniformly randomly sampled words in each layer of BERT, ELMo, and GPT-2. For example, the self-similarity is 1.0 in ELMo's input layer because representations in that layer are static character-level embeddings.\n\nIn all three models, the higher the layer, the lower the self-similarity is on average. In other words, the higher the layer, the more context-specific the contextualized representations. ",
"As seen in Figure FIGREF20, for GPT-2, the average cosine similarity between uniformly randomly words is roughly 0.6 in layers 2 through 8 but increases exponentially from layers 8 through 12. In fact, word representations in GPT-2's last layer are so anisotropic that any two words have on average an almost perfect cosine similarity! This pattern holds for BERT and ELMo as well, though there are exceptions: for example, the anisotropy in BERT's penultimate layer is much higher than in its final layer.",
"As word representations in a sentence become more context-specific in upper layers, they drift away from one another, although there are exceptions (see layer 12 in Figure FIGREF25). However, in all layers, the average similarity between words in the same sentence is still greater than the average similarity between randomly chosen words (i.e., the anisotropy baseline). "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"67ac81e2a4f7d6d28498852dd4603e2e86bba580",
"e817b89dea1cd7b083475ef39aaf43350b083963"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: The performance of various static embeddings on word embedding benchmark tasks. The best result for each task is in bold. For the contextualizing models (ELMo, BERT, GPT-2), we use the first principal component of a word’s contextualized representations in a given layer as its static embedding. The static embeddings created using ELMo and BERT’s contextualized representations often outperform GloVe and FastText vectors."
],
"extractive_spans": [],
"free_form_answer": "They use the first principal component of a word's contextualized representation in a given layer as its static embedding.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: The performance of various static embeddings on word embedding benchmark tasks. The best result for each task is in bold. For the contextualizing models (ELMo, BERT, GPT-2), we use the first principal component of a word’s contextualized representations in a given layer as its static embedding. The static embeddings created using ELMo and BERT’s contextualized representations often outperform GloVe and FastText vectors."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"As noted earlier, we can create static embeddings for each word by taking the first principal component (PC) of its contextualized representations in a given layer. In Table TABREF34, we plot the performance of these PC static embeddings on several benchmark tasks. These tasks cover semantic similarity, analogy solving, and concept categorization: SimLex999 BIBREF21, MEN BIBREF22, WS353 BIBREF23, RW BIBREF24, SemEval-2012 BIBREF25, Google analogy solving BIBREF0 MSR analogy solving BIBREF26, BLESS BIBREF27 and AP BIBREF28. We leave out layers 3 - 10 in Table TABREF34 because their performance is between those of Layers 2 and 11."
],
"extractive_spans": [
" by taking the first principal component (PC) of its contextualized representations in a given layer"
],
"free_form_answer": "",
"highlighted_evidence": [
"As noted earlier, we can create static embeddings for each word by taking the first principal component (PC) of its contextualized representations in a given layer. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"five",
"five"
],
"paper_read": [
"no",
"no"
],
"question": [
"What experiments are proposed to test that upper layers produce context-specific embeddings?",
"How do they calculate a static embedding for each word?"
],
"question_id": [
"1ec152119cf756b16191b236c85522afeed11f59",
"891c2001d6baaaf0da4e65b647402acac621a7d2"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"BERT",
"BERT"
],
"topic_background": [
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: In almost all layers of BERT, ELMo, and GPT-2, the word representations are anisotropic (i.e., not directionally uniform): the average cosine similarity between uniformly randomly sampled words is non-zero. The one exception is ELMo’s input layer; this is not surprising given that it generates character-level embeddings without using context. Representations in higher layers are generally more anisotropic than those in lower ones.",
"Figure 2: The average cosine similarity between representations of the same word in different contexts is called the word’s self-similarity (see Definition 1). Above, we plot the average self-similarity of uniformly randomly sampled words after adjusting for anisotropy (see section 3.4). In all three models, the higher the layer, the lower the self-similarity, suggesting that contextualized word representations are more context-specific in higher layers.",
"Figure 3: The intra-sentence similarity is the average cosine similarity between each word representation in a sentence and their mean (see Definition 2). Above, we plot the average intra-sentence similarity of uniformly randomly sampled sentences, adjusted for anisotropy. This statistic reflects how context-specificity manifests in the representation space, and as seen above, it manifests very differently for ELMo, BERT, and GPT-2.",
"Figure 4: The maximum explainable variance (MEV) of a word is the proportion of variance in its contextualized representations that can be explained by their first principal component (see Definition 3). Above, we plot the average MEV of uniformly randomly sampled words after adjusting for anisotropy. In no layer of any model can more than 5% of the variance in a word’s contextualized representations be explained by a static embedding.",
"Table 1: The performance of various static embeddings on word embedding benchmark tasks. The best result for each task is in bold. For the contextualizing models (ELMo, BERT, GPT-2), we use the first principal component of a word’s contextualized representations in a given layer as its static embedding. The static embeddings created using ELMo and BERT’s contextualized representations often outperform GloVe and FastText vectors."
],
"file": [
"5-Figure1-1.png",
"6-Figure2-1.png",
"7-Figure3-1.png",
"8-Figure4-1.png",
"8-Table1-1.png"
]
} | [
"What experiments are proposed to test that upper layers produce context-specific embeddings?",
"How do they calculate a static embedding for each word?"
] | [
[
"1909.00512-Findings ::: Context-Specificity ::: Contextualized word representations are more context-specific in higher layers.-0",
"1909.00512-Findings ::: (An)Isotropy ::: Contextualized representations are generally more anisotropic in higher layers.-0",
"1909.00512-Approach ::: Measures of Contextuality-0",
"1909.00512-Findings ::: Context-Specificity ::: In BERT, words in the same sentence are more dissimilar to one another in upper layers.-0",
"1909.00512-Findings ::: Context-Specificity ::: Contextualized word representations are more context-specific in higher layers.-1"
],
[
"1909.00512-Findings ::: Static vs. Contextualized ::: Principal components of contextualized representations in lower layers outperform GloVe and FastText on many benchmarks.-0",
"1909.00512-8-Table1-1.png"
]
] | [
"They plot the average cosine similarity between uniformly random words increases exponentially from layers 8 through 12. \nThey plot the average self-similarity of uniformly randomly sampled words in each layer of BERT, ELMo, and GPT-2 and shown that the higher layer produces more context-specific embeddings.\nThey plot that word representations in a sentence become more context-specific in upper layers, they drift away from one another.",
"They use the first principal component of a word's contextualized representation in a given layer as its static embedding."
] | 7 |
2003.03106 | Sensitive Data Detection and Classification in Spanish Clinical Text: Experiments with BERT | Massive digital data processing provides a wide range of opportunities and benefits, but at the cost of endangering personal data privacy. Anonymisation consists of removing or replacing sensitive information from data, enabling its exploitation for different purposes while preserving the privacy of individuals. Over the years, many automatic anonymisation systems have been proposed; however, depending on the type of data, the target language or the availability of training documents, the task still remains challenging. The emergence of novel deep-learning models during the last two years has brought large improvements to the state of the art in the field of Natural Language Processing. These advancements have been most notably led by BERT, a model proposed by Google in 2018, and the shared language models pre-trained on millions of documents. In this paper, we use a BERT-based sequence labelling model to conduct a series of anonymisation experiments on several clinical datasets in Spanish. We also compare BERT to other algorithms. The experiments show that a simple BERT-based model with general-domain pre-training obtains highly competitive results without any domain-specific feature engineering. | {
"paragraphs": [
[
"During the first two decades of the 21st century, the sharing and processing of vast amounts of data has become pervasive. This expansion of data sharing and processing capabilities is both a blessing and a curse. Data helps build better information systems for the digital era and enables further research for advanced data management that benefits the society in general. But the use of this very data containing sensitive information conflicts with private data protection, both from an ethical and a legal perspective.",
"There are several application domains on which this situation is particularly acute. This is the case of the medical domain BIBREF0. There are plenty of potential applications for advanced medical data management that can only be researched and developed using real data; yet, the use of medical data is severely limited –when not entirely prohibited– due to data privacy protection policies.",
"One way of circumventing this problem is to anonymise the data by removing, replacing or obfuscating the personal information mentioned, as exemplified in Table TABREF1. This task can be done by hand, having people read and anonymise the documents one by one. Despite being a reliable and simple solution, this approach is tedious, expensive, time consuming and difficult to scale to the potentially thousands or millions of documents that need to be anonymised.",
"For this reason, numerous of systems and approaches have been developed during the last decades to attempt to automate the anonymisation of sensitive content, starting with the automatic detection and classification of sensitive information. Some of these systems rely on rules, patterns and dictionaries, while others use more advanced techniques related to machine learning and, more recently, deep learning.",
"Given that this paper is concerned with text documents (e.g. medical records), the involved techniques are related to Natural Language Processing (NLP). When using NLP approaches, it is common to pose the problem of document anonymisation as a sequence labelling problem, i.e. classifying each token within a sequence as being sensitive information or not. Further, depending on the objective of the anonymisation task, it is also important to determine the type of sensitive information (names of individuals, addresses, age, sex, etc.).",
"The anonymisation systems based on NLP techniques perform reasonably well, but are far from perfect. Depending on the difficulty posed by each dataset or the amount of available data for training machine learning models, the performance achieved by these methods is not enough to fully rely on them in certain situations BIBREF0. However, in the last two years, the NLP community has reached an important milestone thanks to the appearance of the so-called Transformers neural network architectures BIBREF1. In this paper, we conduct several experiments in sensitive information detection and classification on Spanish clinical text using BERT (from `Bidirectional Encoder Representations from Transformers') BIBREF2 as the base for a sequence labelling approach. The experiments are carried out on two datasets: the MEDDOCAN: Medical Document Anonymization shared task dataset BIBREF3, and NUBes BIBREF4, a corpus of real medical reports in Spanish. In these experiments, we compare the performance of BERT with other machine-learning-based systems, some of which use language-specific features. Our aim is to evaluate how good a BERT-based model performs without language nor domain specialisation apart from the training data labelled for the task at hand.",
"The rest of the paper is structured as follows: the next section describes related work about data anonymisation in general and clinical data anonymisation in particular; it also provides a more detailed explanation and background about the Transformers architecture and BERT. Section SECREF3 describes the data involved in the experiments and the systems evaluated in this paper, including the BERT-based system; finally, it details the experimental design. Section SECREF4 introduces the results for each set of experiments. Finally, Section SECREF5 contains the conclusions and future lines of work."
],
[
"The state of the art in the field of Natural Language Processing (NLP) has reached an important milestone in the last couple of years thanks to deep-learning architectures, increasing in several points the performance of new models for almost any text processing task.",
"The major change started with the Transformers model proposed by vaswani2017attention. It substituted the widely used recurrent and convolutional neural network architectures by another approach based solely on self-attention, obtaining an impressive performance gain. The original proposal was focused on an encoder-decoder architecture for machine translation, but soon the use of Transformers was made more general BIBREF1. There are several other popular models that use Transformers, such as Open AI's GPT and GPT2 BIBREF5, RoBERTa BIBREF6 and the most recent XLNet BIBREF7; still, BERT BIBREF2 is one of the most widespread Transformer-based models.",
"BERT trains its unsupervised language model using a Masked Language Model and Next Sentence Prediction. A common problem in NLP is the lack of enough training data. BERT can be pre-trained to learn general or specific language models using very large amounts of unlabelled text (e.g. web content, Wikipedia, etc.), and this knowledge can be transferred to a different downstream task in a process that receives the name fine-tuning.",
"devlin2018bert have used fine-tuning to achieve state-of-the-art results on a wide variety of challenging natural language tasks, such as text classification, Question Answering (QA) and Named Entity Recognition and Classification (NERC). BERT has also been used successfully by other community practitioners for a wide range of NLP-related tasks BIBREF8, BIBREF9.",
"Regarding the task of data anonymisation in particular, anonymisation systems may follow different approaches and pursue different objectives (Cormode and Srivastava, 2009). The first objective of these systems is to detect and classify the sensitive information contained in the documents to be anonymised. In order to achieve that, they use rule-based approaches, Machine Learning (ML) approaches, or a combination of both.",
"Although most of these efforts are for English texts –see, among others, the i2b2 de-identification challenges BIBREF10, BIBREF11, dernon2016deep, or khin2018deep–, other languages are also attracting growing interest. Some examples are mamede2016automated for Portuguese and tveit2004anonymization for Norwegian. With respect to the anonymisation of text written in Spanish, recent studies include medina2018building, hassan2018anonimizacion and garcia2018automating. Most notably, in 2019 the first community challenge about anonymisation of medical documents in Spanish, MEDDOCAN BIBREF3, was held as part of the IberLEF initiative. The winners of the challenge –the Neither-Language-nor-Domain-Experts (NLNDE) BIBREF12– achieved F1-scores as high as 0.975 in the task of sensitive information detection and categorisation by using recurrent neural networks with Conditional Random Field (CRF) output layers.",
"At the same challenge, mao2019hadoken occupied the 8th position among 18 participants using BERT. According to the description of the system, the authors used BERT-Base Multilingual Cased and an output CRF layer. However, their system is $\\sim $3 F1-score points below our implementation without the CRF layer."
],
[
"The aim of this paper is to evaluate BERT's multilingual model and compare it to other established machine-learning algorithms in a specific task: sensitive data detection and classification in Spanish clinical free text. This section describes the data involved in the experiments and the systems evaluated. Finally, we introduce the experimental setup."
],
[
"Two datasets are exploited in this article. Both datasets consist of plain text containing clinical narrative written in Spanish, and their respective manual annotations of sensitive information in BRAT BIBREF13 standoff format. In order to feed the data to the different algorithms presented in Section SECREF7, these datasets were transformed to comply with the commonly used BIO sequence representation scheme BIBREF14."
],
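The Data section above mentions that both corpora come as plain text with BRAT standoff annotations and were converted to the BIO sequence scheme. The sketch below shows one plausible way to project character-offset entity spans onto whitespace tokens as BIO tags; the span format and the naive whitespace tokenisation are simplifying assumptions, not the preprocessing actually used by the authors.

```python
def spans_to_bio(text, spans):
    """Project character-offset entity spans onto whitespace tokens as BIO tags.

    spans: list of (start, end, label) tuples, e.g. read from BRAT standoff files.
    Returns a list of (token, tag) pairs.
    """
    pairs, cursor = [], 0
    for token in text.split():
        start = text.index(token, cursor)
        end = start + len(token)
        cursor = end
        tag = "O"
        for s, e, label in spans:
            if start >= s and end <= e:                  # token lies inside an entity span
                tag = ("B-" if start == s else "I-") + label
                break
        pairs.append((token, tag))
    return pairs

print(spans_to_bio("Dr Lopez operated on 12/01/2016",
                   [(0, 8, "DOCTOR"), (21, 31, "DATE")]))
# [('Dr', 'B-DOCTOR'), ('Lopez', 'I-DOCTOR'), ('operated', 'O'), ('on', 'O'), ('12/01/2016', 'B-DATE')]
```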
[
"NUBes BIBREF4 is a corpus of around 7,000 real medical reports written in Spanish and annotated with negation and uncertainty information. Before being published, sensitive information had to be manually annotated and replaced for the corpus to be safely shared. In this article, we work with the NUBes version prior to its anonymisation, that is, with the manual annotations of sensitive information. It follows that the version we work with is not publicly available and, due to contractual restrictions, we cannot reveal the provenance of the data. In order to avoid confusion between the two corpus versions, we henceforth refer to the version relevant in this paper as NUBes-PHI (from `NUBes with Personal Health Information').",
"NUBes-PHI consists of 32,055 sentences annotated for 11 different sensitive information categories. Overall, it contains 7,818 annotations. The corpus has been randomly split into train (72%), development (8%) and test (20%) sets to conduct the experiments described in this paper. The size of each split and the distribution of the annotations can be consulted in Tables and , respectively.",
"The majority of sensitive information in NUBes-PHI are temporal expressions (`Date' and `Time'), followed by healthcare facility mentions (`Hospital'), and the age of the patient. Mentions of people are not that frequent, with physician names (`Doctor') occurring much more often than patient names (`Patient'). The least frequent sensitive information types, which account for $\\sim $10% of the remaining annotations, consist of the patient's sex, job, and kinship, and locations other than healthcare facilities (`Location'). Finally, the tag `Other' includes, for instance, mentions to institutions unrelated to healthcare and whether the patient is right- or left-handed. It occurs just 36 times."
],
[
"The organisers of the MEDDOCAN shared task BIBREF3 curated a synthetic corpus of clinical cases enriched with sensitive information by health documentalists. In this regard, the MEDDOCAN evaluation scenario could be said to be somewhat far from the real use case the technology developed for the shared task is supposed to be applied in. However, at the moment it also provides the only public means for a rigorous comparison between systems for sensitive health information detection in Spanish texts.",
"The size of the MEDDOCAN corpus is shown in Table . Compared to NUBes-PHI (Table ), this corpus contains more sensitive information annotations, both in absolute and relative terms.",
"The sensitive annotation categories considered in MEDDOCAN differ in part from those in NUBes-PHI. Most notably, it contains finer-grained labels for location-related mentions –namely, `Address', `Territory', and `Country'–, and other sensitive information categories that we did not encounter in NUBes-PHI (e.g., identifiers, phone numbers, e-mail addresses, etc.). In total, the MEDDOCAN corpus has 21 sensitive information categories. We refer the reader to the organisers' article BIBREF3 for more detailed information about this corpus."
],
[
"Apart from experimenting with a pre-trained BERT model, we have run experiments with other systems and baselines, to compare them and obtain a better perspective about BERT's performance in these datasets."
],
[
"As the simplest baseline, a sensitive data recogniser and classifier has been developed that consists of regular-expressions and dictionary look-ups. For each category to detect a specific method has been implemented. For instance, the Date, Age, Time and Doctor detectors are based on regular-expressions; Hospital, Sex, Kinship, Location, Patient and Job are looked up in dictionaries. The dictionaries are hand-crafted from the training data available, except for the Patient's case, for which the possible candidates considered are the 100 most common female and male names in Spain according to the Instituto Nacional de Estadística (INE; Spanish Statistical Office)."
],
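As a rough illustration of the rule-based baseline described above — regular expressions for some categories and dictionary look-ups for others — the sketch below tags the tokens of a Spanish sentence. The specific patterns and the tiny gazetteers are invented for the example; the authors' hand-crafted resources are not public.

```python
import re

DATE_RE = re.compile(r"^\d{1,2}/\d{1,2}/\d{2,4}$")   # e.g. 12/01/2016
AGE_RE = re.compile(r"^\d{1,3}$")                    # bare numbers; real rules would use context
HOSPITALS = {"Hospital", "Clínica"}                  # toy gazetteer
NAMES = {"López", "García", "María"}                 # toy gazetteer

def tag_token(token):
    if DATE_RE.match(token):
        return "DATE"
    if token in HOSPITALS:
        return "HOSPITAL"
    if token in NAMES:
        return "NAME"
    if AGE_RE.match(token):
        return "AGE"
    return "O"

sentence = "Paciente de 64 años operado el 12/01/2016 por el Dr López".split()
print([(tok, tag_token(tok)) for tok in sentence])
```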
[
"Conditional Random Fields (CRF) BIBREF15 have been extensively used for tasks of sequential nature. In this paper, we propose as one of the competitive baselines a CRF classifier trained with sklearn-crfsuite for Python 3.5 and the following configuration: algorithm = lbfgs; maximum iterations = 100; c1 = c2 = 0.1; all transitions = true; optimise = false. The features extracted from each token are as follows:",
"[noitemsep]",
"prefixes and suffixes of 2 and 3 characters;",
"the length of the token in characters and the length of the sentence in tokens;",
"whether the token is all-letters, a number, or a sequence of punctuation marks;",
"whether the token contains the character `@';",
"whether the token is the start or end of the sentence;",
"the token's casing and the ratio of uppercase characters, digits, and punctuation marks to its length;",
"and, the lemma, part-of-speech tag, and named-entity tag given by ixa-pipes BIBREF16 upon analysing the sentence the token belongs to.",
"Noticeably, none of the features used to train the CRF classifier is domain-dependent. However, the latter group of features is language dependent."
],
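Below is a minimal sketch of the kind of sklearn-crfsuite setup described in this section, with a reduced version of the listed token features; the training-data variables are placeholders, and the ixa-pipes lemma/POS/NER features are omitted.

```python
import sklearn_crfsuite

def token_features(sent, i):
    tok = sent[i]
    return {
        "prefix2": tok[:2], "suffix2": tok[-2:],
        "prefix3": tok[:3], "suffix3": tok[-3:],
        "length": len(tok), "sent_length": len(sent),
        "is_alpha": tok.isalpha(), "is_digit": tok.isdigit(),
        "has_at": "@" in tok,
        "is_bos": i == 0, "is_eos": i == len(sent) - 1,
        "is_upper": tok.isupper(), "is_title": tok.istitle(),
        "upper_ratio": sum(c.isupper() for c in tok) / len(tok),
    }

def sent2features(sent):
    return [token_features(sent, i) for i in range(len(sent))]

# Configuration mirroring the one reported in the section.
crf = sklearn_crfsuite.CRF(
    algorithm="lbfgs", max_iterations=100,
    c1=0.1, c2=0.1, all_possible_transitions=True,
)
# train_sents / train_labels are placeholders: lists of token lists and BIO-tag lists.
# crf.fit([sent2features(s) for s in train_sents], train_labels)
```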
[
"spaCy is a widely used NLP library that implements state-of-the-art text processing pipelines, including a sequence-labelling pipeline similar to the one described by strubell2017fast. spaCy offers several pre-trained models in Spanish, which perform basic NLP tasks such as Named Entity Recognition (NER). In this paper, we have trained a new NER model to detect NUBes-PHI labels. For this purpose, the new model uses all the labels of the training corpus coded with its context at sentence level. The network optimisation parameters and dropout values are the ones recommended in the documentation for small datasets. Finally, the model is trained using batches of size 64. No more features are included, so the classifier is language-dependent but not domain-dependent."
],
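This is a rough sketch of training a blank Spanish spaCy NER model with mini-batches of 64, as described above. It uses the spaCy v3 training API, which may differ from the library version the authors used; TRAIN_DATA and the dropout value are placeholders.

```python
import random
import spacy
from spacy.training import Example
from spacy.util import minibatch

# Placeholder data in spaCy's (text, {"entities": [(start, end, label), ...]}) format.
TRAIN_DATA = [
    ("Operado el 12/01/2016 por el Dr López",
     {"entities": [(11, 21, "DATE"), (29, 37, "DOCTOR")]}),
]

nlp = spacy.blank("es")
ner = nlp.add_pipe("ner")
for _, ann in TRAIN_DATA:
    for _, _, label in ann["entities"]:
        ner.add_label(label)

optimizer = nlp.initialize()
for epoch in range(10):
    random.shuffle(TRAIN_DATA)
    losses = {}
    for batch in minibatch(TRAIN_DATA, size=64):
        examples = [Example.from_dict(nlp.make_doc(text), ann) for text, ann in batch]
        nlp.update(examples, sgd=optimizer, drop=0.2, losses=losses)  # drop rate is an assumption
```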
[
"As introduced earlier, BERT has shown an outstanding performance in NERC-like tasks, improving the start-of-the-art results for almost every dataset and language. We take the same approach here, by using the model BERT-Base Multilingual Cased with a Fully Connected (FC) layer on top to perform a fine-tuning of the whole model for an anonymisation task in Spanish clinical data. Our implementation is built on PyTorch and the PyTorch-Transformers library BIBREF1. The training phase consists in the following steps (roughly depicted in Figure ):",
"Pre-processing: since we are relying on a pre-trained BERT model, we must match the same configuration by using a specific tokenisation and vocabulary. BERT also needs that the inputs contains special tokens to signal the beginning and the end of each sequence.",
"Fine-tuning: the pre-processed sequence is fed into the model. BERT outputs the contextual embeddings that encode each of the inputted tokens. This embedding representation for each token is fed into the FC linear layer after a dropout layer (with a 0.1 dropout probability), which in turn outputs the logits for each possible class. The cross-entropy loss function is calculated comparing the logits and the gold labels, and the error is back-propagated to adjust the model parameters.",
"We have trained the model using an AdamW optimiser BIBREF17 with the learning rate set to 3e-5, as recommended by devlin2018bert, and with a gradient clipping of 1.0. We also applied a learning-rate scheduler that warms up the learning rate from zero to its maximum value as the training progresses, which is also a common practice. For each experiment set proposed below, the training was run with an early-stopping patience of 15 epochs. Then, the model that performed best against the development set was used to produce the reported results.",
"The experiments were run on a 64-core server with operating system Ubuntu 16.04, 250GB of RAM memory, and 4 GeForce RTX 2080 GPUs with 11GB of memory. The maximum sequence length was set at 500 and the batch size at 12. In this setting, each epoch –a full pass through all the training data– required about 10 minutes to complete."
],
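The following is a condensed sketch of the fine-tuning loop described in this section (multilingual BERT with a token-classification head, cross-entropy loss, AdamW at 3e-5, gradient clipping at 1.0 and a warm-up scheduler). It uses the current Hugging Face transformers API rather than the older PyTorch-Transformers package cited by the authors, and the toy batch, label set and scheduler step counts are placeholders.

```python
import torch
from torch.nn.utils import clip_grad_norm_
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          get_linear_schedule_with_warmup)

NUM_LABELS = 23  # placeholder: 11 sensitive categories in BIO scheme plus 'O'
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=NUM_LABELS)

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=100, num_training_steps=1000)  # placeholder step counts

# Toy batch standing in for the real BIO-tagged clinical sentences.
enc = tokenizer(["Paciente de 64 años operado por el Dr López"],
                return_tensors="pt", padding=True, truncation=True, max_length=500)
labels = torch.zeros_like(enc["input_ids"])  # dummy all-'O' labels

model.train()
outputs = model(**enc, labels=labels)        # cross-entropy loss computed internally
outputs.loss.backward()
clip_grad_norm_(model.parameters(), 1.0)     # gradient clipping at 1.0
optimizer.step()
scheduler.step()
optimizer.zero_grad()
```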
[
"We have conducted experiments with BERT in the two datasets of Spanish clinical narrative presented in Section SECREF3 The first experiment set uses NUBes-PHI, a corpus of real medical reports manually annotated with sensitive information. Because this corpus is not publicly available, and in order to compare the BERT-based model to other related published systems, the second set of experiments uses the MEDDOCAN 2019 shared task competition dataset. The following sections provide greater detail about the two experimental setups."
],
[
"In this experiment set, we evaluate all the systems presented in Section SECREF7, namely, the rule-based baseline, the CRF classifier, the spaCy entity tagger, and BERT. The evaluation comprises three scenarios of increasing difficulty:",
"[noitemsep]",
"- Evaluates the performance of the systems at predicting whether each token is sensitive or non-sensitive; that is, the measurements only take into account whether a sensitive token has been recognised or not, regardless of the BIO label and the category assigned. This scenario shows how good a system would be at obfuscating sensitive data (e.g., by replacing sensitive tokens with asterisks).",
"- We measure the performance of the systems at predicting the sensitive information type of each token –i.e., the 11 categories presented in Section SECREF5 or `out'. Detecting entity types correctly is important if a system is going to be used to replace sensitive data by fake data of the same type (e.g., random people names).",
"- This is the strictest evaluation, as it takes into account both the BIO label and the category assigned to each individual token. Being able to discern between two contiguous sensitive entities of the same type is relevant not only because it is helpful when producing fake replacements, but because it also yields more accurate statistics of the sensitive information present in a given document collection.",
"The systems are evaluated in terms of micro-average precision, recall and F1-score in all the scenarios.",
"In addition to the scenarios proposed, a subject worth being studied is the need of labelled data. Manually labelled data is an scarce and expensive resource, which for some application domains or languages is difficult to come by. In order to obtain an estimation of the dependency of each system on the available amount of training data, we have retrained all the compared models using decreasing amounts of data –from 100% of the available training instances to just 1%. The same data subsets have been used to train all the systems. Due to the knowledge transferred from the pre-trained BERT model, the BERT-based model is expected to be more robust to data scarcity than those that start their training from scratch."
],
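As a small illustration of the token-wise micro-averaged scores used in Experiment A, the sketch below computes precision, recall and F1 over flattened gold and predicted tag sequences; this is a generic implementation of micro-averaging for illustration only, not the authors' evaluation script.

```python
def micro_prf(gold_tags, pred_tags, outside="O"):
    """Token-wise micro-averaged precision/recall/F1 over flattened tag sequences."""
    tp = fp = fn = 0
    for g, p in zip(gold_tags, pred_tags):
        if p != outside and p == g:
            tp += 1
        else:
            if p != outside:
                fp += 1
            if g != outside:
                fn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = ["B-Date", "I-Date", "O", "B-Doctor"]
pred = ["B-Date", "O", "O", "B-Doctor"]
print(micro_prf(gold, pred))   # -> (1.0, 0.666..., 0.8)
```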
[
"In this experiment set, our BERT implementation is compared to several systems that participated in the MEDDOCAN challenge: a CRF classifier BIBREF18, a spaCy entity recogniser BIBREF18, and NLNDE BIBREF12, the winner of the shared task and current state of the art for sensitive information detection and classification in Spanish clinical text. Specifically, we include the results of a domain-independent NLNDE model (S2), and the results of a model enriched with domain-specific embeddings (S3). Finally, we include the results obtained by mao2019hadoken with a CRF output layer on top of BERT embeddings. MEDDOCAN consists of two scenarios:",
"[noitemsep]",
"- This evaluation measures how good a system is at detecting sensitive text spans, regardless of the category assigned to them.",
"- In this scenario, systems are required to match exactly not only the boundaries of each sensitive span, but also the category assigned.",
"The systems are evaluated in terms of micro-averaged precision, recall and F-1 score. Note that, in contrast to the evaluation in Experiment A, MEDDOCAN measurements are entity-based instead of tokenwise. An exhaustive explanation of the MEDDOCAN evaluation procedure is available online, as well as the official evaluation script, which we used to obtain the reported results."
],
[
"This section describes the results obtained in the two sets of experiments: NUBes-PHI and MEDDOCAN."
],
[
"Table shows the results of the conducted experiments in NUBes-PHI for all the compared systems. The included baseline serves to give a quick insight about how challenging the data is. With simple regular expressions and gazetteers a precision of 0.853 is obtained. On the other hand, the recall, which directly depends on the coverage provided by the rules and resources, drops to 0.469. Hence, this task is unlikely to be solved without the generalisation capabilities provided by machine-learning and deep-learning models.",
"Regarding the detection scenario –that is, the scenario concerned with a binary classification to determine whether each individual token conveys sensitive information or not–, it can be observed that BERT outperforms its competitors. A fact worth highlighting is that, according to these results, BERT achieves a precision lower than the rest of the systems (i.e., it makes more false positive predictions); in exchange, it obtains a remarkably higher recall. Noticeably, it reaches a recall of 0.979, improving by more than 4 points the second-best system, spaCy.",
"The table also shows the results for the relaxed metric that only takes into account the entity type detected, regardless of the BIO label (i.e., ignoring whether the token is at the beginning or in the middle of a sensitive sequence of tokens). The conclusions are very similar to those extracted previously, with BERT gaining 2.1 points of F1-score over the CRF based approach. The confusion matrices of the predictions made by CRF, spaCy, and BERT in this scenario are shown in Table . As can bee seen, BERT has less difficulty in predicting correctly less frequent categories, such as `Location', `Job', and `Patient'. One of the most common mistakes according to the confusion matrices is classifying hospital names as `Location' instead of the more accurate `Hospital'; this is hardly a harmful error, given that a hospital is actually a location. Last, the category `Other' is completely leaked by all the compared systems, most likely due to its almost total lack of support in both training and evaluation datasets.",
"To finish with this experiment set, Table also shows the strict classification precision, recall and F1-score for the compared systems. Despite the fact that, in general, the systems obtain high values, BERT outperforms them again. BERT's F1-score is 1.9 points higher than the next most competitive result in the comparison. More remarkably, the recall obtained by BERT is about 5 points above.",
"Upon manual inspection of the errors committed by the BERT-based model, we discovered that it has a slight tendency towards producing ill-formed BIO sequences (e.g, starting a sensitive span with `Inside' instead of `Begin'; see Table ). We could expect that complementing the BERT-based model with a CRF layer on top would help enforce the emission of valid sequences, alleviating this kind of errors and further improving its results.",
"Finally, Figure shows the impact of decreasing the amount of training data in the detection scenario. It shows the difference in precision, recall, and F1-score with respect to that obtained using 100% of the training data. A general downward trend can be observed, as one would expect: less training data leads to less accurate predictions. However, the BERT-based model is the most robust to training-data reduction, showing an steadily low performance loss. With 1% of the dataset (230 training instances), the BERT-based model only suffers a striking 7-point F1-score loss, in contrast to the 32 and 39 points lost by the CRF and spaCy models, respectively. This steep performance drop stems to a larger extent from recall decline, which is not that marked in the case of BERT. Overall, these results indicate that the transfer-learning achieved through the BERT multilingual pre-trained model not only helps obtain better results, but also lowers the need of manually labelled data for this application domain."
],
[
"The results of the two MEDDOCAN scenarios –detection and classification– are shown in Table . These results follow the same pattern as in the previous experiments, with the CRF classifier being the most precise of all, and BERT outperforming both the CRF and spaCy classifiers thanks to its greater recall. We also show the results of mao2019hadoken who, despite of having used a BERT-based system, achieve lower scores than our models. The reason why it should be so remain unclear.",
"With regard to the winner of the MEDDOCAN shared task, the BERT-based model has not improved the scores obtained by neither the domain-dependent (S3) nor the domain-independent (S2) NLNDE model. However, attending to the obtained results, BERT remains only 0.3 F1-score points behind, and would have achieved the second position among all the MEDDOCAN shared task competitors. Taking into account that only 3% of the gold labels remain incorrectly annotated, the task can be considered almost solved, and it is not clear if the differences among the systems are actually significant, or whether they stem from minor variations in initialisation or a long-tail of minor labelling inconsistencies."
],
[
"In this work we have briefly introduced the problems related to data privacy protection in clinical domain. We have also described some of the groundbreaking advances on the Natural Language Processing field due to the appearance of Transformers-based deep-learning architectures and transfer learning from very large general-domain multilingual corpora, focusing our attention in one of its most representative examples, Google's BERT model.",
"In order to assess the performance of BERT for Spanish clinical data anonymisation, we have conducted several experiments with a BERT-based sequence labelling approach using the pre-trained multilingual BERT model shared by Google as the starting point for the model training. We have compared this BERT-based sequence labelling against other methods and systems. One of the experiments uses the MEDDOCAN 2019 shared task dataset, while the other uses a novel Spanish clinical reports dataset called NUBes-PHI.",
"The results of the experiments show that, in NUBes-PHI, the BERT-based model outperforms the other systems without requiring any adaptation or domain-specific feature engineering, just by being trained on the provided labelled data. Interestingly, the BERT-based model obtains a remarkably higher recall than the other systems. High recall is a desirable outcome because, when anonymising sensible documents, the accidental leak of sensible data is likely to be more dangerous than the unintended over-obfuscation of non-sensitive text.",
"Further, we have conducted an additional experiment on this dataset by progressively reducing the training data for all the compared systems. The BERT-based model shows the highest robustness to training-data scarcity, loosing only 7 points of F1-score when trained on 230 instances instead of 21,371. These observation are in line with the results obtained by the NLP community using BERT for other tasks.",
"The experiments with the MEDDOCAN 2019 shared task dataset follow the same pattern. In this case, the BERT-based model falls 0.3 F1-score points behind the shared task winning system, but it would have achieved the second position in the competition with no further refinement.",
"Since we have used a pre-trained multilingual BERT model, the same approach is likely to work for other languages just by providing some labelled training data. Further, this is the simplest fine-tuning that can be performed based on BERT. More sophisticated fine-tuning layers could help improve the results. For example, it could be expected that a CRF layer helped enforce better BIO tagging sequence predictions. Precisely, mao2019hadoken participated in the MEDDOCAN competition using a BERT+CRF architecture, but their reported scores are about 3 points lower than our implementation. From the description of their work, it is unclear what the source of this score difference could be.",
"Further, at the time of writing this paper, new multilingual pre-trained models and Transformer architectures have become available. It would not come as a surprise that these new resources and systems –e.g., XLM-RoBERTa BIBREF19 or BETO BIBREF20, a BERT model fully pre-trained on Spanish texts– further advanced the state of the art in this task."
],
[
"This work has been supported by Vicomtech and partially funded by the project DeepReading (RTI2018-096846-B-C21, MCIU/AEI/FEDER,UE)."
]
],
"section_name": [
"Introduction",
"Related Work",
"Materials and Methods",
"Materials and Methods ::: Data",
"Materials and Methods ::: Data ::: NUBes-PHI",
"Materials and Methods ::: Data ::: The MEDDOCAN corpus",
"Materials and Methods ::: Systems",
"Materials and Methods ::: Systems ::: Baseline",
"Materials and Methods ::: Systems ::: CRF",
"Materials and Methods ::: Systems ::: spaCy",
"Materials and Methods ::: Systems ::: BERT",
"Materials and Methods ::: Experimental design",
"Materials and Methods ::: Experimental design ::: Experiment A: NUBes-PHI",
"Materials and Methods ::: Experimental design ::: Experiment B: MEDDOCAN",
"Results",
"Results ::: Experiment A: NUBes-PHI",
"Results ::: Experiment B: MEDDOCAN",
"Conclusions and Future Work",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"389ee0746c550138221e69a23a5bb366112eba76",
"b2e0ecb3c3458748e6c91dc7c92bdee2493aaf0c"
],
"answer": [
{
"evidence": [
"To finish with this experiment set, Table also shows the strict classification precision, recall and F1-score for the compared systems. Despite the fact that, in general, the systems obtain high values, BERT outperforms them again. BERT's F1-score is 1.9 points higher than the next most competitive result in the comparison. More remarkably, the recall obtained by BERT is about 5 points above.",
"FLOAT SELECTED: Table 5: Results of Experiment A: NUBES-PHI",
"The results of the two MEDDOCAN scenarios –detection and classification– are shown in Table . These results follow the same pattern as in the previous experiments, with the CRF classifier being the most precise of all, and BERT outperforming both the CRF and spaCy classifiers thanks to its greater recall. We also show the results of mao2019hadoken who, despite of having used a BERT-based system, achieve lower scores than our models. The reason why it should be so remain unclear.",
"FLOAT SELECTED: Table 8: Results of Experiment B: MEDDOCAN"
],
"extractive_spans": [],
"free_form_answer": "F1 scores are:\nHUBES-PHI: Detection(0.965), Classification relaxed (0.95), Classification strict (0.937)\nMedoccan: Detection(0.972), Classification (0.967)",
"highlighted_evidence": [
"To finish with this experiment set, Table also shows the strict classification precision, recall and F1-score for the compared systems.",
"FLOAT SELECTED: Table 5: Results of Experiment A: NUBES-PHI",
"The results of the two MEDDOCAN scenarios –detection and classification– are shown in Table .",
"FLOAT SELECTED: Table 8: Results of Experiment B: MEDDOCAN"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this experiment set, our BERT implementation is compared to several systems that participated in the MEDDOCAN challenge: a CRF classifier BIBREF18, a spaCy entity recogniser BIBREF18, and NLNDE BIBREF12, the winner of the shared task and current state of the art for sensitive information detection and classification in Spanish clinical text. Specifically, we include the results of a domain-independent NLNDE model (S2), and the results of a model enriched with domain-specific embeddings (S3). Finally, we include the results obtained by mao2019hadoken with a CRF output layer on top of BERT embeddings. MEDDOCAN consists of two scenarios:",
"The results of the two MEDDOCAN scenarios –detection and classification– are shown in Table . These results follow the same pattern as in the previous experiments, with the CRF classifier being the most precise of all, and BERT outperforming both the CRF and spaCy classifiers thanks to its greater recall. We also show the results of mao2019hadoken who, despite of having used a BERT-based system, achieve lower scores than our models. The reason why it should be so remain unclear.",
"FLOAT SELECTED: Table 8: Results of Experiment B: MEDDOCAN"
],
"extractive_spans": [
"BERT remains only 0.3 F1-score points behind, and would have achieved the second position among all the MEDDOCAN shared task competitors. Taking into account that only 3% of the gold labels remain incorrectly annotated",
" Table "
],
"free_form_answer": "",
"highlighted_evidence": [
"In this experiment set, our BERT implementation is compared to several systems that participated in the MEDDOCAN challenge: a CRF classifier BIBREF18, a spaCy entity recogniser BIBREF18, and NLNDE BIBREF12, the winner of the shared task and current state of the art for sensitive information detection and classification in Spanish clinical text. Specifically, we include the results of a domain-independent NLNDE model (S2), and the results of a model enriched with domain-specific embeddings (S3).",
"The results of the two MEDDOCAN scenarios –detection and classification– are shown in Table . These results follow the same pattern as in the previous experiments, with the CRF classifier being the most precise of all, and BERT outperforming both the CRF and spaCy classifiers thanks to its greater recall. We also show the results of mao2019hadoken who, despite of having used a BERT-based system, achieve lower scores than our models. The reason why it should be so remain unclear.",
"FLOAT SELECTED: Table 8: Results of Experiment B: MEDDOCAN"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"4ed7a931287256c7e0e84a9604845e5459240bf9",
"a8bdba4a04ef9348e720b6684f3db39f0814582e"
],
"answer": [
{
"evidence": [
"Conditional Random Fields (CRF) BIBREF15 have been extensively used for tasks of sequential nature. In this paper, we propose as one of the competitive baselines a CRF classifier trained with sklearn-crfsuite for Python 3.5 and the following configuration: algorithm = lbfgs; maximum iterations = 100; c1 = c2 = 0.1; all transitions = true; optimise = false. The features extracted from each token are as follows:",
"spaCy is a widely used NLP library that implements state-of-the-art text processing pipelines, including a sequence-labelling pipeline similar to the one described by strubell2017fast. spaCy offers several pre-trained models in Spanish, which perform basic NLP tasks such as Named Entity Recognition (NER). In this paper, we have trained a new NER model to detect NUBes-PHI labels. For this purpose, the new model uses all the labels of the training corpus coded with its context at sentence level. The network optimisation parameters and dropout values are the ones recommended in the documentation for small datasets. Finally, the model is trained using batches of size 64. No more features are included, so the classifier is language-dependent but not domain-dependent.",
"As the simplest baseline, a sensitive data recogniser and classifier has been developed that consists of regular-expressions and dictionary look-ups. For each category to detect a specific method has been implemented. For instance, the Date, Age, Time and Doctor detectors are based on regular-expressions; Hospital, Sex, Kinship, Location, Patient and Job are looked up in dictionaries. The dictionaries are hand-crafted from the training data available, except for the Patient's case, for which the possible candidates considered are the 100 most common female and male names in Spain according to the Instituto Nacional de Estadística (INE; Spanish Statistical Office)."
],
"extractive_spans": [
"NER model",
"CRF classifier trained with sklearn-crfsuite",
"classifier has been developed that consists of regular-expressions and dictionary look-up"
],
"free_form_answer": "",
"highlighted_evidence": [
"Conditional Random Fields (CRF) BIBREF15 have been extensively used for tasks of sequential nature. In this paper, we propose as one of the competitive baselines a CRF classifier trained with sklearn-crfsuite for Python 3.5 and the following configuration: algorithm = lbfgs; maximum iterations = 100; c1 = c2 = 0.1; all transitions = true; optimise = false.",
"spaCy is a widely used NLP library that implements state-of-the-art text processing pipelines, including a sequence-labelling pipeline similar to the one described by strubell2017fast. spaCy offers several pre-trained models in Spanish, which perform basic NLP tasks such as Named Entity Recognition (NER). In this paper, we have trained a new NER model to detect NUBes-PHI labels.",
"As the simplest baseline, a sensitive data recogniser and classifier has been developed that consists of regular-expressions and dictionary look-ups. For each category to detect a specific method has been implemented. For instance, the Date, Age, Time and Doctor detectors are based on regular-expressions; Hospital, Sex, Kinship, Location, Patient and Job are looked up in dictionaries. The dictionaries are hand-crafted from the training data available, except for the Patient's case, for which the possible candidates considered are the 100 most common female and male names in Spain according to the Instituto Nacional de Estadística (INE; Spanish Statistical Office)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Apart from experimenting with a pre-trained BERT model, we have run experiments with other systems and baselines, to compare them and obtain a better perspective about BERT's performance in these datasets.",
"As the simplest baseline, a sensitive data recogniser and classifier has been developed that consists of regular-expressions and dictionary look-ups. For each category to detect a specific method has been implemented. For instance, the Date, Age, Time and Doctor detectors are based on regular-expressions; Hospital, Sex, Kinship, Location, Patient and Job are looked up in dictionaries. The dictionaries are hand-crafted from the training data available, except for the Patient's case, for which the possible candidates considered are the 100 most common female and male names in Spain according to the Instituto Nacional de Estadística (INE; Spanish Statistical Office).",
"Conditional Random Fields (CRF) BIBREF15 have been extensively used for tasks of sequential nature. In this paper, we propose as one of the competitive baselines a CRF classifier trained with sklearn-crfsuite for Python 3.5 and the following configuration: algorithm = lbfgs; maximum iterations = 100; c1 = c2 = 0.1; all transitions = true; optimise = false. The features extracted from each token are as follows:",
"spaCy is a widely used NLP library that implements state-of-the-art text processing pipelines, including a sequence-labelling pipeline similar to the one described by strubell2017fast. spaCy offers several pre-trained models in Spanish, which perform basic NLP tasks such as Named Entity Recognition (NER). In this paper, we have trained a new NER model to detect NUBes-PHI labels. For this purpose, the new model uses all the labels of the training corpus coded with its context at sentence level. The network optimisation parameters and dropout values are the ones recommended in the documentation for small datasets. Finally, the model is trained using batches of size 64. No more features are included, so the classifier is language-dependent but not domain-dependent."
],
"extractive_spans": [
"As the simplest baseline, a sensitive data recogniser and classifier",
"Conditional Random Fields (CRF)",
"spaCy "
],
"free_form_answer": "",
"highlighted_evidence": [
"Apart from experimenting with a pre-trained BERT model, we have run experiments with other systems and baselines, to compare them and obtain a better perspective about BERT's performance in these datasets.",
"As the simplest baseline, a sensitive data recogniser and classifier has been developed that consists of regular-expressions and dictionary look-ups. For each category to detect a specific method has been implemented. For instance, the Date, Age, Time and Doctor detectors are based on regular-expressions; Hospital, Sex, Kinship, Location, Patient and Job are looked up in dictionaries. The dictionaries are hand-crafted from the training data available, except for the Patient's case, for which the possible candidates considered are the 100 most common female and male names in Spain according to the Instituto Nacional de Estadística (INE; Spanish Statistical Office).",
"Conditional Random Fields (CRF) BIBREF15 have been extensively used for tasks of sequential nature.",
"spaCy is a widely used NLP library that implements state-of-the-art text processing pipelines, including a sequence-labelling pipeline similar to the one described by strubell2017fast. spaCy offers several pre-trained models in Spanish, which perform basic NLP tasks such as Named Entity Recognition (NER). In this paper, we have trained a new NER model to detect NUBes-PHI labels."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"2c32845fac0232e7bcbfb4792b174f5dc64d30b8",
"845f55ffc75ff4db3c5c76bad28350e5618e2ea2"
],
"answer": [
{
"evidence": [
"In this experiment set, our BERT implementation is compared to several systems that participated in the MEDDOCAN challenge: a CRF classifier BIBREF18, a spaCy entity recogniser BIBREF18, and NLNDE BIBREF12, the winner of the shared task and current state of the art for sensitive information detection and classification in Spanish clinical text. Specifically, we include the results of a domain-independent NLNDE model (S2), and the results of a model enriched with domain-specific embeddings (S3). Finally, we include the results obtained by mao2019hadoken with a CRF output layer on top of BERT embeddings. MEDDOCAN consists of two scenarios:",
"With regard to the winner of the MEDDOCAN shared task, the BERT-based model has not improved the scores obtained by neither the domain-dependent (S3) nor the domain-independent (S2) NLNDE model. However, attending to the obtained results, BERT remains only 0.3 F1-score points behind, and would have achieved the second position among all the MEDDOCAN shared task competitors. Taking into account that only 3% of the gold labels remain incorrectly annotated, the task can be considered almost solved, and it is not clear if the differences among the systems are actually significant, or whether they stem from minor variations in initialisation or a long-tail of minor labelling inconsistencies."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"In this experiment set, our BERT implementation is compared to several systems that participated in the MEDDOCAN challenge: a CRF classifier BIBREF18, a spaCy entity recogniser BIBREF18, and NLNDE BIBREF12, the winner of the shared task and current state of the art for sensitive information detection and classification in Spanish clinical text. ",
"However, attending to the obtained results, BERT remains only 0.3 F1-score points behind, and would have achieved the second position among all the MEDDOCAN shared task competitors. Taking into account that only 3% of the gold labels remain incorrectly annotated, the task can be considered almost solved, and it is not clear if the differences among the systems are actually significant, or whether they stem from minor variations in initialisation or a long-tail of minor labelling inconsistencies."
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"The results of the experiments show that, in NUBes-PHI, the BERT-based model outperforms the other systems without requiring any adaptation or domain-specific feature engineering, just by being trained on the provided labelled data. Interestingly, the BERT-based model obtains a remarkably higher recall than the other systems. High recall is a desirable outcome because, when anonymising sensible documents, the accidental leak of sensible data is likely to be more dangerous than the unintended over-obfuscation of non-sensitive text.",
"The experiments with the MEDDOCAN 2019 shared task dataset follow the same pattern. In this case, the BERT-based model falls 0.3 F1-score points behind the shared task winning system, but it would have achieved the second position in the competition with no further refinement."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The results of the experiments show that, in NUBes-PHI, the BERT-based model outperforms the other systems without requiring any adaptation or domain-specific feature engineering, just by being trained on the provided labelled data.",
"The experiments with the MEDDOCAN 2019 shared task dataset follow the same pattern. In this case, the BERT-based model falls 0.3 F1-score points behind the shared task winning system, but it would have achieved the second position in the competition with no further refinement."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"e278c3d1060bfa2a2a48b7e2f529a45d90b6dfae",
"eee97d5da2b1be99bc2c0896839397ecfd0e6d42"
],
"answer": [
{
"evidence": [
"Two datasets are exploited in this article. Both datasets consist of plain text containing clinical narrative written in Spanish, and their respective manual annotations of sensitive information in BRAT BIBREF13 standoff format. In order to feed the data to the different algorithms presented in Section SECREF7, these datasets were transformed to comply with the commonly used BIO sequence representation scheme BIBREF14.",
"NUBes BIBREF4 is a corpus of around 7,000 real medical reports written in Spanish and annotated with negation and uncertainty information. Before being published, sensitive information had to be manually annotated and replaced for the corpus to be safely shared. In this article, we work with the NUBes version prior to its anonymisation, that is, with the manual annotations of sensitive information. It follows that the version we work with is not publicly available and, due to contractual restrictions, we cannot reveal the provenance of the data. In order to avoid confusion between the two corpus versions, we henceforth refer to the version relevant in this paper as NUBes-PHI (from `NUBes with Personal Health Information').",
"The organisers of the MEDDOCAN shared task BIBREF3 curated a synthetic corpus of clinical cases enriched with sensitive information by health documentalists. In this regard, the MEDDOCAN evaluation scenario could be said to be somewhat far from the real use case the technology developed for the shared task is supposed to be applied in. However, at the moment it also provides the only public means for a rigorous comparison between systems for sensitive health information detection in Spanish texts."
],
"extractive_spans": [
"MEDDOCAN",
"NUBes-PHI"
],
"free_form_answer": "",
"highlighted_evidence": [
"Two datasets are exploited in this article. Both datasets consist of plain text containing clinical narrative written in Spanish, and their respective manual annotations of sensitive information in BRAT BIBREF13 standoff format.",
"NUBes BIBREF4 is a corpus of around 7,000 real medical reports written in Spanish and annotated with negation and uncertainty information.",
"In order to avoid confusion between the two corpus versions, we henceforth refer to the version relevant in this paper as NUBes-PHI (from `NUBes with Personal Health Information').",
"The organisers of the MEDDOCAN shared task BIBREF3 curated a synthetic corpus of clinical cases enriched with sensitive information by health documentalists"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The anonymisation systems based on NLP techniques perform reasonably well, but are far from perfect. Depending on the difficulty posed by each dataset or the amount of available data for training machine learning models, the performance achieved by these methods is not enough to fully rely on them in certain situations BIBREF0. However, in the last two years, the NLP community has reached an important milestone thanks to the appearance of the so-called Transformers neural network architectures BIBREF1. In this paper, we conduct several experiments in sensitive information detection and classification on Spanish clinical text using BERT (from `Bidirectional Encoder Representations from Transformers') BIBREF2 as the base for a sequence labelling approach. The experiments are carried out on two datasets: the MEDDOCAN: Medical Document Anonymization shared task dataset BIBREF3, and NUBes BIBREF4, a corpus of real medical reports in Spanish. In these experiments, we compare the performance of BERT with other machine-learning-based systems, some of which use language-specific features. Our aim is to evaluate how good a BERT-based model performs without language nor domain specialisation apart from the training data labelled for the task at hand."
],
"extractive_spans": [
"MEDDOCAN",
"NUBes "
],
"free_form_answer": "",
"highlighted_evidence": [
" In this paper, we conduct several experiments in sensitive information detection and classification on Spanish clinical text using BERT (from `Bidirectional Encoder Representations from Transformers') BIBREF2 as the base for a sequence labelling approach. The experiments are carried out on two datasets: the MEDDOCAN: Medical Document Anonymization shared task dataset BIBREF3, and NUBes BIBREF4, a corpus of real medical reports in Spanish."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
}
],
"nlp_background": [
"two",
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"What is the performance of BERT on the task?",
"What are the other algorithms tested?",
"Does BERT reach the best performance among all the algorithms compared?",
"What are the clinical datasets used in the paper?"
],
"question_id": [
"66c96c297c2cffdf5013bab5e95b59101cb38655",
"6b53e1f46ae4ba9b75117fc6e593abded89366be",
"c0bee6539eb6956a7347daa9d2419b367bd02064",
"3de0487276bb5961586acc6e9f82934ef8cb668c"
],
"question_writer": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
],
"search_query": [
"Spanish",
"Spanish",
"Spanish",
"Spanish"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Anonymization examples of “64-year-old patient operated on a hernia on the 12/01/2016 by Dr Lopez”; sensitive data and their substitutions are highlighted in bold.",
"Table 2: Size of the NUBES-PHI corpus",
"Table 4: Size of the MEDDOCAN corpus",
"Table 3: Label distribution in the NUBES-PHI corpus",
"Figure 1: Pre-trained BERT with a Fully Connected layer on top to perform the fine-tuning",
"Table 5: Results of Experiment A: NUBES-PHI",
"Table 6: Confusion matrices for the sensitive information classification task on the NUBES-PHI corpus",
"Table 7: BERT error examples (only BIO-tags are shown; differences between gold annotations and predictions are highlighted in bold)",
"Figure 2: Performance decay with decreasing amounts of training data on the sensitive information detection task in the NUBES-PHI corpus",
"Table 8: Results of Experiment B: MEDDOCAN"
],
"file": [
"2-Table1-1.png",
"3-Table2-1.png",
"3-Table4-1.png",
"3-Table3-1.png",
"4-Figure1-1.png",
"5-Table5-1.png",
"6-Table6-1.png",
"6-Table7-1.png",
"7-Figure2-1.png",
"7-Table8-1.png"
]
} | [
"What is the performance of BERT on the task?"
] | [
[
"2003.03106-7-Table8-1.png",
"2003.03106-Materials and Methods ::: Experimental design ::: Experiment B: MEDDOCAN-0",
"2003.03106-Results ::: Experiment A: NUBes-PHI-3",
"2003.03106-Results ::: Experiment B: MEDDOCAN-0",
"2003.03106-5-Table5-1.png"
]
] | [
"F1 scores are:\nHUBES-PHI: Detection(0.965), Classification relaxed (0.95), Classification strict (0.937)\nMedoccan: Detection(0.972), Classification (0.967)"
] | 8 |
1708.01464 | Massively Multilingual Neural Grapheme-to-Phoneme Conversion | Grapheme-to-phoneme conversion (g2p) is necessary for text-to-speech and automatic speech recognition systems. Most g2p systems are monolingual: they require language-specific data or handcrafting of rules. Such systems are difficult to extend to low resource languages, for which data and handcrafted rules are not available. As an alternative, we present a neural sequence-to-sequence approach to g2p which is trained on spelling--pronunciation pairs in hundreds of languages. The system shares a single encoder and decoder across all languages, allowing it to utilize the intrinsic similarities between different writing systems. We show an 11% improvement in phoneme error rate over an approach based on adapting high-resource monolingual g2p models to low-resource languages. Our model is also much more compact relative to previous approaches. | {
"paragraphs": [
[
"Accurate grapheme-to-phoneme conversion (g2p) is important for any application that depends on the sometimes inconsistent relationship between spoken and written language. Most prominently, this includes text-to-speech and automatic speech recognition. Most work on g2p has focused on a few languages for which extensive pronunciation data is available BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . Most languages lack these resources. However, a low resource language's writing system is likely to be similar to the writing systems of languages that do have sufficient pronunciation data. Therefore g2p may be possible for low resource languages if this high resource data can be properly utilized.",
"We attempt to leverage high resource data by treating g2p as a multisource neural machine translation (NMT) problem. The source sequences for our system are words in the standard orthography in any language. The target sequences are the corresponding representation in the International Phonetic Alphabet (IPA). Our results show that the parameters learned by the shared encoder–decoder are able to exploit the orthographic and phonemic similarities between the various languages in our data."
],
[
"Our approach is similar in goal to deri2016grapheme's model for adapting high resource g2p models for low resource languages. They trained weighted finite state transducer (wFST) models on a variety of high resource languages, then transferred those models to low resource languages, using a language distance metric to choose which high resource models to use and a phoneme distance metric to map the high resource language's phonemes to the low resource language's phoneme inventory. These distance metrics are computed based on data from Phoible BIBREF4 and URIEL BIBREF5 .",
"Other low resource g2p systems have used a strategy of combining multiple models. schlippe2014combining trained several data-driven g2p systems on varying quantities of monolingual data and combined their outputs with a phoneme-level voting scheme. This led to improvements over the best-performing single system for small quantities of data in some languages. jyothilow trained recurrent neural networks for small data sets and found that a version of their system that combined the neural network output with the output of the wFST-based Phonetisaurus system BIBREF1 did better than either system alone.",
"A different approach came from kim2012universal, who used supervised learning with an undirected graphical model to induce the grapheme–phoneme mappings for languages written in the Latin alphabet. Given a short text in a language, the model predicts the language's orthographic rules. To create phonemic context features from the short text, the model naïvely maps graphemes to IPA symbols written with the same character, and uses the features of these symbols to learn an approximation of the phonotactic constraints of the language. In their experiments, these phonotactic features proved to be more valuable than geographical and genetic features drawn from WALS BIBREF6 ."
],
[
"In recent years, neural networks have emerged as a common way to use data from several languages in a single system. Google's zero-shot neural machine translation system BIBREF7 shares an encoder and decoder across all language pairs. In order to facilitate this multi-way translation, they prepend an artificial token to the beginning of each source sentence at both training and translation time. The token identifies what language the sentence should be translated to. This approach has three benefits: it is far more efficient than building a separate model for each language pair; it allows for translation between languages that share no parallel data; and it improves results on low-resource languages by allowing them to implicitly share parameters with high-resource languages. Our g2p system is inspired by this approach, although it differs in that there is only one target “language”, IPA, and the artificial tokens identify the language of the source instead of the language of the target.",
"Other work has also made use of multilingually-trained neural networks. Phoneme-level polyglot language models BIBREF8 train a single model on multiple languages and additionally condition on externally constructed typological data about the language. ostling2017continuous used a similar approach, in which a character-level neural language model is trained on a massively multilingual corpus. A language embedding vector is concatenated to the input at each time step. The language embeddings their system learned correlate closely to the genetic relationships between languages. However, neither of these models was applied to g2p."
],
[
"g2p is the problem of converting the orthographic representation of a word into a phonemic representation. A phoneme is an abstract unit of sound which may have different realizations in different contexts. For example, the English phoneme has two phonetic realizations (or allophones):",
"English speakers without linguistic training often struggle to perceive any difference between these sounds. Writing systems usually do not distinguish between allophones: and are both written as INLINEFORM0 p INLINEFORM1 in English. The sounds are written differently in languages where they contrast, such as Hindi and Eastern Armenian.",
"Most writing systems in use today are glottographic, meaning that their symbols encode solely phonological information. But despite being glottographic, in few writing systems do graphemes correspond one-to-one with phonemes. There are cases in which multiple graphemes represent a single phoneme, as in the word the in English:",
"",
"There are cases in which a single grapheme represents multiple phonemes, such as syllabaries, in which each symbol represents a syllable.",
"In many languages, there are silent letters, as in the word hora in Spanish:",
"",
"There are more complicated correspondences, such as the silent e in English that affects the pronunciation of the previous vowel, as seen in the pair of words cape and cap.",
"It is possible for an orthographic system to have any or all of the above phenomena while remaining unambiguous. However, some orthographic systems contain ambiguities. English is well-known for its spelling ambiguities. Abjads, used for Arabic and Hebrew, do not give full representation to vowels.",
"Consequently, g2p is harder than simply replacing each grapheme symbol with a corresponding phoneme symbol. It is the problem of replacing a grapheme sequence INLINEFORM0 ",
"with a phoneme sequence INLINEFORM0 ",
"where the sequences are not necessarily of the same length. Data-driven g2p is therefore the problem of finding the phoneme sequence that maximizes the likelihood of the grapheme sequence: INLINEFORM0 ",
"Data-driven approaches are especially useful for problems in which the rules that govern them are complex and difficult to engineer by hand. g2p for languages with ambiguous orthographies is such a problem. Multilingual g2p, in which the various languages have similar but different and possibly contradictory spelling rules, can be seen as an extreme case of that. Therefore, a data-driven sequence-to-sequence model is a natural choice."
],
[
"In order to find the best phoneme sequence, we use a neural encoder–decoder model with attention BIBREF9 . The model consists of two main parts: the encoder compresses each source grapheme sequence INLINEFORM0 into a fixed-length vector. The decoder, conditioned on this fixed-length vector, generates the output phoneme sequence INLINEFORM1 .",
"The encoder and decoder are both implemented as recurrent neural networks, which have the advantage of being able to process sequences of arbitrary length and use long histories efficiently. They are trained jointly to minimize cross-entropy on the training data. We had our best results when using a bidirectional encoder, which consists of two separate encoders which process the input in forward and reverse directions. We used long short-term memory units BIBREF10 for both the encoder and decoder. For the attention mechanism, we used the general global attention architecture described by luong2015effective.",
"We implemented all models with OpenNMT BIBREF11 . Our hyperparameters, which we determined by experimentation, are listed in Table TABREF8 ."
],
[
"Presenting pronunciation data in several languages to the network might create problems because different languages have different pronunciation patterns. For example, the string `real' is pronounced differently in English, German, Spanish, and Portuguese. We solve this problem by prepending each grapheme sequence with an artificial token consisting of the language's ISO 639-3 code enclosed in angle brackets. The English word `real', for example, would be presented to the system as",
" INLINEFORM0 eng INLINEFORM1 r e a l",
"The artificial token is treated simply as an element of the grapheme sequence. This is similar to the approach taken by johnson2016google in their zero-shot NMT system. However, their source-side artificial tokens identify the target language, whereas ours identify the source language. An alternative approach, used by ostling2017continuous, would be to concatenate a language embedding to the input at each time step. They do not evaluate their approach on grapheme-to-phoneme conversion."
],
[
"In order to train a neural g2p system, one needs a large quantity of pronunciation data. A standard dataset for g2p is the Carnegie Mellon Pronouncing Dictionary BIBREF12 . However, that is a monolingual English resource, so it is unsuitable for our multilingual task. Instead, we use the multilingual pronunciation corpus collected by deri2016grapheme for all experiments. This corpus consists of spelling–pronunciation pairs extracted from Wiktionary. It is already partitioned into training and test sets. Corpus statistics are presented in Table TABREF10 .",
"In addition to the raw IPA transcriptions extracted from Wiktionary, the corpus provides an automatically cleaned version of transcriptions. Cleaning is a necessary step because web-scraped data is often noisy and may be transcribed at an inconsistent level of detail. The data cleaning used here attempts to make the transcriptions consistent with the phonemic inventories used in Phoible BIBREF4 . When a transcription contains a phoneme that is not in its language's inventory in Phoible, that phoneme is replaced by the phoneme with the most similar articulatory features that is in the language's inventory. Sometimes this cleaning algorithm works well: in the German examples in Table TABREF11 , the raw German symbols and are both converted to . This is useful because the in Ansbach and the in Kaninchen are instances of the same phoneme, so their phonemic representations should use the same symbol. However, the cleaning algorithm can also have negative effects on the data quality. For example, the phoneme is not present in the Phoible inventory for German, but it is used in several German transcriptions in the corpus. The cleaning algorithm converts to in all German transcriptions, whereas would be a more reasonable guess. The cleaning algorithm also removes most suprasegmentals, even though these are often an important part of a language's phonology. Developing a more sophisticated procedure for cleaning pronunciation data is a direction for future work, but in this paper we use the corpus's provided cleaned transcriptions in order to ease comparison to previous results."
],
[
"We present experiments with two versions of our sequence-to-sequence model. LangID prepends each training, validation, and test sample with an artificial token identifying the language of the sample. NoLangID omits this token. LangID and NoLangID have identical structure otherwise. To translate the test corpus, we used a beam width of 100. Although this is an unusually wide beam and had negligible performance effects, it was necessary to compute our error metrics."
],
[
"We use the following three evaluation metrics:",
"Phoneme Error Rate (PER) is the Levenshtein distance between the predicted phoneme sequences and the gold standard phoneme sequences, divided by the length of the gold standard phoneme sequences.",
"Word Error Rate (WER) is the percentage of words in which the predicted phoneme sequence does not exactly match the gold standard phoneme sequence.",
"Word Error Rate 100 (WER 100) is the percentage of words in the test set for which the correct guess is not in the first 100 guesses of the system.",
"In system evaluations, WER, WER 100, and PER numbers presented for multiple languages are averaged, weighting each language equally BIBREF13 .",
"It would be interesting to compute error metrics that incorporate phoneme similarity, such as those proposed by hixon2011phonemic. PER weights all phoneme errors the same, even though some errors are more harmful than others: and are usually contrastive, whereas and almost never are. Such statistics would be especially interesting for evaluating a multilingual system, because different languages often map the same grapheme to phonemes that are only subtly different from each other. However, these statistics have not been widely reported for other g2p systems, so we omit them here."
],
[
"Results on LangID and NoLangID are compared to the system presented by deri2016grapheme, which is identified in our results as wFST. Their results can be divided into two parts:",
"High resource results, computed with wFSTs trained on a combination of Wiktionary pronunciation data and g2p rules extracted from Wikipedia IPA Help pages. They report high resource results for 85 languages.",
"Adapted results, where they apply various mapping strategies in order to adapt high resource models to other languages. The final adapted results they reported include most of the 85 languages with high resource results, as well as the various languages they were able to adapt them for, for a total of 229 languages. This test set omits 23 of the high resource languages that are written in unique scripts or for which language distance metrics could not be computed."
],
[
"We train the LangID and NoLangID versions of our model each on three subsets of the Wiktionary data:",
"LangID-High and NoLangID-High: Trained on data from the 85 languages for which BIBREF13 used non-adapted wFST models.",
"LangID-Adapted and NoLangID-Adapted: Trained on data from any of the 229 languages for which they built adapted models. Because many of these languages had no training data at all, the model is actually only trained on data in 157 languages. As is noted above, the Adapted set omits 23 languages which are in the High test set.",
"LangID-All and NoLangID-All: Trained on data in all 311 languages in the Wiktionary training corpus.",
"In order to ease comparison to Deri and Knight's system, we limited our use of the training corpus to 10,000 words per language. We set aside 10 percent of the data in each language for validation, so the maximum number of training words for any language is 9000 for our systems."
],
[
"On the 229 languages for which deri2016grapheme presented their final results, the LangID version of our system outperforms the baseline by a wide margin. The best performance came with the version of our model that was trained on data in all available languages, not just the languages it was tested on. Using a language ID token improves results considerably, but even NoLangID beats the baseline in WER and WER 100. Full results are presented in Table TABREF24 ."
],
[
"Having shown that our model exceeds the performance of the wFST-adaptation approach, we next compare it to the baseline models for just high resource languages. The wFST models here are purely monolingual – they do not use data adaptation because there is sufficient training data for each of them. Full results are presented in Table TABREF26 . We omit models trained on the Adapted languages because they were not trained on high resource languages with unique writing systems, such as Georgian and Greek, and consequently performed very poorly on them.",
"In contrast to the larger-scale Adapted results, in the High Resource experiments none of the sequence-to-sequence approaches equal the performance of the wFST model in WER and PER, although LangID-High does come close. The LangID models do beat wFST in WER 100. A possible explanation is that a monolingual wFST model will never generate phonemes that are not part of the language's inventory. A multilingual model, on the other hand, could potentially generate phonemes from the inventories of any language it has been trained on.",
"Even if LangID-High does not present a more accurate result, it does present a more compact one: LangID-High is 15.4 MB, while the combined wFST high resource models are 197.5 MB."
],
[
"Finally, we report our models' results on unseen languages in Table TABREF28 . The unseen languages are any that are present in the test corpus but absent from the training data. Deri and Knight did not report results specifically on these languages. Although the NoLangID models sometimes do better on WER 100, even here the LangID models have a slight advantage in WER and PER. This is somewhat surprising because the LangID models have not learned embeddings for the language ID tokens of unseen languages. Perhaps negative associations are also being learned, driving the model towards predicting more common pronunciations for unseen languages."
],
[
"Adding a language ID token always improves results in cases where an embedding has been learned for that token. The power of these embeddings is demonstrated by what happens when one feeds the same input word to the model with different language tokens, as is seen in Table TABREF30 . Impressively, this even works when the source sequence is in the wrong script for the language, as is seen in the entry for Arabic."
],
[
"Because these language ID tokens are so useful, it would be good if they could be effectively estimated for unseen languages. ostling2017continuous found that the language vectors their models learned correlated well to genetic relationships, so it would be interesting to see if the embeddings our source encoder learned for the language ID tokens showed anything similar. In a few cases they do (the languages closest to German in the vector space are Luxembourgish, Bavarian, and Yiddish, all close relatives). However, for the most part the structure of these vectors is not interpretable. Therefore, it would be difficult to estimate the embedding for an unseen language, or to “borrow” the language ID token of a similar language. A more promising way forward is to find a model that uses an externally constructed typological representation of the language."
],
[
"In contrast to the language embeddings, the phoneme embeddings appear to show many regularities (see Table TABREF33 ). This is a sign that our multilingual model learns similar embeddings for phonemes that are written with the same grapheme in different languages. These phonemes tend to be phonetically similar to each other.",
"Perhaps the structure of the phoneme embedding space is what leads to our models' very good performance on WER 100. Even when the model's first predicted pronunciation is not correct, it tends to assign more probability mass to guesses that are more similar to the correct one. Applying some sort of filtering or reranking of the system output might therefore lead to better performance."
],
[
"Because the language ID token is so beneficial to performance, it would be very interesting to find ways to extend a similar benefit to unseen languages. One possible way to do so is with tokens that identify something other than the language, such as typological features about the language's phonemic inventory. This could enable better sharing of resources among languages. Such typological knowledge is readily available in databases like Phoible and WALS for a wide variety of languages. It would be interesting to explore if any of these features is a good predictor of a language's orthographic rules.",
"It would also be interesting to apply the artificial token approach to other problems besides multilingual g2p. One closely related application is monolingual English g2p. Some of the ambiguity of English spelling is due to the wide variety of loanwords in the language, many of which have unassimilated spellings. Knowing the origins of these loanwords could provide a useful hint for figuring out their pronunciations. The etymology of a word could be tagged in an analogous way to how language ID is tagged in multilingual g2p."
]
],
"section_name": [
"Introduction",
"Low Resource g2p",
"Multilingual Neural NLP",
"Grapheme-to-Phoneme",
"Encoder–Decoder Models",
"Training Multilingual Models",
"Data",
"Experiments",
"Evaluation",
"Baseline",
"Training",
"Adapted Results",
"High Resource Results",
"Results on Unseen Languages",
"Language ID Tokens",
"Language Embeddings",
"Phoneme Embeddings",
"Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"8a18acdd8e1d46dfe648242c0e79dfcf57c309cb",
"b5cf091cde474687a8647c85e9ebfb6e92ca45bd"
],
"answer": [
{
"evidence": [
"Even if LangID-High does not present a more accurate result, it does present a more compact one: LangID-High is 15.4 MB, while the combined wFST high resource models are 197.5 MB."
],
"extractive_spans": [],
"free_form_answer": "Using file size on disk",
"highlighted_evidence": [
"Even if LangID-High does not present a more accurate result, it does present a more compact one: LangID-High is 15.4 MB, while the combined wFST high resource models are 197.5 MB."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Even if LangID-High does not present a more accurate result, it does present a more compact one: LangID-High is 15.4 MB, while the combined wFST high resource models are 197.5 MB."
],
"extractive_spans": [
"15.4 MB"
],
"free_form_answer": "",
"highlighted_evidence": [
"Even if LangID-High does not present a more accurate result, it does present a more compact one: LangID-High is 15.4 MB, while the combined wFST high resource models are 197.5 MB."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"38a28419b2d6e002b1177ad39ccd178bf5bcb27b",
"b6fe6a39f0e93a4cfb5bc8b184ca094c30636889"
],
"answer": [
{
"evidence": [
"Results on LangID and NoLangID are compared to the system presented by deri2016grapheme, which is identified in our results as wFST. Their results can be divided into two parts:"
],
"extractive_spans": [
"system presented by deri2016grapheme"
],
"free_form_answer": "",
"highlighted_evidence": [
"Results on LangID and NoLangID are compared to the system presented by deri2016grapheme, which is identified in our results as wFST."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Results on LangID and NoLangID are compared to the system presented by deri2016grapheme, which is identified in our results as wFST. Their results can be divided into two parts:"
],
"extractive_spans": [
"wFST"
],
"free_form_answer": "",
"highlighted_evidence": [
"Results on LangID and NoLangID are compared to the system presented by deri2016grapheme, which is identified in our results as wFST. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"2c45e162c3b19cfbf3bfec2c2b1bb85f09b62194",
"6ea9a44e25b9bc1926663e6fd97603d6d63b646d"
],
"answer": [
{
"evidence": [
"We use the following three evaluation metrics:",
"Phoneme Error Rate (PER) is the Levenshtein distance between the predicted phoneme sequences and the gold standard phoneme sequences, divided by the length of the gold standard phoneme sequences.",
"Word Error Rate (WER) is the percentage of words in which the predicted phoneme sequence does not exactly match the gold standard phoneme sequence.",
"Word Error Rate 100 (WER 100) is the percentage of words in the test set for which the correct guess is not in the first 100 guesses of the system."
],
"extractive_spans": [
"Phoneme Error Rate (PER)",
"Word Error Rate (WER)",
"Word Error Rate 100 (WER 100)"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use the following three evaluation metrics:\n\nPhoneme Error Rate (PER) is the Levenshtein distance between the predicted phoneme sequences and the gold standard phoneme sequences, divided by the length of the gold standard phoneme sequences.\n\nWord Error Rate (WER) is the percentage of words in which the predicted phoneme sequence does not exactly match the gold standard phoneme sequence.\n\nWord Error Rate 100 (WER 100) is the percentage of words in the test set for which the correct guess is not in the first 100 guesses of the system."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We use the following three evaluation metrics:",
"Phoneme Error Rate (PER) is the Levenshtein distance between the predicted phoneme sequences and the gold standard phoneme sequences, divided by the length of the gold standard phoneme sequences.",
"Word Error Rate (WER) is the percentage of words in which the predicted phoneme sequence does not exactly match the gold standard phoneme sequence.",
"Word Error Rate 100 (WER 100) is the percentage of words in the test set for which the correct guess is not in the first 100 guesses of the system.",
"In system evaluations, WER, WER 100, and PER numbers presented for multiple languages are averaged, weighting each language equally BIBREF13 ."
],
"extractive_spans": [
"PER",
"WER",
"WER 100"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use the following three evaluation metrics:\n\nPhoneme Error Rate (PER) is the Levenshtein distance between the predicted phoneme sequences and the gold standard phoneme sequences, divided by the length of the gold standard phoneme sequences.\n\nWord Error Rate (WER) is the percentage of words in which the predicted phoneme sequence does not exactly match the gold standard phoneme sequence.\n\nWord Error Rate 100 (WER 100) is the percentage of words in the test set for which the correct guess is not in the first 100 guesses of the system.\n\nIn system evaluations, WER, WER 100, and PER numbers presented for multiple languages are averaged, weighting each language equally BIBREF13 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"461e32052ed8a4d8e8478a5ff1902ae0ce9062d6",
"e801986d8411ddf1367dc08c8af3760090fd44f5"
],
"answer": [
{
"evidence": [
"In order to train a neural g2p system, one needs a large quantity of pronunciation data. A standard dataset for g2p is the Carnegie Mellon Pronouncing Dictionary BIBREF12 . However, that is a monolingual English resource, so it is unsuitable for our multilingual task. Instead, we use the multilingual pronunciation corpus collected by deri2016grapheme for all experiments. This corpus consists of spelling–pronunciation pairs extracted from Wiktionary. It is already partitioned into training and test sets. Corpus statistics are presented in Table TABREF10 .",
"In addition to the raw IPA transcriptions extracted from Wiktionary, the corpus provides an automatically cleaned version of transcriptions. Cleaning is a necessary step because web-scraped data is often noisy and may be transcribed at an inconsistent level of detail. The data cleaning used here attempts to make the transcriptions consistent with the phonemic inventories used in Phoible BIBREF4 . When a transcription contains a phoneme that is not in its language's inventory in Phoible, that phoneme is replaced by the phoneme with the most similar articulatory features that is in the language's inventory. Sometimes this cleaning algorithm works well: in the German examples in Table TABREF11 , the raw German symbols and are both converted to . This is useful because the in Ansbach and the in Kaninchen are instances of the same phoneme, so their phonemic representations should use the same symbol. However, the cleaning algorithm can also have negative effects on the data quality. For example, the phoneme is not present in the Phoible inventory for German, but it is used in several German transcriptions in the corpus. The cleaning algorithm converts to in all German transcriptions, whereas would be a more reasonable guess. The cleaning algorithm also removes most suprasegmentals, even though these are often an important part of a language's phonology. Developing a more sophisticated procedure for cleaning pronunciation data is a direction for future work, but in this paper we use the corpus's provided cleaned transcriptions in order to ease comparison to previous results."
],
"extractive_spans": [
"the Carnegie Mellon Pronouncing Dictionary BIBREF12",
"the multilingual pronunciation corpus collected by deri2016grapheme ",
"ranscriptions extracted from Wiktionary"
],
"free_form_answer": "",
"highlighted_evidence": [
"In order to train a neural g2p system, one needs a large quantity of pronunciation data. A standard dataset for g2p is the Carnegie Mellon Pronouncing Dictionary BIBREF12 . However, that is a monolingual English resource, so it is unsuitable for our multilingual task. Instead, we use the multilingual pronunciation corpus collected by deri2016grapheme for all experiments. This corpus consists of spelling–pronunciation pairs extracted from Wiktionary. It is already partitioned into training and test sets. Corpus statistics are presented in Table TABREF10",
"In addition to the raw IPA transcriptions extracted from Wiktionary, the corpus provides an automatically cleaned version of transcriptions. Cleaning is a necessary step because web-scraped data is often noisy and may be transcribed at an inconsistent level of detail. The data cleaning used here attempts to make the transcriptions consistent with the phonemic inventories used in Phoible BIBREF4 . "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In order to train a neural g2p system, one needs a large quantity of pronunciation data. A standard dataset for g2p is the Carnegie Mellon Pronouncing Dictionary BIBREF12 . However, that is a monolingual English resource, so it is unsuitable for our multilingual task. Instead, we use the multilingual pronunciation corpus collected by deri2016grapheme for all experiments. This corpus consists of spelling–pronunciation pairs extracted from Wiktionary. It is already partitioned into training and test sets. Corpus statistics are presented in Table TABREF10 ."
],
"extractive_spans": [
"multilingual pronunciation corpus collected by deri2016grapheme"
],
"free_form_answer": "",
"highlighted_evidence": [
" Instead, we use the multilingual pronunciation corpus collected by deri2016grapheme for all experiments."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"how is model compactness measured?",
"what was the baseline?",
"what evaluation metrics were used?",
"what datasets did they use?"
],
"question_id": [
"113d791df6fcfc9cecfb7b1bebaf32cc2e4402ab",
"0752d71a0a1f73b3482a888313622ce9e9870d6e",
"55c8f7acbfd4f5cde634aaecd775b3bb32e9ffa3",
"4eaf9787f51cd7cdc45eb85cf223d752328c6ee4"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Table 1: Hyperparameters for multilingual g2p models",
"Table 2: Corpus Statistics",
"Table 3: Example entries from the Wiktionary training corpus",
"Table 4: Adapted Results",
"Table 7: The word ‘juice’ translated by the LangID-All model with various language ID tokens. The incorrect English pronunciation rhymes with the system’s result for ‘ice’",
"Table 5: High Resource Results",
"Table 6: Results on languages not in the training corpus",
"Table 8: Selected phonemes and the most similar phonemes, measured by the cosine similarity of the embeddings learned by the LangID-All model"
],
"file": [
"3-Table1-1.png",
"4-Table2-1.png",
"4-Table3-1.png",
"5-Table4-1.png",
"6-Table7-1.png",
"6-Table5-1.png",
"6-Table6-1.png",
"7-Table8-1.png"
]
} | [
"how is model compactness measured?"
] | [
[
"1708.01464-High Resource Results-2"
]
] | [
"Using file size on disk"
] | 9 |
2002.03407 | Abstractive Summarization for Low Resource Data using Domain Transfer and Data Synthesis | Training abstractive summarization models typically requires large amounts of data, which can be a limitation for many domains. In this paper we explore using domain transfer and data synthesis to improve the performance of recent abstractive summarization methods when applied to small corpora of student reflections. First, we explored whether tuning state of the art model trained on newspaper data could boost performance on student reflection data. Evaluations demonstrated that summaries produced by the tuned model achieved higher ROUGE scores compared to model trained on just student reflection data or just newspaper data. The tuned model also achieved higher scores compared to extractive summarization baselines, and additionally was judged to produce more coherent and readable summaries in human evaluations. Second, we explored whether synthesizing summaries of student data could additionally boost performance. We proposed a template-based model to synthesize new data, which when incorporated into training further increased ROUGE scores. Finally, we showed that combining data synthesis with domain transfer achieved higher ROUGE scores compared to only using one of the two approaches. | {
"paragraphs": [
[
"Recently, with the emergence of neural seq2seq models, abstractive summarization methods have seen great performance strides BIBREF0, BIBREF1, BIBREF2. However, complex neural summarization models with thousands of parameters usually require a large amount of training data. In fact, much of the neural summarization work has been trained and tested in news domains where numerous large datasets exist. For example, the CNN/DailyMail (CNN/DM) BIBREF3, BIBREF4 and New York Times (NYT) datasets are in the magnitude of 300k and 700k documents, respectively. In contrast, in other domains such as student reflections, summarization datasets are only in the magnitude of tens or hundreds of documents (e.g., BIBREF5). We hypothesize that training complex neural abstractive summarization models in such domains will not yield good performing models, and we will indeed later show that this is the case for student reflections.",
"To improve performance in low resource domains, we explore three directions. First, we explore domain transfer for abstractive summarization. While domain transfer is not new, compared to prior summarization studies BIBREF6, BIBREF7, our training (news) and tuning (student reflection) domains are quite dissimilar, and the in-domain data is small. Second, we propose a template-based synthesis method to create synthesized summaries, then explore the effect of enriching training data for abstractive summarization using the proposed model compared to a synthesis baseline. Lastly, we combine both directions. Evaluations of neural abstractive summarization method across four student reflection corpora show the utility of all three methods."
],
[
"Abstractive Summarization. Abstractive summarization aims to generate coherent summaries with high readability, and has seen increasing interest and improved performance due to the emergence of seq2seq models BIBREF8 and attention mechanisms BIBREF9. For example, BIBREF0, BIBREF2, and BIBREF1 in addition to using encoder-decoder model with attention, they used pointer networks to solve the out of vocabulary issue, while BIBREF0 used coverage mechanism to solve the problem of word repetition. In addition, BIBREF2 and BIBREF10 used reinforcement learning in an end-to-end setting.",
"To our knowledge, training such neural abstractive summarization models in low resource domains using domain transfer has not been thoroughly explored on domains different than news. For example, BIBREF4 reported the results of training on CNN/DM data while evaluating on DUC data without any tuning. Note that these two datasets are both in the news domain, and both consist of well written, structured documents. The domain transfer experiments of BIBREF1 similarly used two different news summarization datasets (CNN/DM and NYT). Our work differs in several ways from these two prior domain transfer efforts. First, our experiments involve two entirely different domains: news and student reflections. Unlike news, student reflection documents lack global structure, are repetitive, and contain many sentence fragments and grammatical mistakes. Second, the prior approaches either trained a part of the model using NYT data while retaining the other part of the model trained only on CNN/DM data BIBREF1, or didn't perform any tuning at all BIBREF4. In contrast, we do the training in two consecutive phases, pretraining and fine tuning. Finally, BIBREF1 reported that while training with domain transfer outperformed training only on out-of-domain data, it was not able to beat training only on in-domain data. This is likely because their in and out-of-domain data sizes are comparable, unlike in our case of scarce in-domain data.",
"In a different approach to abstractive summarization, BIBREF11 developed a soft template based neural method consisting of an end-to-end deep model for template retrieval, reranking and summary rewriting. While we also develop a template based model, our work differs in both model structure and purpose.",
"Data Synthesis. Data synthesis for text summarization is underexplored, with most prior work focusing on machine translation, and text normalization. BIBREF12 proposed doing data augmentation through word replacement, using WordNet BIBREF13 and vector space similarity, respectively. We will use a WordNet replacement method as a baseline synthesis method in the experiments described below. In contrast, BIBREF14 synthesized/augmented data through back-translation and word replacement using language models. BIBREF15 is another recent work that was done in parallel and is very close to ours. However, in addition to the difference in both our and their model, we think it might be infeasible to back generate student reflections from a human summary, especially an abstractive one."
],
[
"Student reflections are comments provided by students in response to a set of instructor prompts. The prompts are directed towards gathering students' feedback on course material. Student reflections are collected directly following each of a set of classroom lectures over a semester. In this paper, the set of reflections for each prompt in each lecture is considered a student reflection document. The objective of our work is to provide a comprehensive and meaningful abstractive summary of each student reflection document. Our dataset consists of documents and summaries from four course instantiations: ENGR (Introduction to Materials Science and Engineering), Stat2015 and Stat2016 (Statistics for Industrial Engineers, taught in 2015 and 2016, respectively), and CS (Data Structures in Computer Science). All reflections were collected in response to two pedagogically-motivated prompts BIBREF16: “Point of Interest (POI): Describe what you found most interesting in today's class” and “Muddiest Point (MP): Describe what was confusing or needed more detail.”",
"For each reflection document, at least one human (either a TA or domain expert) created summaries. Table TABREF4 shows example reference summary produced by one annotator for the CS course. Table TABREF5 summarizes the dataset in terms of number of lectures, number of prompts per lecture, average number of reflections per prompt, and number of abstractive reference summaries for each set of reflections."
],
[
"To overcome the size issue of the student reflection dataset, we first explore the effect of incorporating domain transfer into a recent abstractive summarization model: pointer networks with coverage mechanism (PG-net)BIBREF0. To experiment with domain transfer, the model was pretrained using the CNN/DM dataset, then fine tuned using the student reflection dataset (see the Experiments section). A second approach we explore to overcome the lack of reflection data is data synthesis. We first propose a template model for synthesizing new data, then investigate the performance impact of using this data when training the summarization model. The proposed model makes use of the nature of datasets such as ours, where the reference summaries tend to be close in structure: humans try to find the major points that students raise, then present the points in a way that marks their relative importance (recall the CS example in Table TABREF4). Our third explored approach is to combine domain transfer with data synthesis."
],
[
"Our motivation for using templates for data synthesis is that seq2seq synthesis models (as discussed in related work) tend to generate irrelevant and repeated words BIBREF17, while templates can produce more coherent and concise output. Also, extracting templates can be done either manually or automatically typically by training a few parameters or even doing no training, then external information in the form of keywords or snippets can be populated into the templates with the help of more sophisticated models. Accordingly, using templates can be very tempting for domains with limited resources such as ours.",
"Model Structure. The model consists of 4 modules:",
"1. Template extraction: To convert human summaries into templates, we remove keywords in the summary to leave only non-keywords. We use Rapid Automatic Keyword Extraction (RAKE) BIBREF18 to identify keywords.",
"2. Template clustering: Upon converting human summaries into templates, we cluster them into $N$ clusters with the goal of using any template from the same cluster interchangeably. A template is first converted into embeddings using a pretrained BERT model BIBREF19, where template embedding is constructed by average pooling word embeddings. Templates are then clustered using k-medoid.",
"3. Summary rewriting: An encoder-attention-decoder with pointer network is trained to perform the rewriting task. The model is trained to inject keywords into a template and perform rewriting into a coherent paragraph. The produced rewrites are considered as candidate summaries.",
"4. Summary selection: After producing candidate summaries, we need to pick the best ones. We argue that the best candidates are those that are coherent and also convey the same meaning as the original human summary. We thus use a hybrid metric to score candidates, where the metric is a weighted sum of two scores and is calculated using Equations 1, 2, and 3. Eq.1 measures coherency using a language model (LM), Eq.2 measures how close a candidate is to a human summary using ROUGE scores, while Eq.3 picks the highest scored $N$ candidates as the final synthetic set.",
"CS and HS are a candidate and human summary. $P(w)$ is the probability of word $w$ using a language model. $\\alpha , \\beta $ are weighting parameters. In this work we use $\\alpha =\\beta =1$ for all experiments. $R_{i}(CS,HS)$ is ROUGE-i score between CS and HS for i=1, 2, and $l$.",
"Model Training. Before using the synthesis model, some of the constructing modules (rewriting module, scoring LM) need training. To train the rewriting model, we use another dataset consisting of a set of samples, where each sample can be a text snippet (sentence, paragraph, etc.). For each sample, keywords are extracted using RAKE, then removed. The keywords plus the sample with no keywords are then passed to the rewriting model. The training objective of this model is to reconstruct the original sample, which can be seen as trying to inject extracted keywords back into a template. Model Usage. To use the synthesis model to generate new samples, the set of human summaries are fed to the model, passing through the sub-modules in the following order:",
"1. Human summaries first pass through the template extraction module, converting each summary $s_i$ into template $t_i$ and the corresponding keywords $kw_i$.",
"2. Templates are then passed to the clustering module, producing a set of clusters. Each cluster $C$ contains a number of similar templates.",
"3. For each template $t_i$ and corresponding keywords $kw_i$ from step 1, find the cluster $C_i$ that contains the template $t_i$, then pass the set of templates within that clusters $\\lbrace t_j\\rbrace \\forall {j},$ if $t_j \\in C_i$ alongside the keywords $kw_i$ to the summary rewriting module. This will produce a set of candidate summaries.",
"4. The summary selection module scores and selects the highest $N$ candidates as the synthetic summaries."
],
[
"Our experimental designs address the following hypotheses:",
"Hypothesis 1 (H1) : Training complex abstractive models with limited in-domain or large quantities of out-of-domain data won't be enough to outperform extractive baselines.",
"Hypothesis 2 (H2) : Domain transfer helps abstractive models even if in-domain and out-of-domain data are very different and the amount of in-domain data is very small.",
"Hypothesis 3 (H3) : Enriching abstractive training data with synthetic data helps overcome in-domain data scarcity.",
"Hypothesis 4 (H4) : The proposed template-based synthesis model outperforms a simple word replacement model.",
"Hypothesis 5 (H5) : Combining domain transfer with data synthesis outperforms using each approach on its own.",
"Hypothesis 6 (H6) : The synthesis model can be extended to perform reflection summarization directly.",
"Extractive Baselines (for testing H1). While BIBREF0 used Lead-3 as an extractive baseline, in our data sentence order doesn't matter as reflections are independent. We thus use a similar in concept baseline: randomly select N reflections. Since the baseline is random we report the average result of 100 runs. Following BIBREF5, we compare results to MEAD BIBREF20 and to BIBREF5's extractive phrase-based model. Since these models extracted 5 phrases as extractive summary, we use N=5 for our three extractive baselines. Additionally we compare to running only the extractive part of Fast-RL.",
"Domain Transfer (for testing H2, H5).",
"To observe the impact of using out-of-domain (news) data for pretraining to compensate for low resource in-domain (reflection) data, we train 3 variants of PG-net: model training on CNN/DM; model training on reflections; and model training on CNN/DM then tuning using reflections. Table TABREF11 shows example summaries generated by the three variants of PG-net for a CS document. For all experiments where reflections are used for training/tuning, we train using a leave one course out approach (i.e, in each fold, three courses are used for training and the remaining course for testing). If the experiment involves tuning a combined dictionary of CNN/DM and reflections is used to avoid domain mismatch. To tune model parameters, the best number of steps for training, the learning rate, etc., a randomly selected 50% of the training data is used for validation. We choose the parameters that maximize ROUGE scores over this validation set.",
"To implement PG-net we use OpenNMT BIBREF21 with the original set of parameters. The out-of-domain model is trained for 100k steps using the CNN/DM dataset. Following base model training, we tune the model by training it using student reflections. The tuning is done by lowering the LR from 0.15 to 0.1 and training the model for additional 500 steps. The in-domain model is trained only using reflections. We use the same model architecture as above and train the model for 20k steps using adagrad and LR of 0.15.",
"Synthesis Baseline (for testing H3, H4). Following BIBREF12, we developed a data synthesis baseline using word replacement via WordNet. The baseline iterates over all words in a summary. If word $X$ has $N$ synonyms in WordNet, the model creates $N$ new versions of the summary and corresponding reflections by replacing the word $X$ with each of the $N$ synonyms.",
"Template Synthesis Model (for testing H4, H5). To synthesize summaries, we use the same leave one course out approach. For each course, we use the data from the other three courses to train the rewriting module and tune the scoring language model. We can also use the summaries from CNN/DM data as additional samples to further train the rewriting module. We then start synthesizing data using that training data as input. First templates are constructed. The templates are then clustered into 8 clusters. We decided to use 8 to avoid clustering templates from POI with MP, as the templates from both prompts would contain very different supporting words. We also wanted to avoid a high level of dissimilarity within each cluster, and allow some diversity. Following the clustering, the rewriting model produces candidate summaries for each human summary. The rewriting model is another PG-net with the same exact parameters. After producing the candidate summaries, a language model is used to score them. The language model is a single layer LSTM language model trained on 36K sentences from Wikipedia and fine tuned using student reflections. In this work we decided to pick only the highest 3 scored candidate summaries as synthetic data, to avoid adding ill-formed summaries to the training data. Since we are adding $N$ synthetic summaries for each set of reflections, that means we are essentially duplicating the size of our original reflection training data by $N$, which is 3 in our case. Table TABREF11 shows a human summary, the keywords extracted, then the output of injecting keywords in a different template using rewriting.",
"Template-based Summarization (for testing H6). While the proposed template-based model was intended for data synthesis, with minor modification it can be adapted for summarization itself. Because the modifications introduce few parameters, the model is suitable for small datasets. Recall that for data synthesis, the input to the template method is a summary. Since for summarization the input instead is a set of reflections, we perform keyword extraction over the set of reflections. We then add an extra logistic regression classifier that uses the set of reflections as input and predicts a cluster of templates constructed from other courses. Using the keywords and the predicted cluster of templates, we use the same rewriting model to produce candidate summaries. The last step in the pipeline is scoring. In data synthesis, a reference summary is used for scoring; however, in summarization we don't have such a reference. To score the candidate summaries, the model only uses the language model and produces the candidate with the highest score."
],
[
"ROUGE Evaluation Results.",
"Table TABREF13 presents summarization performance results for the 4 extractive baselines, for the original and proposed variants of PG-net, and finally for template-summarization. Following BIBREF0, performance is evaluated using ROUGE (1, 2, and $L$) BIBREF22 on F1. The motivation for using domain transfer and data synthesis is our hypothesis (H1). Table TABREF13 supports this hypothesis. All ROUGE scores for PG-net that outperform all extractive baselines (in italics) involve tuning and/or use of synthesised data, except for one R-1 (row 18).",
"As for our second hypothesis (H2), table TABREF13 shows that it is a valid one. For PG-net, comparing the CNN/DM out-of-domain and Student Reflection in-domain results in rows (5 and 6) and (17 and 18) with their corresponding tuned results in rows 9 and 21, we see that fine tuning improves R-1, R-2, and R-$L$ for all courses (rows 5, 6, 9 and 17, 18, 21). Qualitatively, the examples presented in Table TABREF11 clearly show that tuning yields a more coherent and relevant summary. Over all courses, the tuned version of PG-net consistently outperforms the best baseline result for each metric (rows 9 vs. 1, 2, 3, 4 and 21 vs. 13, 14, 15, 16) except for R-2 in Stat2016.",
"To validate our next set of hypothesises (H3, H4. H5), we use the synthesized data in two settings: either using it for training (rows 7, 8 and 19, 20) or tuning (rows 10, 11 and 22, 23). Table TABREF13 supports H4 by showing that the proposed synthesis model outperforms the WordNet baseline in training (rows 7, 8 and 19, 20) except Stat2016, and tuning (10, 11 and 22, 23) over all courses. It also shows that while adding synthetic data from the baseline is not always helpful, adding synthetic data from the template model helps to improve both the training and the tuning process. In both CS and ENGR courses, tuning with synthetic data enhances all ROUGE scores compared to tuning with only the original data. (rows 9 and 11). As for Stat2015, R-1 and R-$L$ improved, while R-2 decreased. For Stat2016, R-2 and R-$L$ improved, and R-1 decreased (rows 21 and 23). Training with both student reflection data and synthetic data compared to training with only student reflection data yields similar improvements, supporting H3 (rows 6, 8 and 18, 20). While the increase in ROUGE scores is small, our results show that enriching training data with synthetic data can benefit both the training and tuning of other models. In general, the best results are obtained when using data synthesis for both training and tuning (rows 11 and 23), supporting H5.",
"Finally, while the goal of our template model was to synthesize data, using it for summarization is surprisingly competitive, supporting H6. We believe that training the model with little data is doable due to the small number of parameters (logistic regression classifier only). While rows 12 and 24 are never the best results, they are close to the best involving tuning. This encourages us to enhance our template model and explore templates not so tailored to our data.",
"Human Evaluation Results. While automated evaluation metrics like ROUGE measure lexical similarity between machine and human summaries, humans can better measure how coherent and readable a summary is. Our evaluation study investigates whether tuning the PG-net model increases summary coherence, by asking evaluators to select which of three summaries for the same document they like most: the PG-net model trained on CNN/DM; the model trained on student reflections; and finally the model trained on CNN/DM and tuned on student reflections. 20 evaluators were recruited from our institution and asked to each perform 20 annotations. Summaries are presented to evaluators in random order. Evaluators are then asked to select the summary they feel to be most readable and coherent. Unlike ROUGE, which measures the coverage of a generated summary relative to a reference summary, our evaluators don't read the reflections or reference summary. They choose the summary that is most coherent and readable, regardless of the source of the summary. For both courses, the majority of selected summaries were produced by the tuned model (49% for CS and 41% for Stat2015), compared to (31% for CS and 30.9% for Stat2015) for CNN/DM model, and (19.7% for CS and 28.5% for Stat2015) for student reflections model. These results again suggest that domain transfer can remedy the size of in-domain data and improve performance."
],
[
"We explored improving the performance of neural abstractive summarizers when applied to the low resource domain of student reflections using three approaches: domain transfer, data synthesis and the combination of both. For domain transfer, state of the art abstractive summarization model was pretrained using out-of-domain data (CNN/DM), then tuned using in-domain data (student reflections). The process of tuning improved ROUGE scores on the student reflection data, and at the same time produced more readable summaries. To incorporate synthetic data, we proposed a new template based synthesis model to synthesize new summaries. We showed that enriching the training data with this synthesized data can further increase the benefits of using domain transfer / tuning to increase ROUGE scores. We additionally showed that the proposed synthesis model outperformed a word replacement synthesis baseline. Future plans include trying domain adaptation, enhancing the synthesising process by using other models, further exploring template-based methods, and extending the analysis of the synthesis model to cover other types of data like reviews and opinions."
],
[
"The research reported here was supported, in whole or in part, by the institute of Education Sciences, U.S. Department of Education, through Grant R305A180477 to the University of Pittsburgh. The opinons expressed are those of the authors and do not represent the views of the institute or the U.S. Department of Education"
]
],
"section_name": [
"Introduction",
"Related Work",
"Reflection Summarization Dataset",
"Explored Approaches for Limited Resources",
"Proposed Template-Based Synthesis Model",
"Experiments",
"Results",
"Conclusions and Future Work",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"9ba77d845a100340c0415d6c42ac068712971017",
"ea2ee83677de786e764b7e995ca485a36bad64f6"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"67c5fec2bf2c3efdb763b106df3dbca7a0d3073f",
"ccd88cf434b04b8ca2d46c3a24cb47a3ea3d445a"
],
"answer": [
{
"evidence": [
"Human Evaluation Results. While automated evaluation metrics like ROUGE measure lexical similarity between machine and human summaries, humans can better measure how coherent and readable a summary is. Our evaluation study investigates whether tuning the PG-net model increases summary coherence, by asking evaluators to select which of three summaries for the same document they like most: the PG-net model trained on CNN/DM; the model trained on student reflections; and finally the model trained on CNN/DM and tuned on student reflections. 20 evaluators were recruited from our institution and asked to each perform 20 annotations. Summaries are presented to evaluators in random order. Evaluators are then asked to select the summary they feel to be most readable and coherent. Unlike ROUGE, which measures the coverage of a generated summary relative to a reference summary, our evaluators don't read the reflections or reference summary. They choose the summary that is most coherent and readable, regardless of the source of the summary. For both courses, the majority of selected summaries were produced by the tuned model (49% for CS and 41% for Stat2015), compared to (31% for CS and 30.9% for Stat2015) for CNN/DM model, and (19.7% for CS and 28.5% for Stat2015) for student reflections model. These results again suggest that domain transfer can remedy the size of in-domain data and improve performance."
],
"extractive_spans": [
"20 evaluators were recruited from our institution and asked to each perform 20 annotations"
],
"free_form_answer": "",
"highlighted_evidence": [
". While automated evaluation metrics like ROUGE measure lexical similarity between machine and human summaries, humans can better measure how coherent and readable a summary is. Our evaluation study investigates whether tuning the PG-net model increases summary coherence, by asking evaluators to select which of three summaries for the same document they like most: the PG-net model trained on CNN/DM; the model trained on student reflections; and finally the model trained on CNN/DM and tuned on student reflections. 20 evaluators were recruited from our institution and asked to each perform 20 annotations. Summaries are presented to evaluators in random order. Evaluators are then asked to select the summary they feel to be most readable and coherent."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Human Evaluation Results. While automated evaluation metrics like ROUGE measure lexical similarity between machine and human summaries, humans can better measure how coherent and readable a summary is. Our evaluation study investigates whether tuning the PG-net model increases summary coherence, by asking evaluators to select which of three summaries for the same document they like most: the PG-net model trained on CNN/DM; the model trained on student reflections; and finally the model trained on CNN/DM and tuned on student reflections. 20 evaluators were recruited from our institution and asked to each perform 20 annotations. Summaries are presented to evaluators in random order. Evaluators are then asked to select the summary they feel to be most readable and coherent. Unlike ROUGE, which measures the coverage of a generated summary relative to a reference summary, our evaluators don't read the reflections or reference summary. They choose the summary that is most coherent and readable, regardless of the source of the summary. For both courses, the majority of selected summaries were produced by the tuned model (49% for CS and 41% for Stat2015), compared to (31% for CS and 30.9% for Stat2015) for CNN/DM model, and (19.7% for CS and 28.5% for Stat2015) for student reflections model. These results again suggest that domain transfer can remedy the size of in-domain data and improve performance."
],
"extractive_spans": [],
"free_form_answer": "20 annotatos from author's institution",
"highlighted_evidence": [
"20 evaluators were recruited from our institution and asked to each perform 20 annotations."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"8076943aa9b52d605e8979ccf1f44c208f8f1856",
"9d148d876c7d7dcf2e3393f6193c3abdb59b4dcd"
],
"answer": [
{
"evidence": [
"Hypothesis 4 (H4) : The proposed template-based synthesis model outperforms a simple word replacement model.",
"To validate our next set of hypothesises (H3, H4. H5), we use the synthesized data in two settings: either using it for training (rows 7, 8 and 19, 20) or tuning (rows 10, 11 and 22, 23). Table TABREF13 supports H4 by showing that the proposed synthesis model outperforms the WordNet baseline in training (rows 7, 8 and 19, 20) except Stat2016, and tuning (10, 11 and 22, 23) over all courses. It also shows that while adding synthetic data from the baseline is not always helpful, adding synthetic data from the template model helps to improve both the training and the tuning process. In both CS and ENGR courses, tuning with synthetic data enhances all ROUGE scores compared to tuning with only the original data. (rows 9 and 11). As for Stat2015, R-1 and R-$L$ improved, while R-2 decreased. For Stat2016, R-2 and R-$L$ improved, and R-1 decreased (rows 21 and 23). Training with both student reflection data and synthetic data compared to training with only student reflection data yields similar improvements, supporting H3 (rows 6, 8 and 18, 20). While the increase in ROUGE scores is small, our results show that enriching training data with synthetic data can benefit both the training and tuning of other models. In general, the best results are obtained when using data synthesis for both training and tuning (rows 11 and 23), supporting H5."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Hypothesis 4 (H4) : The proposed template-based synthesis model outperforms a simple word replacement model.",
"Table TABREF13 supports H4 by showing that the proposed synthesis model outperforms the WordNet baseline in training (rows 7, 8 and 19, 20) except Stat2016, and tuning (10, 11 and 22, 23) over all courses. It also shows that while adding synthetic data from the baseline is not always helpful, adding synthetic data from the template model helps to improve both the training and the tuning process. In both CS and ENGR courses, tuning with synthetic data enhances all ROUGE scores compared to tuning with only the original data. (rows 9 and 11). As for Stat2015, R-1 and R-$L$ improved, while R-2 decreased. For Stat2016, R-2 and R-$L$ improved, and R-1 decreased (rows 21 and 23). Training with both student reflection data and synthetic data compared to training with only student reflection data yields similar improvements, supporting H3 (rows 6, 8 and 18, 20)."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"Finally, while the goal of our template model was to synthesize data, using it for summarization is surprisingly competitive, supporting H6. We believe that training the model with little data is doable due to the small number of parameters (logistic regression classifier only). While rows 12 and 24 are never the best results, they are close to the best involving tuning. This encourages us to enhance our template model and explore templates not so tailored to our data.",
"Human Evaluation Results. While automated evaluation metrics like ROUGE measure lexical similarity between machine and human summaries, humans can better measure how coherent and readable a summary is. Our evaluation study investigates whether tuning the PG-net model increases summary coherence, by asking evaluators to select which of three summaries for the same document they like most: the PG-net model trained on CNN/DM; the model trained on student reflections; and finally the model trained on CNN/DM and tuned on student reflections. 20 evaluators were recruited from our institution and asked to each perform 20 annotations. Summaries are presented to evaluators in random order. Evaluators are then asked to select the summary they feel to be most readable and coherent. Unlike ROUGE, which measures the coverage of a generated summary relative to a reference summary, our evaluators don't read the reflections or reference summary. They choose the summary that is most coherent and readable, regardless of the source of the summary. For both courses, the majority of selected summaries were produced by the tuned model (49% for CS and 41% for Stat2015), compared to (31% for CS and 30.9% for Stat2015) for CNN/DM model, and (19.7% for CS and 28.5% for Stat2015) for student reflections model. These results again suggest that domain transfer can remedy the size of in-domain data and improve performance."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Finally, while the goal of our template model was to synthesize data, using it for summarization is surprisingly competitive, supporting H6. ",
"This encourages us to enhance our template model and explore templates not so tailored to our data.\n\nHuman Evaluation Results. While automated evaluation metrics like ROUGE measure lexical similarity between machine and human summaries, humans can better measure how coherent and readable a summary is. Our evaluation study investigates whether tuning the PG-net model inc",
"This encourages us to enhance our template model and explore templates not so tailored to our data."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"ca53056200efc8b1f0a61f221ffc90ba218eae4e",
"e9e185ce1e659c4b70dafe165e93a78f20b9d3aa"
],
"answer": [
{
"evidence": [
"To improve performance in low resource domains, we explore three directions. First, we explore domain transfer for abstractive summarization. While domain transfer is not new, compared to prior summarization studies BIBREF6, BIBREF7, our training (news) and tuning (student reflection) domains are quite dissimilar, and the in-domain data is small. Second, we propose a template-based synthesis method to create synthesized summaries, then explore the effect of enriching training data for abstractive summarization using the proposed model compared to a synthesis baseline. Lastly, we combine both directions. Evaluations of neural abstractive summarization method across four student reflection corpora show the utility of all three methods."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
" While domain transfer is not new, compared to prior summarization studies BIBREF6, BIBREF7, our training (news) and tuning (student reflection) domains are quite dissimilar, and the in-domain data is small."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"To our knowledge, training such neural abstractive summarization models in low resource domains using domain transfer has not been thoroughly explored on domains different than news. For example, BIBREF4 reported the results of training on CNN/DM data while evaluating on DUC data without any tuning. Note that these two datasets are both in the news domain, and both consist of well written, structured documents. The domain transfer experiments of BIBREF1 similarly used two different news summarization datasets (CNN/DM and NYT). Our work differs in several ways from these two prior domain transfer efforts. First, our experiments involve two entirely different domains: news and student reflections. Unlike news, student reflection documents lack global structure, are repetitive, and contain many sentence fragments and grammatical mistakes. Second, the prior approaches either trained a part of the model using NYT data while retaining the other part of the model trained only on CNN/DM data BIBREF1, or didn't perform any tuning at all BIBREF4. In contrast, we do the training in two consecutive phases, pretraining and fine tuning. Finally, BIBREF1 reported that while training with domain transfer outperformed training only on out-of-domain data, it was not able to beat training only on in-domain data. This is likely because their in and out-of-domain data sizes are comparable, unlike in our case of scarce in-domain data."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
" First, our experiments involve two entirely different domains: news and student reflections. Unlike news, student reflection documents lack global structure, are repetitive, and contain many sentence fragments and grammatical mistakes. Second, the prior approaches either trained a part of the model using NYT data while retaining the other part of the model trained only on CNN/DM data BIBREF1, or didn't perform any tuning at all BIBREF4. "
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"2ccb29603bb2bff2292bdd8f8cbbcabf1a826af2",
"3f6947c77d8a89f7737f4e64311bedce0574b918"
],
"answer": [
{
"evidence": [
"To overcome the size issue of the student reflection dataset, we first explore the effect of incorporating domain transfer into a recent abstractive summarization model: pointer networks with coverage mechanism (PG-net)BIBREF0. To experiment with domain transfer, the model was pretrained using the CNN/DM dataset, then fine tuned using the student reflection dataset (see the Experiments section). A second approach we explore to overcome the lack of reflection data is data synthesis. We first propose a template model for synthesizing new data, then investigate the performance impact of using this data when training the summarization model. The proposed model makes use of the nature of datasets such as ours, where the reference summaries tend to be close in structure: humans try to find the major points that students raise, then present the points in a way that marks their relative importance (recall the CS example in Table TABREF4). Our third explored approach is to combine domain transfer with data synthesis."
],
"extractive_spans": [
"pointer networks with coverage mechanism (PG-net)"
],
"free_form_answer": "",
"highlighted_evidence": [
"To overcome the size issue of the student reflection dataset, we first explore the effect of incorporating domain transfer into a recent abstractive summarization model: pointer networks with coverage mechanism (PG-net)BIBREF0. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To overcome the size issue of the student reflection dataset, we first explore the effect of incorporating domain transfer into a recent abstractive summarization model: pointer networks with coverage mechanism (PG-net)BIBREF0. To experiment with domain transfer, the model was pretrained using the CNN/DM dataset, then fine tuned using the student reflection dataset (see the Experiments section). A second approach we explore to overcome the lack of reflection data is data synthesis. We first propose a template model for synthesizing new data, then investigate the performance impact of using this data when training the summarization model. The proposed model makes use of the nature of datasets such as ours, where the reference summaries tend to be close in structure: humans try to find the major points that students raise, then present the points in a way that marks their relative importance (recall the CS example in Table TABREF4). Our third explored approach is to combine domain transfer with data synthesis."
],
"extractive_spans": [
" pointer networks with coverage mechanism (PG-net)BIBREF0"
],
"free_form_answer": "",
"highlighted_evidence": [
"To overcome the size issue of the student reflection dataset, we first explore the effect of incorporating domain transfer into a recent abstractive summarization model: pointer networks with coverage mechanism (PG-net)BIBREF0. To experiment with domain transfer, the model was pretrained using the CNN/DM dataset, then fine tuned using the student reflection dataset (see the Experiments section)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"What is the interannotator agreement for the human evaluation?",
"Who were the human evaluators used?",
"Is the template-based model realistic? ",
"Is the student reflection data very different from the newspaper data? ",
"What is the recent abstractive summarization method in this paper?"
],
"question_id": [
"fb2b536dc8e442dffab408db992b971e86548158",
"31735ec3d83c40b79d11df5c34154849aeb3fb47",
"10d450960907091f13e0be55f40bcb96f44dd074",
"b5608076d91450b0d295ad14c3e3a90d7e168d0e",
"c21b87c97d1afac85ece2450ee76d01c946de668"
],
"question_writer": [
"2a18a3656984d04249f100633e4c1003417a2255",
"2a18a3656984d04249f100633e4c1003417a2255",
"2a18a3656984d04249f100633e4c1003417a2255",
"2a18a3656984d04249f100633e4c1003417a2255",
"2a18a3656984d04249f100633e4c1003417a2255"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: Sample data from the CS course.",
"Table 2: Dataset summary (n=370 documents).",
"Table 3: Summaries generated by the three variants of PGnet for the same CS reflection document, and synthetic sample generated by the proposed template model.",
"Table 4: ROUGE results. Italics indicates outperforms baselines. Boldface indicates best result over all models. Underlining"
],
"file": [
"2-Table1-1.png",
"2-Table2-1.png",
"4-Table3-1.png",
"6-Table4-1.png"
]
} | [
"Who were the human evaluators used?"
] | [
[
"2002.03407-Results-5"
]
] | [
"20 annotatos from author's institution"
] | 10 |
1605.06083 | Stereotyping and Bias in the Flickr30K Dataset | An untested assumption behind the crowdsourced descriptions of the images in the Flickr30K dataset (Young et al., 2014) is that they "focus only on the information that can be obtained from the image alone" (Hodosh et al., 2013, p. 859). This paper presents some evidence against this assumption, and provides a list of biases and unwarranted inferences that can be found in the Flickr30K dataset. Finally, it considers methods to find examples of these, and discusses how we should deal with stereotype-driven descriptions in future applications. | {
"paragraphs": [
[
"The Flickr30K dataset BIBREF0 is a collection of over 30,000 images with 5 crowdsourced descriptions each. It is commonly used to train and evaluate neural network models that generate image descriptions (e.g. BIBREF2 ). An untested assumption behind the dataset is that the descriptions are based on the images, and nothing else. Here are the authors (about the Flickr8K dataset, a subset of Flickr30K):",
"“By asking people to describe the people, objects, scenes and activities that are shown in a picture without giving them any further information about the context in which the picture was taken, we were able to obtain conceptual descriptions that focus only on the information that can be obtained from the image alone.” BIBREF1 ",
"What this assumption overlooks is the amount of interpretation or recontextualization carried out by the annotators. Let us take a concrete example. Figure FIGREF1 shows an image from the Flickr30K dataset.",
"This image comes with the five descriptions below. All but the first one contain information that cannot come from the image alone. Relevant parts are highlighted in bold:",
"We need to understand that the descriptions in the Flickr30K dataset are subjective descriptions of events. This can be a good thing: the descriptions tell us what are the salient parts of each image to the average human annotator. So the two humans in Figure FIGREF1 are relevant, but the two soap dispensers are not. But subjectivity can also result in stereotypical descriptions, in this case suggesting that the male is more likely to be the manager, and the female is more likely to be the subordinate. rashtchian2010collecting do note that some descriptions are speculative in nature, which they say hurts the accuracy and the consistency of the descriptions. But the problem is not with the lack of consistency here. Quite the contrary: the problem is that stereotypes may be pervasive enough for the data to be consistently biased. And so language models trained on this data may propagate harmful stereotypes, such as the idea that women are less suited for leadership positions.",
"This paper aims to give an overview of linguistic bias and unwarranted inferences resulting from stereotypes and prejudices. I will build on earlier work on linguistic bias in general BIBREF3 , providing examples from the Flickr30K data, and present a taxonomy of unwarranted inferences. Finally, I will discuss several methods to analyze the data in order to detect biases."
],
[
"Stereotypes are ideas about how other (groups of) people commonly behave and what they are likely to do. These ideas guide the way we talk about the world. I distinguish two kinds of verbal behavior that result from stereotypes: (i) linguistic bias, and (ii) unwarranted inferences. The former is discussed in more detail by beukeboom2014mechanisms, who defines linguistic bias as “a systematic asymmetry in word choice as a function of the social category to which the target belongs.” So this bias becomes visible through the distribution of terms used to describe entities in a particular category. Unwarranted inferences are the result of speculation about the image; here, the annotator goes beyond what can be glanced from the image and makes use of their knowledge and expectations about the world to provide an overly specific description. Such descriptions are directly identifiable as such, and in fact we have already seen four of them (descriptions 2–5) discussed earlier."
],
[
"Generally speaking, people tend to use more concrete or specific language when they have to describe a person that does not meet their expectations. beukeboom2014mechanisms lists several linguistic `tools' that people use to mark individuals who deviate from the norm. I will mention two of them.",
"[leftmargin=0cm]",
"One well-studied example BIBREF4 , BIBREF5 is sexist language, where the sex of a person tends to be mentioned more frequently if their role or occupation is inconsistent with `traditional' gender roles (e.g. female surgeon, male nurse). Beukeboom also notes that adjectives are used to create “more narrow labels [or subtypes] for individuals who do not fit with general social category expectations” (p. 3). E.g. tough woman makes an exception to the `rule' that women aren't considered to be tough.",
"can be used when prior beliefs about a particular social category are violated, e.g. The garbage man was not stupid. See also BIBREF6 .",
"These examples are similar in that the speaker has to put in additional effort to mark the subject for being unusual. But they differ in what we can conclude about the speaker, especially in the context of the Flickr30K data. Negations are much more overtly displaying the annotator's prior beliefs. When one annotator writes that A little boy is eating pie without utensils (image 2659046789), this immediately reveals the annotator's normative beliefs about the world: pie should be eaten with utensils. But when another annotator talks about a girls basketball game (image 8245366095), this cannot be taken as an indication that the annotator is biased about the gender of basketball players; they might just be helpful by providing a detailed description. In section 3 I will discuss how to establish whether or not there is any bias in the data regarding the use of adjectives."
],
[
"Unwarranted inferences are statements about the subject(s) of an image that go beyond what the visual data alone can tell us. They are based on additional assumptions about the world. After inspecting a subset of the Flickr30K data, I have grouped these inferences into six categories (image examples between parentheses):",
"[leftmargin=0cm]",
"We've seen an example of this in the introduction, where the `manager' was said to be talking about job performance and scolding [a worker] in a stern lecture (8063007).",
"Many dark-skinned individuals are called African-American regardless of whether the picture has been taken in the USA or not (4280272). And people who look Asian are called Chinese (1434151732) or Japanese (4834664666).",
"In image 4183120 (Figure FIGREF16 ), people sitting at a gym are said to be watching a game, even though there could be any sort of event going on. But since the location is so strongly associated with sports, crowdworkers readily make the assumption.",
"Quite a few annotations focus on explaining the why of the situation. For example, in image 3963038375 a man is fastening his climbing harness in order to have some fun. And in an extreme case, one annotator writes about a picture of a dancing woman that the school is having a special event in order to show the american culture on how other cultures are dealt with in parties (3636329461). This is reminiscent of the Stereotypic Explanatory Bias BIBREF7 , which refers to “the tendency to provide relatively more explanations in descriptions of stereotype inconsistent, compared to consistent behavior” BIBREF6 . So in theory, odd or surprising situations should receive more explanations, since a description alone may not make enough sense in those cases, but it is beyond the scope of this paper to test whether or not the Flickr30K data suffers from the SEB.",
"Older people with children around them are commonly seen as parents (5287405), small children as siblings (205842), men and women as lovers (4429660), groups of young people as friends (36979).",
"Annotators will often guess the status or occupation of people in an image. Sometimes these guesses are relatively general (e.g. college-aged people being called students in image 36979), but other times these are very specific (e.g. a man in a workshop being called a graphics designer, 5867606)."
],
[
"In order to get an idea of the kinds of stereotype-driven descriptions that are in the Flickr30K dataset, I made a browser-based annotation tool that shows both the images and their associated descriptions. You can simply leaf through the images by clicking `Next' or `Random' until you find an interesting pattern."
],
[
"One interesting pattern is that the ethnicity/race of babies doesn't seem to be mentioned unless the baby is black or asian. In other words: white seems to be the default, and others seem to be marked. How can we tell whether or not the data is actually biased?",
"We don't know whether or not an entity belongs to a particular social class (in this case: ethnic group) until it is marked as such. But we can approximate the proportion by looking at all the images where the annotators have used a marker (in this case: adjectives like black, white, asian), and for those images count how many descriptions (out of five) contain a marker. This gives us an upper bound that tells us how often ethnicity is indicated by the annotators. Note that this upper bound lies somewhere between 20% (one description) and 100% (5 descriptions). Figure TABREF22 presents count data for the ethnic marking of babies. It includes two false positives (talking about a white baby stroller rather than a white baby). In the Asian group there is an additional complication: sometimes the mother gets marked rather than the baby. E.g. An Asian woman holds a baby girl. I have counted these occurrences as well.",
"The numbers in Table TABREF22 are striking: there seems to be a real, systematic difference in ethnicity marking between the groups. We can take one step further and look at all the 697 pictures with the word `baby' in it. If there turn out to be disproportionately many white babies, this strengthens the conclusion that the dataset is biased.",
"I have manually categorized each of the baby images. There are 504 white, 66 asian, and 36 black babies. 73 images do not contain a baby, and 18 images do not fall into any of the other categories. While this does bring down the average number of times each category was marked, it also increases the contrast between white babies (who get marked in less than 1% of the images) and asian/black babies (who get marked much more often). A next step would be to see whether these observations also hold for other age groups, i.e. children and adults. INLINEFORM0 "
],
[
"It may be difficult to spot patterns by just looking at a collection of images. Another method is to tag all descriptions with part-of-speech information, so that it becomes possible to see e.g. which adjectives are most commonly used for particular nouns. One method readers may find particularly useful is to leverage the structure of Flickr30K Entities BIBREF8 . This dataset enriches Flickr30K by adding coreference annotations, i.e. which phrase in each description refers to the same entity in the corresponding image. I have used this data to create a coreference graph by linking all phrases that refer to the same entity. Following this, I applied Louvain clustering BIBREF9 to the coreference graph, resulting in clusters of expressions that refer to similar entities. Looking at those clusters helps to get a sense of the enormous variation in referring expressions. To get an idea of the richness of this data, here is a small sample of the phrases used to describe beards (cluster 268): a scruffy beard; a thick beard; large white beard; a bubble beard; red facial hair; a braided beard; a flaming red beard. In this case, `red facial hair' really stands out as a description; why not choose the simpler `beard' instead?"
],
[
"In the previous section, I have outlined several methods to manually detect stereotypes, biases, and odd phrases. Because there are many ways in which a phrase can be biased, it is difficult to automatically detect bias from the data. So how should we deal with stereotype-driven descriptions?"
],
[
"This paper provided a taxonomy of stereotype-driven descriptions in the Flickr30K dataset. I have divided these descriptions into two classes: linguistic bias and unwarranted inferences. The former corresponds to the annotators' choice of words when confronted with an image that may or may not match their stereotypical expectancies. The latter corresponds to the tendency of annotators to go beyond what the physical data can tell us, and expand their descriptions based on their past experiences and knowledge of the world. Acknowledging these phenomena is important, because on the one hand it helps us think about what is learnable from the data, and on the other hand it serves as a warning: if we train and evaluate language models on this data, we are effectively teaching them to be biased.",
"I have also looked at methods to detect stereotype-driven descriptions, but due to the richness of language it is difficult to find an automated measure. Depending on whether your goal is production or interpretation, it may either be useful to suppress or to emphasize biases in human language. Finally, I have discussed stereotyping behavior as the addition of a contextual layer on top of a more basic description. This raises the question what kind of descriptions we would like our models to produce."
],
[
"Thanks to Piek Vossen and Antske Fokkens for discussion, and to Desmond Elliott and an anonymous reviewer for comments on an earlier version of this paper. This research was supported by the Netherlands Organization for Scientific Research (NWO) via the Spinoza-prize awarded to Piek Vossen (SPI 30-673, 2014-2019)."
]
],
"section_name": [
"Introduction",
"Stereotype-driven descriptions",
"Linguistic bias",
"Unwarranted inferences",
"Detecting stereotype-driven descriptions",
"Ethnicity/race",
"Other methods",
"Discussion",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"2d223cd8194b667700c910fd257fe773359d8759",
"5d5d019eeb671f00ab3cca1fc93337924533f2b7"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"325e88002a3f399d48de49d6d72ecbbd7a9d4590",
"3a214409fb215ee04e0f19d80a4d20954f874469"
],
"answer": [
{
"evidence": [
"The Flickr30K dataset BIBREF0 is a collection of over 30,000 images with 5 crowdsourced descriptions each. It is commonly used to train and evaluate neural network models that generate image descriptions (e.g. BIBREF2 ). An untested assumption behind the dataset is that the descriptions are based on the images, and nothing else. Here are the authors (about the Flickr8K dataset, a subset of Flickr30K):",
"This paper aims to give an overview of linguistic bias and unwarranted inferences resulting from stereotypes and prejudices. I will build on earlier work on linguistic bias in general BIBREF3 , providing examples from the Flickr30K data, and present a taxonomy of unwarranted inferences. Finally, I will discuss several methods to analyze the data in order to detect biases."
],
"extractive_spans": [
"30,000"
],
"free_form_answer": "",
"highlighted_evidence": [
"The Flickr30K dataset BIBREF0 is a collection of over 30,000 images with 5 crowdsourced descriptions each. It is commonly used to train and evaluate neural network models that generate image descriptions (e.g. BIBREF2 ).",
"This paper aims to give an overview of linguistic bias and unwarranted inferences resulting from stereotypes and prejudices. I will build on earlier work on linguistic bias in general BIBREF3 , providing examples from the Flickr30K data, and present a taxonomy of unwarranted inferences. Finally, I will discuss several methods to analyze the data in order to detect biases."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The Flickr30K dataset BIBREF0 is a collection of over 30,000 images with 5 crowdsourced descriptions each. It is commonly used to train and evaluate neural network models that generate image descriptions (e.g. BIBREF2 ). An untested assumption behind the dataset is that the descriptions are based on the images, and nothing else. Here are the authors (about the Flickr8K dataset, a subset of Flickr30K):"
],
"extractive_spans": [
"collection of over 30,000 images with 5 crowdsourced descriptions each"
],
"free_form_answer": "",
"highlighted_evidence": [
"The Flickr30K dataset BIBREF0 is a collection of over 30,000 images with 5 crowdsourced descriptions each."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"d0545866b03f2634ae1f6362241178bb21375f96",
"edf7c7d7f774c87bf1fa33088ad560eb88322ce8"
],
"answer": [
{
"evidence": [
"It may be difficult to spot patterns by just looking at a collection of images. Another method is to tag all descriptions with part-of-speech information, so that it becomes possible to see e.g. which adjectives are most commonly used for particular nouns. One method readers may find particularly useful is to leverage the structure of Flickr30K Entities BIBREF8 . This dataset enriches Flickr30K by adding coreference annotations, i.e. which phrase in each description refers to the same entity in the corresponding image. I have used this data to create a coreference graph by linking all phrases that refer to the same entity. Following this, I applied Louvain clustering BIBREF9 to the coreference graph, resulting in clusters of expressions that refer to similar entities. Looking at those clusters helps to get a sense of the enormous variation in referring expressions. To get an idea of the richness of this data, here is a small sample of the phrases used to describe beards (cluster 268): a scruffy beard; a thick beard; large white beard; a bubble beard; red facial hair; a braided beard; a flaming red beard. In this case, `red facial hair' really stands out as a description; why not choose the simpler `beard' instead?"
],
"extractive_spans": [
"spot patterns by just looking at a collection of images",
"tag all descriptions with part-of-speech information",
"I applied Louvain clustering"
],
"free_form_answer": "",
"highlighted_evidence": [
"It may be difficult to spot patterns by just looking at a collection of images. Another method is to tag all descriptions with part-of-speech information, so that it becomes possible to see e.g. which adjectives are most commonly used for particular nouns. One method readers may find particularly useful is to leverage the structure of Flickr30K Entities BIBREF8 .",
"Following this, I applied Louvain clustering BIBREF9 to the coreference graph, resulting in clusters of expressions that refer to similar entities."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We don't know whether or not an entity belongs to a particular social class (in this case: ethnic group) until it is marked as such. But we can approximate the proportion by looking at all the images where the annotators have used a marker (in this case: adjectives like black, white, asian), and for those images count how many descriptions (out of five) contain a marker. This gives us an upper bound that tells us how often ethnicity is indicated by the annotators. Note that this upper bound lies somewhere between 20% (one description) and 100% (5 descriptions). Figure TABREF22 presents count data for the ethnic marking of babies. It includes two false positives (talking about a white baby stroller rather than a white baby). In the Asian group there is an additional complication: sometimes the mother gets marked rather than the baby. E.g. An Asian woman holds a baby girl. I have counted these occurrences as well.",
"One interesting pattern is that the ethnicity/race of babies doesn't seem to be mentioned unless the baby is black or asian. In other words: white seems to be the default, and others seem to be marked. How can we tell whether or not the data is actually biased?",
"It may be difficult to spot patterns by just looking at a collection of images. Another method is to tag all descriptions with part-of-speech information, so that it becomes possible to see e.g. which adjectives are most commonly used for particular nouns. One method readers may find particularly useful is to leverage the structure of Flickr30K Entities BIBREF8 . This dataset enriches Flickr30K by adding coreference annotations, i.e. which phrase in each description refers to the same entity in the corresponding image. I have used this data to create a coreference graph by linking all phrases that refer to the same entity. Following this, I applied Louvain clustering BIBREF9 to the coreference graph, resulting in clusters of expressions that refer to similar entities. Looking at those clusters helps to get a sense of the enormous variation in referring expressions. To get an idea of the richness of this data, here is a small sample of the phrases used to describe beards (cluster 268): a scruffy beard; a thick beard; large white beard; a bubble beard; red facial hair; a braided beard; a flaming red beard. In this case, `red facial hair' really stands out as a description; why not choose the simpler `beard' instead?"
],
"extractive_spans": [],
"free_form_answer": "Looking for adjectives marking the noun \"baby\" and also looking for most-common adjectives related to certain nouns using POS-tagging",
"highlighted_evidence": [
"We don't know whether or not an entity belongs to a particular social class (in this case: ethnic group) until it is marked as such. But we can approximate the proportion by looking at all the images where the annotators have used a marker (in this case: adjectives like black, white, asian), and for those images count how many descriptions (out of five) contain a marker. This gives us an upper bound that tells us how often ethnicity is indicated by the annotators.",
"One interesting pattern is that the ethnicity/race of babies doesn't seem to be mentioned unless the baby is black or asian. In other words: white seems to be the default, and others seem to be marked. How can we tell whether or not the data is actually biased?",
"Another method is to tag all descriptions with part-of-speech information, so that it becomes possible to see e.g. which adjectives are most commonly used for particular nouns. One method readers may find particularly useful is to leverage the structure of Flickr30K Entities BIBREF8 . This dataset enriches Flickr30K by adding coreference annotations, i.e. which phrase in each description refers to the same entity in the corresponding image. I have used this data to create a coreference graph by linking all phrases that refer to the same entity. Following this, I applied Louvain clustering BIBREF9 to the coreference graph, resulting in clusters of expressions that refer to similar entities. Looking at those clusters helps to get a sense of the enormous variation in referring expressions."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"7a9b3dfad10c27de46ee718227d9322f90697aff",
"d8a2b721f36d8ed2333a5430a355f363fc208aae"
],
"answer": [
{
"evidence": [
"Ethnicity/race",
"One interesting pattern is that the ethnicity/race of babies doesn't seem to be mentioned unless the baby is black or asian. In other words: white seems to be the default, and others seem to be marked. How can we tell whether or not the data is actually biased?",
"The numbers in Table TABREF22 are striking: there seems to be a real, systematic difference in ethnicity marking between the groups. We can take one step further and look at all the 697 pictures with the word `baby' in it. If there turn out to be disproportionately many white babies, this strengthens the conclusion that the dataset is biased."
],
"extractive_spans": [],
"free_form_answer": "Ethnic bias",
"highlighted_evidence": [
"Ethnicity/race\nOne interesting pattern is that the ethnicity/race of babies doesn't seem to be mentioned unless the baby is black or asian. In other words: white seems to be the default, and others seem to be marked. How can we tell whether or not the data is actually biased?\n\n",
"The numbers in Table TABREF22 are striking: there seems to be a real, systematic difference in ethnicity marking between the groups. We can take one step further and look at all the 697 pictures with the word `baby' in it. If there turn out to be disproportionately many white babies, this strengthens the conclusion that the dataset is biased."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"One well-studied example BIBREF4 , BIBREF5 is sexist language, where the sex of a person tends to be mentioned more frequently if their role or occupation is inconsistent with `traditional' gender roles (e.g. female surgeon, male nurse). Beukeboom also notes that adjectives are used to create “more narrow labels [or subtypes] for individuals who do not fit with general social category expectations” (p. 3). E.g. tough woman makes an exception to the `rule' that women aren't considered to be tough."
],
"extractive_spans": [
"adjectives are used to create “more narrow labels [or subtypes] for individuals who do not fit with general social category expectations”"
],
"free_form_answer": "",
"highlighted_evidence": [
"One well-studied example BIBREF4 , BIBREF5 is sexist language, where the sex of a person tends to be mentioned more frequently if their role or occupation is inconsistent with `traditional' gender roles (e.g. female surgeon, male nurse).",
"Beukeboom also notes that adjectives are used to create “more narrow labels [or subtypes] for individuals who do not fit with general social category expectations” (p. 3). E.g. tough woman makes an exception to the `rule' that women aren't considered to be tough."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"What evaluations methods do they take?",
"What is the size of the dataset?",
"Which methods are considered to find examples of biases and unwarranted inferences??",
"What biases are found in the dataset?"
],
"question_id": [
"71e4ba4e87e6596aeca187127c0d088df6570c57",
"7561a968470a8936d10e1ba722d2f38b5a9a4d38",
"6d4400f45bd97b812e946b8a682b018826e841f1",
"26c2e1eb12143d985e4fb50543cf0d1eb4395e67"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: Image 8063007 from the Flickr30K dataset.",
"Figure 2: Image 4183120 from the Flickr30K dataset.",
"Table 1: Number of times ethnicity/race was mentioned per category, per image. The average is expressed as a percentage of the number of descriptions. Counts in the last column correspond to the number of descriptions containing an ethnic/racial marker. Images were found by looking for descriptions matching"
],
"file": [
"1-Figure1-1.png",
"2-Figure2-1.png",
"3-Table1-1.png"
]
} | [
"Which methods are considered to find examples of biases and unwarranted inferences??",
"What biases are found in the dataset?"
] | [
[
"1605.06083-Other methods-0",
"1605.06083-Ethnicity/race-1",
"1605.06083-Ethnicity/race-0"
],
[
"1605.06083-Ethnicity/race-0",
"1605.06083-Linguistic bias-2",
"1605.06083-Ethnicity/race-2"
]
] | [
"Looking for adjectives marking the noun \"baby\" and also looking for most-common adjectives related to certain nouns using POS-tagging",
"Ethnic bias"
] | 12 |
1804.05918 | Improving Implicit Discourse Relation Classification by Modeling Inter-dependencies of Discourse Units in a Paragraph | We argue that semantic meanings of a sentence or clause cannot be interpreted independently from the rest of a paragraph, or independently from all discourse relations and the overall paragraph-level discourse structure. With the goal of improving implicit discourse relation classification, we introduce a paragraph-level neural network that models inter-dependencies between discourse units as well as discourse relation continuity and patterns, and predicts a sequence of discourse relations in a paragraph. Experimental results show that our model outperforms the previous state-of-the-art systems on the benchmark corpus of PDTB. | {
"paragraphs": [
[
"PDTB-style discourse relations, mostly defined between two adjacent text spans (i.e., discourse units, either clauses or sentences), specify how two discourse units are logically connected (e.g., causal, contrast). Recognizing discourse relations is one crucial step in discourse analysis and can be beneficial for many downstream NLP applications such as information extraction, machine translation and natural language generation.",
"Commonly, explicit discourse relations were distinguished from implicit ones, depending on whether a discourse connective (e.g., “because” and “after”) appears between two discourse units BIBREF0 . While explicit discourse relation detection can be framed as a discourse connective disambiguation problem BIBREF1 , BIBREF2 and has achieved reasonable performance (F1 score $>$ 90%), implicit discourse relations have no discourse connective and are especially difficult to identify BIBREF3 , BIBREF2 , BIBREF4 . To fill the gap, implicit discourse relation prediction has drawn significant research interest recently and progress has been made BIBREF5 , BIBREF6 by modeling compositional meanings of two discourse units and exploiting word interactions between discourse units using neural tensor networks or attention mechanisms in neural nets. However, most of existing approaches ignore wider paragraph-level contexts beyond the two discourse units that are examined for predicting a discourse relation in between.",
"To further improve implicit discourse relation prediction, we aim to improve discourse unit representations by positioning a discourse unit (DU) in its wider context of a paragraph. The key observation is that semantic meaning of a DU can not be interpreted independently from the rest of the paragraph that contains it, or independently from the overall paragraph-level discourse structure that involve the DU. Considering the following paragraph with four discourse relations, one relation between each two adjacent DUs:",
"(1): [The Butler, Wis., manufacturer went public at $15.75 a share in August 1987,] $_{DU1}$ and (Explicit-Expansion) [Mr. Sim's goal then was a $29 per-share price by 1992.] $_{DU2}$ (Implicit-Expansion) [Strong earnings growth helped achieve that price far ahead of schedule, in August 1988.] $_{DU3}$ (Implicit-Comparison) [The stock has since softened, trading around $25 a share last week and closing yesterday at $23 in national over-the-counter trading.] $_{DU4}$ But (Explicit-Comparison) [Mr. Sim has set a fresh target of $50 a share by the end of reaching that goal.] $_{DU5}$ ",
"Clearly, each DU is an integral part of the paragraph and not independent from other units. First, predicting a discourse relation may require understanding wider paragraph-level contexts beyond two relevant DUs and the overall discourse structure of a paragraph. For example, the implicit “Comparison” discourse relation between DU3 and DU4 is difficult to identify without the background information (the history of per-share price) introduced in DU1 and DU2. Second, a DU may be involved in multiple discourse relations (e.g., DU4 is connected with both DU3 and DU5 with a “Comparison” relation), therefore the pragmatic meaning representation of a DU should reflect all the discourse relations the unit was involved in. Third, implicit discourse relation prediction should benefit from modeling discourse relation continuity and patterns in a paragraph that involve easy-to-identify explicit discourse relations (e.g., “Implicit-Comparison” relation is followed by “Explicit-Comparison” in the above example).",
"Following these observations, we construct a neural net model to process a paragraph each time and jointly build meaning representations for all DUs in the paragraph. The learned DU representations are used to predict a sequence of discourse relations in the paragraph, including both implicit and explicit relations. Although explicit relations are not our focus, predicting an explicit relation will help to reveal the pragmatic roles of its two DUs and reconstruct their representations, which will facilitate predicting neighboring implicit discourse relations that involve one of the DUs.",
"In addition, we introduce two novel designs to further improve discourse relation classification performance of our paragraph-level neural net model. First, previous work has indicated that recognizing explicit and implicit discourse relations requires different strategies, we therefore untie parameters in the discourse relation prediction layer of the neural networks and train two separate classifiers for predicting explicit and implicit discourse relations respectively. This unique design has improved both implicit and explicit discourse relation identification performance. Second, we add a CRF layer on top of the discourse relation prediction layer to fine-tune a sequence of predicted discourse relations by modeling discourse relation continuity and patterns in a paragraph.",
"Experimental results show that the intuitive paragraph-level discourse relation prediction model achieves improved performance on PDTB for both implicit discourse relation classification and explicit discourse relation classification."
],
[
"Since the PDTB BIBREF7 corpus was created, a surge of studies BIBREF8 , BIBREF3 , BIBREF9 , BIBREF10 have been conducted for predicting discourse relations, primarily focusing on the challenging task of implicit discourse relation classification when no explicit discourse connective phrase was presented. Early studies BIBREF11 , BIBREF3 , BIBREF2 , BIBREF12 focused on extracting linguistic and semantic features from two discourse units. Recent research BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 tried to model compositional meanings of two discourse units by exploiting interactions between words in two units with more and more complicated neural network models, including the ones using neural tensor BIBREF5 , BIBREF17 , BIBREF18 and attention mechanisms BIBREF6 , BIBREF19 , BIBREF20 . Another trend is to alleviate the shortage of annotated data by leveraging related external data, such as explicit discourse relations in PDTB BIBREF9 , BIBREF19 , BIBREF21 and unlabeled data obtained elsewhere BIBREF12 , BIBREF19 , often in a multi-task joint learning framework.",
"However, nearly all the previous works assume that a pair of discourse units is independent from its wider paragraph-level contexts and build their discourse relation prediction models based on only two relevant discourse units. In contrast, we model inter-dependencies of discourse units in a paragraph when building discourse unit representations; in addition, we model global continuity and patterns in a sequence of discourse relations, including both implicit and explicit relations.",
"Hierarchical neural network models BIBREF22 , BIBREF23 have been applied to RST-style discourse parsing BIBREF24 mainly for the purpose of generating text-level hierarchical discourse structures. In contrast, we use hierarchical neural network models to build context-aware sentence representations in order to improve implicit discourse relation prediction."
],
[
"Abstracting latent representations from a long sequence of words, such as a paragraph, is a challenging task. While several novel neural network models BIBREF25 , BIBREF26 have been introduced in recent years for encoding a paragraph, Recurrent Neural Network (RNN)-based methods remain the most effective approaches. RNNs, especially the long-short term memory (LSTM) BIBREF27 models, have been widely used to encode a paragraph for machine translation BIBREF28 , dialogue systems BIBREF29 and text summarization BIBREF30 because of its ability in modeling long-distance dependencies between words. In addition, among four typical pooling methods (sum, mean, last and max) for calculating sentence representations from RNN-encoded hidden states for individual words, max-pooling along with bidirectional LSTM (Bi-LSTM) BIBREF31 yields the current best universal sentence representation method BIBREF32 . We adopted a similar neural network architecture for paragraph encoding."
],
[
"Figure 1 illustrates the overall architecture of the discourse-level neural network model that consists of two Bi-LSTM layers, one max-pooling layer in between and one softmax prediction layer. The input of the neural network model is a paragraph containing a sequence of discourse units, while the output is a sequence of discourse relations with one relation between each pair of adjacent discourse units.",
"Given the words sequence of one paragraph as input, the lower Bi-LSTM layer will read the whole paragraph and calculate hidden states as word representations, and a max-pooling layer will be applied to abstract the representation of each discourse unit based on individual word representations. Then another Bi-LSTM layer will run over the sequence of discourse unit representations and compute new representations by further modeling semantic dependencies between discourse units within paragraph. The final softmax prediction layer will concatenate representations of two adjacent discourse units and predict the discourse relation between them.",
"Word Vectors as Input: The input of the paragraph-level discourse relation prediction model is a sequence of word vectors, one vector per word in the paragraph. In this work, we used the pre-trained 300-dimension Google English word2vec embeddings. For each word that is not in the vocabulary of Google word2vec, we will randomly initialize a vector with each dimension sampled from the range $[-0.25, 0.25]$ . In addition, recognizing key entities and discourse connective phrases is important for discourse relation recognition, therefore, we concatenate the raw word embeddings with extra linguistic features, specifically one-hot Part-Of-Speech tag embeddings and one-hot named entity tag embeddings.",
"Building Discourse Unit Representations: We aim to build discourse unit (DU) representations that sufficiently leverage cues for discourse relation prediction from paragraph-wide contexts, including the preceding and following discourse units in a paragraph. To process long paragraph-wide contexts, we take a bottom-up two-level abstraction approach and progressively generate a compositional representation of each word first (low level) and then generate a compositional representation of each discourse unit (high level), with a max-pooling operation in between. At both word-level and DU-level, we choose Bi-LSTM as our basic component for generating compositional representations, mainly considering its capability to capture long-distance dependencies between words (discourse units) and to incorporate influences of context words (discourse units) in each side.",
"Given a variable-length words sequence $X = (x_1,x_2,...,x_L)$ in a paragraph, the word-level Bi-LSTM will process the input sequence by using two separate LSTMs, one process the word sequence from the left to right while the other follows the reversed direction. Therefore, at each word position $t$ , we obtain two hidden states $\\overrightarrow{h_t}, \\overleftarrow{h_t}$ . We concatenate them to get the word representation $h_t = [\\overrightarrow{h_t}, \\overleftarrow{h_t}]$ . Then we apply max-pooling over the sequence of word representations for words in a discourse unit in order to get the discourse unit embedding: ",
"$$MP_{DU}[j] = \\max _{i=DU\\_start}^{DU\\_end}h_i[j]\\quad \\\\\nwhere, 1 \\le j \\le hidden\\_node\\_size$$ (Eq. 8) ",
"Next, the DU-level Bi-LSTM will process the sequence of discourse unit embeddings in a paragraph and generate two hidden states $\\overrightarrow{hDU_t}$ and $\\overleftarrow{hDU_t}$ at each discourse unit position. We concatenate them to get the discourse unit representation $hDU_t = [\\overrightarrow{hDU_t}, \\overleftarrow{hDU_t}]$ .",
"The Softmax Prediction Layer: Finally, we concatenate two adjacent discourse unit representations $hDU_{t-1}$ and $hDU_t$ and predict the discourse relation between them using a softmax function:",
"$$y_{t-1} = softmax(W_y*[hDU_{t-1},hDU_t]+b_y)$$ (Eq. 9) "
],
[
"Previous work BIBREF1 , BIBREF2 , BIBREF10 has revealed that recognizing explicit vs. implicit discourse relations requires different strategies. Note that in the PDTB dataset, explicit discourse relations were distinguished from implicit ones, depending on whether a discourse connective exists between two discourse units. Therefore, explicit discourse relation detection can be simplified as a discourse connective phrase disambiguation problem BIBREF1 , BIBREF2 . On the contrary, predicting an implicit discourse relation should rely on understanding the overall contents of its two discourse units BIBREF2 , BIBREF10 .",
"Considering the different natures of explicit vs. implicit discourse relation prediction, we decide to untie parameters at the final discourse relation prediction layer and train two softmax classifiers, as illustrated in Figure 2 . The two classifiers have different sets of parameters, with one classifier for only implicit discourse relations and the other for only explicit discourse relations.",
"$$y_{t-1} =\n{\\left\\lbrace \\begin{array}{ll}\nsoftmax(W_{exp}[hDU_{t-1},hDU_t]+b_{exp}),&exp\\\\\nsoftmax(W_{imp}[hDU_{t-1},hDU_t]+b_{imp}),&imp\n\\end{array}\\right.}$$ (Eq. 12) ",
"The loss function used for the neural network training considers loss induced by both implicit relation prediction and explicit relation prediction:",
"$$Loss = Loss_{imp} + \\alpha *Loss_{exp}$$ (Eq. 13) ",
"The $\\alpha $ , in the full system, is set to be 1, which means that minimizing the loss in predicting either type of discourse relations is equally important. In the evaluation, we will also evaluate a system variant, where we will set $\\alpha = 0$ , which means that the neural network will not attempt to predict explicit discourse relations and implicit discourse relation prediction will not be influenced by predicting neighboring explicit discourse relations."
],
[
"Data analysis and many linguistic studies BIBREF11 , BIBREF33 , BIBREF34 , BIBREF35 have repeatedly shown that discourse relations feature continuity and patterns (e.g., a temporal relation is likely to be followed by another temporal relation). Especially, BIBREF11 firstly reported that patterns exist between implicit discourse relations and their neighboring explicit discourse relations.",
"Motivated by these observations, we aim to improve implicit discourse relation detection by making use of easily identifiable explicit discourse relations and taking into account global patterns of discourse relation distributions. Specifically, we add an extra CRF layer at the top of the softmax prediction layer (shown in figure 3 ) to fine-tune predicted discourse relations by considering their inter-dependencies.",
"The Conditional Random Fields BIBREF36 (CRF) layer updates a state transition matrix, which can effectively adjust the current label depending on proceeding and following labels. Both training and decoding of the CRF layer can be solved efficiently by using the Viterbi algorithm. With the CRF layer, the model jointly assigns a sequence of discourse relations between each two adjacent discourse units in a paragraph, including both implicit and explicit relations, by considering relevant discourse unit representations as well as global discourse relation patterns."
],
[
"The Penn Discourse Treebank (PDTB): We experimented with PDTB v2.0 BIBREF7 which is the largest annotated corpus containing 36k discourse relations in 2,159 Wall Street Journal (WSJ) articles. In this work, we focus on the top-level discourse relation senses which are consist of four major semantic classes: Comparison (Comp), Contingency (Cont), Expansion (Exp) and Temporal (Temp). We followed the same PDTB section partition BIBREF12 as previous work and used sections 2-20 as training set, sections 21-22 as test set, and sections 0-1 as development set. Table 1 presents the data distributions we collected from PDTB.",
"Preprocessing: The PDTB dataset documents its annotations as a list of discourse relations, with each relation associated with its two discourse units. To recover the paragraph context for a discourse relation, we match contents of its two annotated discourse units with all paragraphs in corresponding raw WSJ article. When all the matching was completed, each paragraph was split into a sequence of discourse units, with one discourse relation (implicit or explicit) between each two adjacent discourse units. Following this method, we obtained 14,309 paragraphs in total, each contains 3.2 discourse units on average. Table 2 shows the distribution of paragraphs based on the number of discourse units in a paragraph."
],
[
"We tuned the parameters based on the best performance on the development set. We fixed the weights of word embeddings during training. All the LSTMs in our neural network use the hidden state size of 300. To avoid overfitting, we applied dropout BIBREF37 with dropout ratio of 0.5 to both input and output of LSTM layers. To prevent the exploding gradient problem in training LSTMs, we adopt gradient clipping with gradient L2-norm threshold of 5.0. These parameters remain the same for all our proposed models as well as our own baseline models.",
"We chose the standard cross-entropy loss function for training our neural network model and adopted Adam BIBREF38 optimizer with the initial learning rate of 5e-4 and a mini-batch size of 128. If one instance is annotated with two labels (4% of all instances), we use both of them in loss calculation and regard the prediction as correct if model predicts one of the annotated labels. All the proposed models were implemented with Pytorch and converged to the best performance within 20-40 epochs.",
"To alleviate the influence of randomness in neural network model training and obtain stable experimental results, we ran each of the proposed models and our own baseline models ten times and report the average performance of each model instead of the best performance as reported in many previous works."
],
[
"We compare the performance of our neural network model with several recent discourse relation recognition systems that only consider two relevant discourse units.",
" BIBREF12 : improves implicit discourse relation prediction by creating more training instances from the Gigaword corpus utilizing explicitly mentioned discourse connective phrases.",
" BIBREF5 : a gated relevance network (GRN) model with tensors to capture semantic interactions between words from two discourse units.",
" BIBREF9 : a convolutional neural network model that leverages relations between different styles of discourse relations annotations (PDTB and RST BIBREF24 ) in a multi-task joint learning framework.",
" BIBREF6 : a multi-level attention-over-attention model to dynamically exploit features from two discourse units for recognizing an implicit discourse relation.",
" BIBREF21 : a novel pipelined adversarial framework to enable an adaptive imitation competition between the implicit network and a rival feature discriminator with access to connectives.",
" BIBREF18 : a Simple Word Interaction Model (SWIM) with tensors that captures both linear and quadratic relations between words from two discourse units.",
" BIBREF19 : an attention-based LSTM neural network that leverages explicit discourse relations in PDTB and unannotated external data in a multi-task joint learning framework."
],
[
"On the PDTB corpus, both binary classification and multi-way classification settings are commonly used to evaluate the implicit discourse relation recognition performance. We noticed that all the recent works report class-wise implicit relation prediction performance in the binary classification setting, while none of them report detailed performance in the multi-way classification setting. In the binary classification setting, separate “one-versus-all” binary classifiers were trained, and each classifier is to identify one class of discourse relations. Although separate classifiers are generally more flexible in combating with imbalanced distributions of discourse relation classes and obtain higher class-wise prediction performance, one pair of discourse units may be tagged with all four discourse relations without proper conflict resolution. Therefore, the multi-way classification setting is more appropriate and natural in evaluating a practical end-to-end discourse parser, and we mainly evaluate our proposed models using the four-way multi-class classification setting.",
"Since none of the recent previous work reported class-wise implicit relation classification performance in the multi-way classification setting, for better comparisons, we re-implemented the neural tensor network architecture (so-called SWIM in BIBREF18 ) which is essentially a Bi-LSTM model with tensors and report its detailed evaluation result in the multi-way classification setting. As another baseline, we report the performance of a Bi-LSTM model without tensors as well. Both baseline models take two relevant discourse units as the only input.",
"For additional comparisons, We also report the performance of our proposed models in the binary classification setting."
],
[
"Multi-way Classification: The first section of table 3 shows macro average F1-scores and accuracies of previous works. The second section of table 3 shows the multi-class classification results of our implemented baseline systems. Consistent with results of previous works, neural tensors, when applied to Bi-LSTMs, improved implicit discourse relation prediction performance. However, the performance on the three small classes (Comp, Cont and Temp) remains low.",
"The third section of table 3 shows the multi-class classification results of our proposed paragraph-level neural network models that capture inter-dependencies among discourse units. The first row shows the performance of a variant of our basic model, where we only identify implicit relations and ignore identifying explicit relations by setting the $\\alpha $ in equation (5) to be 0. Compared with the baseline Bi-LSTM model, the only difference is that this model considers paragraph-wide contexts and model inter-dependencies among discourse units when building representation for individual DU. We can see that this model has greatly improved implicit relation classification performance across all the four relations and improved the macro-average F1-score by over 7 percents. In addition, compared with the baseline Bi-LSTM model with tensor, this model improved implicit relation classification performance across the three small classes, with clear performance gains of around 2 and 8 percents on contingency and temporal relations respectively, and overall improved the macro-average F1-score by 2.2 percents.",
"The second row shows the performance of our basic paragraph-level model which predicts both implicit and explicit discourse relations in a paragraph. Compared to the variant system (the first row), the basic model further improved the classification performance on the first three implicit relations. Especially on the contingency relation, the classification performance was improved by another 1.42 percents. Moreover, the basic model yields good performance for recognizing explicit discourse relations as well, which is comparable with previous best result (92.05% macro F1-score and 93.09% accuracy as reported in BIBREF11 ).",
"After untying parameters in the softmax prediction layer, implicit discourse relation classification performance was improved across all four relations, meanwhile, the explicit discourse relation classification performance was also improved. The CRF layer further improved implicit discourse relation recognition performance on the three small classes. In summary, our full paragraph-level neural network model achieves the best macro-average F1-score of 48.82% in predicting implicit discourse relations, which outperforms previous neural tensor network models (e.g., BIBREF18 ) by more than 2 percents and outperforms the best previous system BIBREF19 by 1 percent.",
"Binary Classification: From table 4 , we can see that compared against the best previous systems, our paragraph-level model with untied parameters in the prediction layer achieves F1-score improvements of 6 points on Comparison and 7 points on Temporal, which demonstrates that paragraph-wide contexts are important in detecting minority discourse relations. Note that the CRF layer of the model is not suitable for binary classification."
],
[
"As we explained in section 4.2, we ran our models for 10 times to obtain stable average performance. Then we also created ensemble models by applying majority voting to combine results of ten runs. From table 5 , each ensemble model obtains performance improvements compared with single model. The full model achieves performance boosting of (51.84 - 48.82 = 3.02) and (94.17 - 93.21 = 0.96) in macro F1-scores for predicting implicit and explicit discourse relations respectively. Furthermore, the ensemble model achieves the best performance for predicting both implicit and explicit discourse relations simultaneously."
],
[
"To understand the influence of paragraph lengths to our paragraph-level models, we divide paragraphs in the PDTB test set into several subsets based on the number of DUs in a paragraph, and then evaluate our proposed models on each subset separately. From Figure 4 , we can see that our paragraph-level models (the latter three) overall outperform DU-pair baselines across all the subsets. As expected, the paragraph-level models achieve clear performance gains on long paragraphs (with more than 5 DUs) by extensively modeling mutual influences of DUs in a paragraph. But somewhat surprisingly, the paragraph-level models achieve noticeable performance gains on short paragraphs (with 2 or 3 DUs) as well. We hypothesize that by learning more appropriate discourse-aware DU representations in long paragraphs, our paragraph-level models reduce bias of using DU representations in predicting discourse relations, which benefits discourse relation prediction in short paragraphs as well."
],
[
"For the example ( \"Implicit Discourse Relation Recognition\" ), the baseline neural tensor model predicted both implicit relations wrongly (“Implicit-Contingency” between DU2 and DU3; “Implicit-Expansion” between DU3 and DU4), while our paragraph-level model predicted all the four discourse relations correctly, which indicates that paragraph-wide contexts play a key role in implicit discourse relation prediction.",
"For another example:",
"(2): [Marshall came clanking in like Marley's ghost dragging those chains of brigades and air wings and links with Arab despots.] $_{DU1}$ (Implicit-Temporal) [He wouldn't leave] $_{DU2}$ until (Explicit-Temporal) [Mr. Cheney promised to do whatever the Pentagon systems analysts told him.] $_{DU3}$ ",
"Our basic paragraph-level model wrongly predicted the implicit discourse relation between DU1 and DU2 to be “Implicit-Comparison”, without being able to effectively use the succeeding “Explicit-Temporal” relation. On the contrary, the full model corrected this mistake by modeling discourse relation patterns with the CRF layer."
],
[
"We have presented a paragraph-level neural network model that takes a sequence of discourse units as input, models inter-dependencies between discourse units as well as discourse relation continuity and patterns, and predicts a sequence of discourse relations in a paragraph. By building wider-context informed discourse unit representations and capturing the overall discourse structure, the paragraph-level neural network model outperforms the best previous models for implicit discourse relation recognition on the PDTB dataset."
],
[
"We acknowledge the support of NVIDIA Corporation for their donation of one GeForce GTX TITAN X GPU used for this research."
]
],
"section_name": [
"Introduction",
"Implicit Discourse Relation Recognition",
"Paragraph Encoding",
"The Basic Model Architecture",
"Untie Parameters in the Softmax Prediction Layer (Implicit vs. Explicit)",
"Fine-tune Discourse Relation Predictions Using a CRF Layer",
"Dataset and Preprocessing",
"Parameter Settings and Model Training",
"Baseline Models and Systems",
"Evaluation Settings",
"Experimental Results",
"Ensemble Model",
"Impact of Paragraph Length",
"Example Analysis",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"d0057a1eb49771d549f30d8207d89be615af81b8",
"ec493dbae682cabf5b545141baca7322c3aa547d"
],
"answer": [
{
"evidence": [
"The second row shows the performance of our basic paragraph-level model which predicts both implicit and explicit discourse relations in a paragraph. Compared to the variant system (the first row), the basic model further improved the classification performance on the first three implicit relations. Especially on the contingency relation, the classification performance was improved by another 1.42 percents. Moreover, the basic model yields good performance for recognizing explicit discourse relations as well, which is comparable with previous best result (92.05% macro F1-score and 93.09% accuracy as reported in BIBREF11 ).",
"After untying parameters in the softmax prediction layer, implicit discourse relation classification performance was improved across all four relations, meanwhile, the explicit discourse relation classification performance was also improved. The CRF layer further improved implicit discourse relation recognition performance on the three small classes. In summary, our full paragraph-level neural network model achieves the best macro-average F1-score of 48.82% in predicting implicit discourse relations, which outperforms previous neural tensor network models (e.g., BIBREF18 ) by more than 2 percents and outperforms the best previous system BIBREF19 by 1 percent.",
"As we explained in section 4.2, we ran our models for 10 times to obtain stable average performance. Then we also created ensemble models by applying majority voting to combine results of ten runs. From table 5 , each ensemble model obtains performance improvements compared with single model. The full model achieves performance boosting of (51.84 - 48.82 = 3.02) and (94.17 - 93.21 = 0.96) in macro F1-scores for predicting implicit and explicit discourse relations respectively. Furthermore, the ensemble model achieves the best performance for predicting both implicit and explicit discourse relations simultaneously."
],
"extractive_spans": [
"explicit discourse relations"
],
"free_form_answer": "",
"highlighted_evidence": [
"the basic model yields good performance for recognizing explicit discourse relations as well, which is comparable with previous best result (92.05% macro F1-score and 93.09% accuracy as reported in BIBREF11 ).",
"After untying parameters in the softmax prediction layer, implicit discourse relation classification performance was improved across all four relations, meanwhile, the explicit discourse relation classification performance was also improved.",
"Then we also created ensemble models by applying majority voting to combine results of ten runs. From table 5 , each ensemble model obtains performance improvements compared with single model. The full model achieves performance boosting of (51.84 - 48.82 = 3.02) and (94.17 - 93.21 = 0.96) in macro F1-scores for predicting implicit and explicit discourse relations respectively. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 3: Multi-class Classification Results on PDTB. We report accuracy (Acc) and macro-average F1scores for both explicit and implicit discourse relation predictions. We also report class-wise F1 scores.",
"The Penn Discourse Treebank (PDTB): We experimented with PDTB v2.0 BIBREF7 which is the largest annotated corpus containing 36k discourse relations in 2,159 Wall Street Journal (WSJ) articles. In this work, we focus on the top-level discourse relation senses which are consist of four major semantic classes: Comparison (Comp), Contingency (Cont), Expansion (Exp) and Temporal (Temp). We followed the same PDTB section partition BIBREF12 as previous work and used sections 2-20 as training set, sections 21-22 as test set, and sections 0-1 as development set. Table 1 presents the data distributions we collected from PDTB.",
"Multi-way Classification: The first section of table 3 shows macro average F1-scores and accuracies of previous works. The second section of table 3 shows the multi-class classification results of our implemented baseline systems. Consistent with results of previous works, neural tensors, when applied to Bi-LSTMs, improved implicit discourse relation prediction performance. However, the performance on the three small classes (Comp, Cont and Temp) remains low."
],
"extractive_spans": [],
"free_form_answer": "Best: Expansion (Exp). Worst: Comparison (Comp).",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: Multi-class Classification Results on PDTB. We report accuracy (Acc) and macro-average F1scores for both explicit and implicit discourse relation predictions. We also report class-wise F1 scores.",
"In this work, we focus on the top-level discourse relation senses which are consist of four major semantic classes: Comparison (Comp), Contingency (Cont), Expansion (Exp) and Temporal (Temp).",
"However, the performance on the three small classes (Comp, Cont and Temp) remains low."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a",
"ca2a4695129d0180768a955fb5910d639f79aa34"
]
},
{
"annotation_id": [
"6f551ad62c2a7def2b2bf4e992cd9ab8bded9d89",
"b7e37c24b10414781313c8d438d730cea36b76f8"
],
"answer": [
{
"evidence": [
"The second row shows the performance of our basic paragraph-level model which predicts both implicit and explicit discourse relations in a paragraph. Compared to the variant system (the first row), the basic model further improved the classification performance on the first three implicit relations. Especially on the contingency relation, the classification performance was improved by another 1.42 percents. Moreover, the basic model yields good performance for recognizing explicit discourse relations as well, which is comparable with previous best result (92.05% macro F1-score and 93.09% accuracy as reported in BIBREF11 ).",
"After untying parameters in the softmax prediction layer, implicit discourse relation classification performance was improved across all four relations, meanwhile, the explicit discourse relation classification performance was also improved. The CRF layer further improved implicit discourse relation recognition performance on the three small classes. In summary, our full paragraph-level neural network model achieves the best macro-average F1-score of 48.82% in predicting implicit discourse relations, which outperforms previous neural tensor network models (e.g., BIBREF18 ) by more than 2 percents and outperforms the best previous system BIBREF19 by 1 percent.",
"As we explained in section 4.2, we ran our models for 10 times to obtain stable average performance. Then we also created ensemble models by applying majority voting to combine results of ten runs. From table 5 , each ensemble model obtains performance improvements compared with single model. The full model achieves performance boosting of (51.84 - 48.82 = 3.02) and (94.17 - 93.21 = 0.96) in macro F1-scores for predicting implicit and explicit discourse relations respectively. Furthermore, the ensemble model achieves the best performance for predicting both implicit and explicit discourse relations simultaneously."
],
"extractive_spans": [
"the basic model yields good performance for recognizing explicit discourse relations as well, which is comparable with previous best result (92.05% macro F1-score and 93.09% accuracy as reported in BIBREF11 ).",
"full paragraph-level neural network model achieves the best macro-average F1-score of 48.82% in predicting implicit discourse relations, which outperforms previous neural tensor network models (e.g., BIBREF18 ) by more than 2 percents and outperforms the best previous system BIBREF19 by 1 percent.",
"Then we also created ensemble models by applying majority voting to combine results of ten runs. From table 5 , each ensemble model obtains performance improvements compared with single model. The full model achieves performance boosting of (51.84 - 48.82 = 3.02) and (94.17 - 93.21 = 0.96) in macro F1-scores for predicting implicit and explicit discourse relations respectively. "
],
"free_form_answer": "",
"highlighted_evidence": [
"the basic model yields good performance for recognizing explicit discourse relations as well, which is comparable with previous best result (92.05% macro F1-score and 93.09% accuracy as reported in BIBREF11 ).",
"In summary, our full paragraph-level neural network model achieves the best macro-average F1-score of 48.82% in predicting implicit discourse relations, which outperforms previous neural tensor network models (e.g., BIBREF18 ) by more than 2 percents and outperforms the best previous system BIBREF19 by 1 percent.",
"Then we also created ensemble models by applying majority voting to combine results of ten runs. From table 5 , each ensemble model obtains performance improvements compared with single model. The full model achieves performance boosting of (51.84 - 48.82 = 3.02) and (94.17 - 93.21 = 0.96) in macro F1-scores for predicting implicit and explicit discourse relations respectively. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"After untying parameters in the softmax prediction layer, implicit discourse relation classification performance was improved across all four relations, meanwhile, the explicit discourse relation classification performance was also improved. The CRF layer further improved implicit discourse relation recognition performance on the three small classes. In summary, our full paragraph-level neural network model achieves the best macro-average F1-score of 48.82% in predicting implicit discourse relations, which outperforms previous neural tensor network models (e.g., BIBREF18 ) by more than 2 percents and outperforms the best previous system BIBREF19 by 1 percent."
],
"extractive_spans": [
"1 percent"
],
"free_form_answer": "",
"highlighted_evidence": [
"In summary, our full paragraph-level neural network model achieves the best macro-average F1-score of 48.82% in predicting implicit discourse relations, which outperforms previous neural tensor network models (e.g., BIBREF18 ) by more than 2 percents and outperforms the best previous system BIBREF19 by 1 percent."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a",
"ca2a4695129d0180768a955fb5910d639f79aa34"
]
}
],
"nlp_background": [
"infinity",
"infinity"
],
"paper_read": [
"no",
"no"
],
"question": [
"What discourse relations does it work best/worst for?",
"How much does this model improve state-of-the-art?"
],
"question_id": [
"f17ca24b135f9fe6bb25dc5084b13e1637ec7744",
"bd5bd1765362c2d972a762ca12675108754aa437"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
""
],
"topic_background": [
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: The Basic Model Architecture for Paragraph-level Discourse Relations Sequence Prediction.",
"Figure 2: Untie Parameters in the Prediction Layer",
"Figure 3: Fine-tune Discourse Relations with a CRF layer.",
"Table 1: Distributions of Four Top-level Discourse Relations in PDTB.",
"Table 2: Distributions of Paragraphs.",
"Table 3: Multi-class Classification Results on PDTB. We report accuracy (Acc) and macro-average F1scores for both explicit and implicit discourse relation predictions. We also report class-wise F1 scores.",
"Table 4: Binary Classification Results on PDTB. We report F1-scores for implicit discourse relations.",
"Table 5: Multi-class Classification Results of Ensemble Models on PDTB.",
"Figure 4: Impact of Paragraph Length. We plot the macro-average F1-score of implicit discourse relation classification on instances with different paragraph length."
],
"file": [
"4-Figure1-1.png",
"4-Figure2-1.png",
"5-Figure3-1.png",
"6-Table1-1.png",
"6-Table2-1.png",
"7-Table3-1.png",
"8-Table4-1.png",
"8-Table5-1.png",
"9-Figure4-1.png"
]
} | [
"What discourse relations does it work best/worst for?"
] | [
[
"1804.05918-Ensemble Model-0",
"1804.05918-7-Table3-1.png",
"1804.05918-Experimental Results-3",
"1804.05918-Experimental Results-0",
"1804.05918-Experimental Results-2",
"1804.05918-Dataset and Preprocessing-0"
]
] | [
"Best: Expansion (Exp). Worst: Comparison (Comp)."
] | 13 |
1910.10781 | Hierarchical Transformers for Long Document Classification | BERT, which stands for Bidirectional Encoder Representations from Transformers, is a recently introduced language representation model based upon the transfer learning paradigm. We extend its fine-tuning procedure to address one of its major limitations - applicability to inputs longer than a few hundred words, such as transcripts of human call conversations. Our method is conceptually simple. We segment the input into smaller chunks and feed each of them into the base model. Then, we propagate each output through a single recurrent layer, or another transformer, followed by a softmax activation. We obtain the final classification decision after the last segment has been consumed. We show that both BERT extensions are quick to fine-tune and converge after as little as 1 epoch of training on a small, domain-specific data set. We successfully apply them in three different tasks involving customer call satisfaction prediction and topic classification, and obtain a significant improvement over the baseline models in two of them. | {
"paragraphs": [
[
"Bidirectional Encoder Representations from Transformers (BERT) is a novel Transformer BIBREF0 model, which recently achieved state-of-the-art performance in several language understanding tasks, such as question answering, natural language inference, semantic similarity, sentiment analysis, and others BIBREF1. While well-suited to dealing with relatively short sequences, Transformers suffer from a major issue that hinders their applicability in classification of long sequences, i.e. they are able to consume only a limited context of symbols as their input BIBREF2.",
"There are several natural language (NLP) processing tasks that involve such long sequences. Of particular interest are topic identification of spoken conversations BIBREF3, BIBREF4, BIBREF5 and call center customer satisfaction prediction BIBREF6, BIBREF7, BIBREF8, BIBREF9. Call center conversations, while usually quite short and to the point, often involve agents trying to solve very complex issues that the customers experience, resulting in some calls taking even an hour or more. For speech analytics purposes, these calls are typically transcribed using an automatic speech recognition (ASR) system, and processed in textual representations further down the NLP pipeline. These transcripts sometimes exceed the length of 5000 words. Furthermore, temporal information might play an important role in tasks like CSAT. For example, a customer may be angry at the beginning of the call, but after her issue is resolved, she would be very satisfied with the way it was handled. Therefore, simple bag of words models, or any model that does not include temporal dependencies between the inputs, may not be well-suited to handle this category of tasks. This motivates us to employ model such as BERT in this task.",
"In this paper, we propose a method that builds upon BERT's architecture. We split the input text sequence into shorter segments in order to obtain a representation for each of them using BERT. Then, we use either a recurrent LSTM BIBREF10 network, or another Transformer, to perform the actual classification. We call these techniques Recurrence over BERT (RoBERT) and Transformer over BERT (ToBERT). Given that these models introduce a hierarchy of representations (segment-wise and document-wise), we refer to them as Hierarchical Transformers. To the best of our knowledge, no attempt has been done before to use the Transformer architecture for classification of such long sequences.",
"Our novel contributions are:",
"Two extensions - RoBERT and ToBERT - to the BERT model, which enable its application in classification of long texts by performing segmentation and using another layer on top of the segment representations.",
"State-of-the-art results on the Fisher topic classification task.",
"Significant improvement on the CSAT prediction task over the MS-CNN model."
],
[
"Several dimensionality reduction algorithms such as RBM, autoencoders, subspace multinomial models (SMM) are used to obtain a low dimensional representation of documents from a simple BOW representation and then classify it using a simple linear classifiers BIBREF11, BIBREF12, BIBREF13, BIBREF4. In BIBREF14 hierarchical attention networks are used for document classification. They evaluate their model on several datasets with average number of words around 150. Character-level CNN are explored in BIBREF15 but it is prohibitive for very long documents. In BIBREF16, dataset collected from arXiv papers is used for classification. For classification, they sample random blocks of words and use them together for classification instead of using full document which may work well as arXiv papers are usually coherent and well written on a well defined topic. Their method may not work well on spoken conversations as random block of words usually do not represent topic of full conversation.",
"Several researchers addressed the problem of predicting customer satisfaction BIBREF6, BIBREF7, BIBREF8, BIBREF9. In most of these works, logistic regression, SVM, CNN are applied on different kinds of representations.",
"In BIBREF17, authors use BERT for document classification but the average document length is less than BERT maximum length 512. TransformerXL BIBREF2 is an extension to the Transformer architecture that allows it to better deal with long inputs for the language modelling task. It relies on the auto-regressive property of the model, which is not the case in our tasks."
],
[
"Because our work builds heavily upon BERT, we provide a brief summary of its features. BERT is built upon the Transformer architecture BIBREF0, which uses self-attention, feed-forward layers, residual connections and layer normalization as the main building blocks. It has two pre-training objectives:",
"Masked language modelling - some of the words in a sentence are being masked and the model has to predict them based on the context (note the difference from the typical autoregressive language model training objective);",
"Next sentence prediction - given two input sequences, decide whether the second one is the next sentence or not.",
"BERT has been shown to beat the state-of-the-art performance on 11 tasks with no modifications to the model architecture, besides adding a task-specific output layer BIBREF1. We follow same procedure suggested in BIBREF1 for our tasks. Fig. FIGREF8 shows the BERT model for classification. We obtain two kinds of representation from BERT: pooled output from last transformer block, denoted by H, and posterior probabilities, denoted by P. There are two variants of BERT - BERT-Base and BERT-Large. In this work we are using BERT-Base for faster training and experimentation, however, our methods are applicable to BERT-Large as well. BERT-Base and BERT-Large are different in model parameters such as number of transformer blocks, number of self-attention heads. Total number of parameters in BERT-Base are 110M and 340M in BERT-Large.",
"BERT suffers from major limitations in terms of handling long sequences. Firstly, the self-attention layer has a quadratic complexity $O(n^2)$ in terms of the sequence length $n$ BIBREF0. Secondly, BERT uses a learned positional embeddings scheme BIBREF1, which means that it won't likely be able to generalize to positions beyond those seen in the training data.",
"To investigate the effect of fine-tuning BERT on task performance, we use either the pre-trained BERT weights, or the weights from a BERT fine-tuned on the task-specific dataset on a segment-level (i.e. we preserve the original label but fine-tune on each segment separately instead of on the whole text sequence). We compare these results to using the fine-tuned segment-level BERT predictions directly as inputs to the next layer."
],
[
"Given that BERT is limited to a particular input length, we split the input sequence into segments of a fixed size with overlap. For each of these segments, we obtain H or P from BERT model. We then stack these segment-level representations into a sequence, which serves as input to a small (100-dimensional) LSTM layer. Its output serves as a document embedding. Finally, we use two fully connected layers with ReLU (30-dimensional) and softmax (the same dimensionality as the number of classes) activations to obtain the final predictions.",
"With this approach, we overcome BERT's computational complexity, reducing it to $O(n/k * k^2) = O(nk)$ for RoBERT, with $k$ denoting the segment size (the LSTM component has negligible linear complexity $O(k)$). The positional embeddings are also no longer an issue."
],
[
"Given that Transformers' edge over recurrent networks is their ability to effectively capture long distance relationships between words in a sequence BIBREF0, we experiment with replacing the LSTM recurrent layer in favor of a small Transformer model (2 layers of transformer building block containing self-attention, fully connected, etc.). To investigate if preserving the information about the input sequence order is important, we also build a variant of ToBERT which learns positional embeddings at the segment-level representations (but is limited to sequences of length seen during the training).",
"ToBERT's computational complexity $O(\\frac{n^2}{k^2})$ is asymptotically inferior to RoBERT, as the top-level Transformer model again suffers from quadratic complexity in the number of segments. However, in practice this number is much smaller than the input sequence length (${\\frac{n}{k}} << n$), so we haven't observed performance or memory issues with our datasets."
],
[
"We evaluated our models on 3 different datasets:",
"CSAT dataset for CSAT prediction, consisting of spoken transcripts (automatic via ASR).",
"20 newsgroups for topic identification task, consisting of written text;",
"Fisher Phase 1 corpus for topic identification task, consisting of spoken transcripts (manual);"
],
[
"CSAT dataset consists of US English telephone speech from call centers. For each call in this dataset, customers participated in that call gave a rating on his experience with agent. Originally, this dataset has labels rated on a scale 1-9 with 9 being extremely satisfied and 1 being extremely dissatisfied. Fig. FIGREF16 shows the histogram of ratings for our dataset. As the distribution is skewed towards extremes, we choose to do binary classification with ratings above 4.5 as satisfied and below 4.5 as dissatisfied. Quantization of ratings also helped us to create a balanced dataset. This dataset contains 4331 calls and we split them into 3 sets for our experiments: 2866 calls for training, 362 calls for validation and, finally, 1103 calls for testing.",
"We obtained the transcripts by employing an ASR system. The ASR system uses TDNN-LSTM acoustic model trained on Fisher and Switchboard datasets with lattice-free maximum mutual information criterion BIBREF18. The word error rates using four-gram language models were 9.2% and 17.3% respectively on Switchboard and CallHome portions of Eval2000 dataset."
],
[
"20 newsgroups data set is one of the frequently used datasets in the text processing community for text classification and text clustering. This data set contains approximately 20,000 English documents from 20 topics to be identified, with 11314 documents for training and 7532 for testing. In this work, we used only 90% of documents for training and the remaining 10% for validation. For fair comparison with other publications, we used 53160 words vocabulary set available in the datasets website."
],
[
"Fisher Phase 1 US English corpus is often used for automatic speech recognition in speech community. In this work, we used it for topic identification as in BIBREF3. The documents are 10-minute long telephone conversations between two people discussing a given topic. We used same training and test splits as BIBREF3 in which 1374 and 1372 documents are used for training and testing respectively. For validation of our model, we used 10% of training dataset and the remaining 90% was used for actual model training. The number of topics in this data set is 40."
],
[
"Table TABREF22 shows statistics of our datasets. It can be observed that average length of Fisher is much higher than 20 newsgroups and CSAT. Cumulative distribution of document lengths for each dataset is shown in Fig. FIGREF21. It can be observed that almost all of the documents in Fisher dataset have length more than 1000 words. For CSAT, more than 50% of the documents have length greater than 500 and for 20newsgroups only 10% of the documents have length greater than 500. Note that, for CSAT and 20newsgroups, there are few documents with length more than 5000."
],
[
"In this work, we split document into segments of 200 tokens with a shift of 50 tokens to extract features from BERT model. For RoBERT, LSTM model is trained to minimize cross-entropy loss with Adam optimizer BIBREF19. The initial learning rate is set to $0.001$ and is reduced by a factor of $0.95$ if validation loss does not decrease for 3-epochs. For ToBERT, the Transformer is trained with the default BERT version of Adam optimizer BIBREF1 with an initial learning rate of $5e$-5. We report accuracy in all of our experiments. We chose a model with the best validation accuracy to calculate accuracy on the test set. To accomodate for non-determinism of some TensorFlow GPU operations, we report accuracy averaged over 5 runs."
],
[
"Table TABREF25 presents results using pre-trained BERT features. We extracted features from the pooled output of final transformer block as these were shown to be working well for most of the tasks BIBREF1. The features extracted from a pre-trained BERT model without any fine-tuning lead to a sub-par performance. However, We also notice that ToBERT model exploited the pre-trained BERT features better than RoBERT. It also converged faster than RoBERT. Table TABREF26 shows results using features extracted after fine-tuning BERT model with our datasets. Significant improvements can be observed compared to using pre-trained BERT features. Also, it can be noticed that ToBERT outperforms RoBERT on Fisher and 20newsgroups dataset by 13.63% and 0.81% respectively. On CSAT, ToBERT performs slightly worse than RoBERT but it is not statistically significant as this dataset is small.",
"Table TABREF27 presents results using fine-tuned BERT predictions instead of the pooled output from final transformer block. For each document, having obtained segment-wise predictions we can obtain final prediction for the whole document in three ways:",
"Compute the average of all segment-wise predictions and find the most probable class;",
"Find the most frequently predicted class;",
"Train a classification model.",
"It can be observed from Table TABREF27 that a simple averaging operation or taking most frequent predicted class works competitively for CSAT and 20newsgroups but not for the Fisher dataset. We believe the improvements from using RoBERT or ToBERT, compared to simple averaging or most frequent operations, are proportional to the fraction of long documents in the dataset. CSAT and 20newsgroups have (on average) significantly shorter documents than Fisher, as seen in Fig. FIGREF21. Also, significant improvements for Fisher could be because of less confident predictions from BERT model as this dataset has 40 classes. Fig. FIGREF31 presents the comparison of average voting and ToBERT for various document length ranges for Fisher dataset. We used fine-tuned BERT segment-level predictions (P) for this analysis. It can be observed that ToBERT outperforms average voting in every interval. To the best of our knowledge, this is a state-of-the-art result reported on the Fisher dataset.",
"Table TABREF32 presents the effect of position embeddings on the model performance. It can be observed that position embeddings did not significantly affect the model performance for Fisher and 20newsgroups, but they helped slightly in CSAT prediction (an absolute improvement of 0.64% F1-score). We think that this is explained by the fact that Fisher and 20newsgroups are topic identification tasks, and the topic does not change much throughout these documents. However, CSAT may vary during the call, and in some cases a naive assumption that the sequential nature of the transcripts is irrelevant may lead to wrong conclusions.",
"Table TABREF33 compares our results with previous works. It can be seen that our model ToBERT outperforms CNN based experiments by significant margin on CSAT and Fisher datasets. For CSAT dataset, we used multi-scale CNN (MS-CNN) as the baseline, given its strong results on Fisher and 20newsgroups. The setup was replicated from BIBREF5 for comparison. We also see that our result on 20 newsgroups is 0.6% worse than the state-of-the-art."
],
[
"In this paper, we presented two methods for long documents using BERT model: RoBERT and ToBERT. We evaluated our experiments on two classification tasks - customer satisfaction prediction and topic identification - using 3 datasets: CSAT, 20newsgroups and Fisher. We observed that ToBERT outperforms RoBERT on pre-trained BERT features and fine-tuned BERT features for all our tasks. Also, we noticed that fine-tuned BERT performs better than pre-trained BERT. We have shown that both RoBERT and ToBERT improved the simple baselines of taking an average (or the most frequent) of segment-wise predictions for long documents to obtain final prediction. Position embeddings did not significantly affect our models performance, but slightly improved the accuracy on the CSAT task. We obtained the best results on Fisher dataset and good improvements for CSAT task compared to the CNN baseline. It is interesting to note that the longer the average input in a given task, the bigger improvement we observe w.r.t. the baseline for that task. Our results confirm that both RoBERT and ToBERT can be used for long sequences with competitive performance and quick fine-tuning procedure. For future work, we shall focus on training models on long documents directly (i.e. in an end-to-end manner)."
]
],
"section_name": [
"Introduction",
"Related work",
"Method ::: BERT",
"Method ::: Recurrence over BERT",
"Method ::: Transformer over BERT",
"Experiments",
"Experiments ::: CSAT",
"Experiments ::: 20 newsgroups",
"Experiments ::: Fisher",
"Experiments ::: Dataset Statistics",
"Experiments ::: Architecture and Training Details",
"Results",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"31512f80f4e90081e2398c721eba931ddb60f904",
"e58a9356a548f4cbc2e5dfa508e89689a651b301"
],
"answer": [
{
"evidence": [
"We evaluated our models on 3 different datasets:",
"CSAT dataset for CSAT prediction, consisting of spoken transcripts (automatic via ASR).",
"20 newsgroups for topic identification task, consisting of written text;",
"Fisher Phase 1 corpus for topic identification task, consisting of spoken transcripts (manual);"
],
"extractive_spans": [
"CSAT dataset",
"20 newsgroups",
"Fisher Phase 1 corpus"
],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluated our models on 3 different datasets:\n\nCSAT dataset for CSAT prediction, consisting of spoken transcripts (automatic via ASR).\n\n20 newsgroups for topic identification task, consisting of written text;\n\nFisher Phase 1 corpus for topic identification task, consisting of spoken transcripts (manual);"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We evaluated our models on 3 different datasets:",
"CSAT dataset for CSAT prediction, consisting of spoken transcripts (automatic via ASR).",
"20 newsgroups for topic identification task, consisting of written text;",
"Fisher Phase 1 corpus for topic identification task, consisting of spoken transcripts (manual);",
"CSAT dataset consists of US English telephone speech from call centers. For each call in this dataset, customers participated in that call gave a rating on his experience with agent. Originally, this dataset has labels rated on a scale 1-9 with 9 being extremely satisfied and 1 being extremely dissatisfied. Fig. FIGREF16 shows the histogram of ratings for our dataset. As the distribution is skewed towards extremes, we choose to do binary classification with ratings above 4.5 as satisfied and below 4.5 as dissatisfied. Quantization of ratings also helped us to create a balanced dataset. This dataset contains 4331 calls and we split them into 3 sets for our experiments: 2866 calls for training, 362 calls for validation and, finally, 1103 calls for testing.",
"20 newsgroups data set is one of the frequently used datasets in the text processing community for text classification and text clustering. This data set contains approximately 20,000 English documents from 20 topics to be identified, with 11314 documents for training and 7532 for testing. In this work, we used only 90% of documents for training and the remaining 10% for validation. For fair comparison with other publications, we used 53160 words vocabulary set available in the datasets website.",
"Fisher Phase 1 US English corpus is often used for automatic speech recognition in speech community. In this work, we used it for topic identification as in BIBREF3. The documents are 10-minute long telephone conversations between two people discussing a given topic. We used same training and test splits as BIBREF3 in which 1374 and 1372 documents are used for training and testing respectively. For validation of our model, we used 10% of training dataset and the remaining 90% was used for actual model training. The number of topics in this data set is 40."
],
"extractive_spans": [
"CSAT dataset ",
"20 newsgroups",
"Fisher Phase 1 corpus"
],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluated our models on 3 different datasets:\n\nCSAT dataset for CSAT prediction, consisting of spoken transcripts (automatic via ASR).\n\n20 newsgroups for topic identification task, consisting of written text;\n\nFisher Phase 1 corpus for topic identification task, consisting of spoken transcripts (manual);",
"CSAT dataset consists of US English telephone speech from call centers. For each call in this dataset, customers participated in that call gave a rating on his experience with agent. Originally, this dataset has labels rated on a scale 1-9 with 9 being extremely satisfied and 1 being extremely dissatisfied. Fig. FIGREF16 shows the histogram of ratings for our dataset.",
"20 newsgroups data set is one of the frequently used datasets in the text processing community for text classification and text clustering. This data set contains approximately 20,000 English documents from 20 topics to be identified, with 11314 documents for training and 7532 for testing. ",
"Fisher Phase 1 US English corpus is often used for automatic speech recognition in speech community. In this work, we used it for topic identification as in BIBREF3. The documents are 10-minute long telephone conversations between two people discussing a given topic."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"8d6cde4b85d1f225fcc85f18a5c2b7784486dae1",
"975b8c01abdb1f1fbb5a1843cf35515493fc3a81"
],
"answer": [
{
"evidence": [
"In this paper, we propose a method that builds upon BERT's architecture. We split the input text sequence into shorter segments in order to obtain a representation for each of them using BERT. Then, we use either a recurrent LSTM BIBREF10 network, or another Transformer, to perform the actual classification. We call these techniques Recurrence over BERT (RoBERT) and Transformer over BERT (ToBERT). Given that these models introduce a hierarchy of representations (segment-wise and document-wise), we refer to them as Hierarchical Transformers. To the best of our knowledge, no attempt has been done before to use the Transformer architecture for classification of such long sequences.",
"In this paper, we presented two methods for long documents using BERT model: RoBERT and ToBERT. We evaluated our experiments on two classification tasks - customer satisfaction prediction and topic identification - using 3 datasets: CSAT, 20newsgroups and Fisher. We observed that ToBERT outperforms RoBERT on pre-trained BERT features and fine-tuned BERT features for all our tasks. Also, we noticed that fine-tuned BERT performs better than pre-trained BERT. We have shown that both RoBERT and ToBERT improved the simple baselines of taking an average (or the most frequent) of segment-wise predictions for long documents to obtain final prediction. Position embeddings did not significantly affect our models performance, but slightly improved the accuracy on the CSAT task. We obtained the best results on Fisher dataset and good improvements for CSAT task compared to the CNN baseline. It is interesting to note that the longer the average input in a given task, the bigger improvement we observe w.r.t. the baseline for that task. Our results confirm that both RoBERT and ToBERT can be used for long sequences with competitive performance and quick fine-tuning procedure. For future work, we shall focus on training models on long documents directly (i.e. in an end-to-end manner)."
],
"extractive_spans": [
"Transformer over BERT (ToBERT)"
],
"free_form_answer": "",
"highlighted_evidence": [
"We call these techniques Recurrence over BERT (RoBERT) and Transformer over BERT (ToBERT).",
"We observed that ToBERT outperforms RoBERT on pre-trained BERT features and fine-tuned BERT features for all our tasks. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this paper, we propose a method that builds upon BERT's architecture. We split the input text sequence into shorter segments in order to obtain a representation for each of them using BERT. Then, we use either a recurrent LSTM BIBREF10 network, or another Transformer, to perform the actual classification. We call these techniques Recurrence over BERT (RoBERT) and Transformer over BERT (ToBERT). Given that these models introduce a hierarchy of representations (segment-wise and document-wise), we refer to them as Hierarchical Transformers. To the best of our knowledge, no attempt has been done before to use the Transformer architecture for classification of such long sequences.",
"Table TABREF25 presents results using pre-trained BERT features. We extracted features from the pooled output of final transformer block as these were shown to be working well for most of the tasks BIBREF1. The features extracted from a pre-trained BERT model without any fine-tuning lead to a sub-par performance. However, We also notice that ToBERT model exploited the pre-trained BERT features better than RoBERT. It also converged faster than RoBERT. Table TABREF26 shows results using features extracted after fine-tuning BERT model with our datasets. Significant improvements can be observed compared to using pre-trained BERT features. Also, it can be noticed that ToBERT outperforms RoBERT on Fisher and 20newsgroups dataset by 13.63% and 0.81% respectively. On CSAT, ToBERT performs slightly worse than RoBERT but it is not statistically significant as this dataset is small."
],
"extractive_spans": [],
"free_form_answer": "The transformer layer",
"highlighted_evidence": [
"Then, we use either a recurrent LSTM BIBREF10 network, or another Transformer, to perform the actual classification. We call these techniques Recurrence over BERT (RoBERT) and Transformer over BERT (ToBERT).",
"Also, it can be noticed that ToBERT outperforms RoBERT on Fisher and 20newsgroups dataset by 13.63% and 0.81% respectively. On CSAT, ToBERT performs slightly worse than RoBERT but it is not statistically significant as this dataset is small."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
}
],
"nlp_background": [
"infinity",
"infinity"
],
"paper_read": [
"no",
"no"
],
"question": [
"What datasets did they use for evaluation?",
"On top of BERT does the RNN layer work better or the transformer layer?"
],
"question_id": [
"12c50dea84f9a8845795fa8b8c1679328bd66246",
"0810b43404686ddfe4ca84783477ae300fdd2ea4"
],
"question_writer": [
"798ee385d7c8105b83b032c7acc2347588e09d61",
"798ee385d7c8105b83b032c7acc2347588e09d61"
],
"search_query": [
"transformers",
"transformers"
],
"topic_background": [
"familiar",
"familiar"
]
} | {
"caption": [
"Fig. 1. BERT model for classification. H denotes BERT segment representations from last transformer block, P denotes segment posterior probabilities. Figure inspired from [2]",
"Fig. 2. Histogram of customer ratings. Rating 9 corresponds to extremely satisfied and 1 to extremely dissatisfied",
"Table 2. Results using segment representations (H) from a pre-trained BERT (without fine-tuning).",
"Table 3. Results using segment representations (H) from a fine-tuned BERT.",
"Fig. 3. Cumulative distribution of document lengths.",
"Table 1. Dataset statistics. C indicates number of Classes, N the Number of documents, AW the Average number of Words per document and L the Longest document length.",
"Table 4. Comparison of models using fine-tuned BERT segment-level predictions (P) instead of segment representations (H).",
"Table 5. The effect of including positional embeddings in ToBERT model. Fine-tuned BERT segment representations were used for these results.",
"Fig. 4. Comparison of average voting and ToBERT for various document length ranges for Fisher dataset.",
"Table 6. Comparison of our results with previous works."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"4-Table2-1.png",
"4-Table3-1.png",
"4-Figure3-1.png",
"4-Table1-1.png",
"5-Table4-1.png",
"5-Table5-1.png",
"5-Figure4-1.png",
"5-Table6-1.png"
]
} | [
"On top of BERT does the RNN layer work better or the transformer layer?"
] | [
[
"1910.10781-Results-0",
"1910.10781-Conclusions-0",
"1910.10781-Introduction-2"
]
] | [
"The transformer layer"
] | 18 |
1603.09631 | Data Collection for Interactive Learning through the Dialog | This paper presents a dataset collected from natural dialogs which enables to test the ability of dialog systems to learn new facts from user utterances throughout the dialog. This interactive learning will help with one of the most prevailing problems of open domain dialog system, which is the sparsity of facts a dialog system can reason about. The proposed dataset, consisting of 1900 collected dialogs, allows simulation of an interactive gaining of denotations and questions explanations from users which can be used for the interactive learning. | {
"paragraphs": [
[
"Nowadays, dialog systems are usually designed for a single domain BIBREF0 . They store data in a well-defined format with a fixed number of attributes for entities that the system can provide. Because data in this format can be stored as a two-dimensional table within a relational database, we call the data flat. This data representation allows the system to query the database in a simple and efficient way. It also allows to keep the dialog state in the form of slots (which usually correspond to columns in the table) and track it through the dialog using probabilistic belief tracking BIBREF1 , BIBREF2 .",
"However, the well-defined structure of the database of a typical dialog system comes with a high cost of extending it as every piece of new information has to fit the format. This is especially a problem when we one is adapting the system for a new domain because its entities could have different attributes.",
"A dialog system based on knowledge bases offers many advantages. First, the knowledge base, which can be represented as knowledge graph containing entities connected by relations, is much more flexible than the relational database. Second, freely available knowledge bases, such as Freebase, Wikidata, etc. contain an enormous amount of structured information, and are still growing. A dialog system which is capable of working with this type of information would be therefore very useful.",
"In this paper we propose a dataset aiming to help develop and evaluate dialog systems based on knowledge bases by interactive learning motivated in Section \"Motivation\" Section \"Dialog policies\" describes policies that can be used for retrieving information from knowledge bases. In Section \"Dialog Simulation\" is introduced a dialog simulation from natural conversations which we use for evaluation of interactive learning. The dataset collection process allowing the dialog simulation is described in Section \"Dataset Collection Process\" and is followed by properties of the resulting dataset in Section \"Dataset Properties\" Evaluation guidelines with proposed metrics can be found in Section \"Interactive Learning Evaluation\" The planned future work is summarized in Section \"Future Work\" We conclude the paper with Section \"Conclusion\" "
],
[
"From the point of view of dialog systems providing general information from a knowledge base, the most limiting factor is that a large portion of the questions is understood poorly.",
"Current approaches BIBREF3 , BIBREF4 can only achieve around 50% accuracy on some question answering datasets. Therefore, we think that there is a room for improvements which can be achieved by interactively asking for additional information in conversational dialogs with users. This extra information can be used for improving policies of dialog systems. We call this approach the interactive learning from dialogs.",
"We can improve dialog systems in several aspects through interactive learning in a direct interaction with users. First, the most straightforward way obviously is getting the correct answer for questions that the system does not know. We can try to ask users for answers on questions that the system encountered in a conversation with a different user and did not understand it. Second, the system can ask the user for a broader explanation of a question. This explanation could help the system to understand the question and provide the correct answer. In addition, the system can learn correct policy for the question which allows providing answers without asking any extra information for similar questions next time. We hypothesize that users are willing to give such explanations because it could help them to find answers for their own questions. The last source of information that we consider for interactive learning is rephrasing, which could help when the system does know the concept but does not know the correct wording. This area is extensively studied for the purposes of information retrieval BIBREF5 , BIBREF6 .",
"The main purpose of the collected dataset is to enable interactive learning using the steps proposed above and potentially to evaluate how different systems perform on this task."
],
[
"The obvious difficulty when developing a dialog system is finding a way how to identify the piece of information that the user is interested in. This is especially a problem for dialog systems based on knowledge graphs containing a large amount of complex structured information. While a similar problem is being solved in a task of question answering, dialog systems have more possibilities of identifying the real intention of the user. For example, a dialog system can ask for additional information during the dialog.",
"We distinguish three different basic approaches to requesting knowledge bases:",
"A combination of the above approaches is also possible. For example, we can imagine scenarios where the dialog system starts with hand-crafted rules, which are subsequently interactively improved through dialogs with its users. With a growing demand for open domain dialog systems, it shows that creating hand-crafted policies does not scale well - therefore, machine learning approaches are gaining on popularity. Many public datasets for offline learning have been published BIBREF8 , BIBREF7 . However, to our knowledge, no public datasets for interactive learning are available. To fill this gap, we collected a dataset which enables to train interactively learned policies through a simulated interaction with users."
],
[
"Offline evaluation of interactive dialogs on real data is difficult because different policies can lead to different variants of the dialog. Our solution to this issue is to collect data in a way that allows us to simulate all dialog variants possible according to any policy.",
"The dialog variants we are considering for interactive learning differ only in presence of several parts of the dialog. Therefore, we can collect dialogs containing all information used for interactive learning and omit those parts that were not requested by the policy.",
"We collected the dataset (see Section \"Dataset Collection Process\" ) that enables simulation where the policy can decide how much extra information to the question it requests. If the question is clear to the system it can attempt to answer the question without any other information. It can also ask for a broader explanation with a possibility to answer the question afterwards. If the system decides not to answer the question, we can simulate rerouting the question to another user, to try to obtain the answer from them. The principle of simulated user's answer is shown in the Figure 1 .",
"Note that the simulated user’s answer can be incorrect because human users naturally made mistakes. We intentionally keep these mistakes in the dataset because real systems must address them as well."
],
[
"A perfect data collection scenario for our dataset would use real running dialog system providing general information from the knowledge base to real users. This system could then ask for explanations and answers for questions which it is not able to answer.",
"However, getting access to systems with real users is usually hard. Therefore, we used the crowdsourcing platform CrowdFlower (CF) for our data collection.",
"A CF worker gets a task instructing them to use our chat-like interface to help the system with a question which is randomly selected from training examples of Simple questions BIBREF7 dataset. To complete the task user has to communicate with the system through the three phase dialog discussing question paraphrase (see Section \"Interactive Learning Evaluation\" ), explanation (see Section \"Future Work\" ) and answer of the question (see Section \"Conclusion\" ). To avoid poor English level of dialogs we involved CF workers from English speaking countries only. The collected dialogs has been annotated (see Section \"Acknowledgments\" ) by expert annotators afterwards.",
"The described procedure leads to dialogs like the one shown in the Figure 2 ."
],
[
"At beginning of the dialog, the system is requesting the user to paraphrase question that the system does not understand. The main goal of this first phase is to let the user get familiar with the presented question and to get alternative wordings of the posed question."
],
[
"In the second phase, the user is asked for an explanation of the question. We expect the explanation to be different enough from the original question (in terms of the number of common words between the question and the explanation). If the explanation is too similar to the question, the user is notified that their explanation is not broad enough and they must provide a better one."
],
[
"With the valid explanation the dialog turns into the last phase where the user is asked for a correct answer to the original question. The system requires the user to answer with a full sentence. In practical experiments this has shown as a useful decision because it improves system's ability to reveal cheaters. We can simply measure the connection (in terms of common words ) between question and the answer sentence. This allows to reject completely irrelevant answers."
],
[
"The correct answer for question in each dialog is available from Simple questions dataset. Answers are in form of Freebase entities identified by unique id. For evaluation purposes we need information whether dialog contains the answer which is consistent with the entity from Simple questions, the answer with another entity or whether the dialog does not contain any answer. While the annotation process is quite simple, we did not need crowdsourcing for the process."
],
[
"The collection system needs to recognize following dialog acts from user utterances during all phases of the dialog:",
"– user does not want to provide requested information,",
"– user agrees to provide requested information,",
"– user does not know the requested information,",
"– user tries chit chat with the system (hello, bye, who are you...),",
"– none of the above, interpreted as user is giving information requested by the system.",
"Parsing of the dialog acts is made by hand written rules using templates and keyword spotting. The templates and keywords were manually collected from frequent expressions used by CF workers during preparation runs of the dataset collection process (google it, check wikipedia, I would need... $\\rightarrow $ Negate)."
],
[
"We collected the dataset with 1900 dialogs and 8533 turns. Topics discussed in dialogs are questions randomly chosen from training examples of Simple questions BIBREF7 dataset. From this dataset we also took the correct answers in form of Freebase entities.",
"Our dataset consists of standard data split into training, development and test files. The basic properties of those files are as follows:",
"Each file contains complete dialogs enriched by outputs of NLU (see Section \"Natural Language Understanding (NLU)\" ) that were used during the data collection. On top of that, each dialog is labeled by the correct answer for the question and expert annotation of the user answer hint which tells whether the hint points to the correct answer, incorrect answer, or no answer at all.",
"351 of all collected dialogs contain correct answer provided by users and 702 dialogs have incorrect answer. In the remaining 847 dialogs users did not want to answer the question. The collected dialogs also contain 1828 paraphrases and 1539 explanations for 1870 questions.",
"An answer for a question was labeled as correct by annotators only when it was evident to them that the answer points to the same Freebase entity that was present in Simple questions dataset for that particular question. However, a large amount of questions from that dataset is quite general - with many possible answers. Therefore lot of answers from users were labeled as incorrect even though those answers perfectly fit the question. Our annotators identified that 285 of the incorrect answers were answers for such general questions. Example of this situation can be demonstrated by question 'Name an actor' which was correctly answered by 'Brad Pitt is an actor', however, to be consistent with Simple questions annotation, which is 'Kelly Atwood', annotators were forced to mark it as an incorrect answer."
],
[
"A perfect interactive learning model would be able to learn anything interactively from test dialogs during testing, which would allow us to measure progress of the model from scratch over the course of time. However, a development of such model would be unnecessarily hard, therefore we provide training dialogs which can be used for feature extraction and other engineering related to interactive learning from dialogs in natural language. Model development is further supported with labeled validation data for parameter tuning.",
"We propose two evaluation metrics for comparing interactive learning models. First metric (see Section \"Efficiency Score\" ) scores amount of information required by the model, second metric (see Section \"Answer Extraction Accuracy\" ) is accuracy of answer extraction from user utterances. All models must base their answers only on information gained from training dialogs and testing dialogs seen during the simulation so far, to ensure that the score will reflect the interactive learning of the model instead of general question answering."
],
[
"The simulation of dialogs from our dataset allows to evaluate how efficient a dialog system is in using information gained from users. The dialog system should maximize the number of correctly answered questions without requesting too many explanations and answers from users. To evaluate different systems using the collected data, we propose the following evaluation measure: ",
"$$ \nS_D = \\frac{n_c - w_i n_i - w_e n_e - w_a n_a}{|D|}$$ (Eq. 20) ",
"Here, $n_c$ denotes the number of correctly answered questions, $n_i$ denotes the number of incorrectly answered questions, $n_e$ denotes the number of requested explanations, $n_a$ denotes the number of requested answers and $|D|$ denotes the number of simulated dialogs in the dataset. $w_i$ , $w_e$ , $w_a$ are penalization weights.",
"The penalization weights are used to compensate for different costs of obtaining different types of information from the user. For example, gaining broader explanation from the user is relatively simple because it is in their favor to cooperate with the system on a question they are interested in. However, obtaining correct answers from users is significantly more difficult because the system does not always have the chance to ask the question and the user does not have to know the correct answer for it.",
"To make the evaluations comparable between different systems we recommend using our evaluation scripts included with the dataset with following penalization weights that reflect our intuition for gaining information from users:",
"– incorrect answers are penalized significantly,",
"– explanations are quite cheap; therefore, we will penalize them just slightly,",
"– gaining question’s answer from users is harder than gaining explanations."
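As a rough illustration, the score from Eq. 20 can be computed as follows; the concrete weight values are placeholders that merely follow the qualitative ordering above, while the recommended values ship with the dataset's evaluation scripts.

```python
def efficiency_score(n_correct, n_incorrect, n_explanations, n_answers, n_dialogs,
                     w_i=2.0, w_e=0.1, w_a=0.5):
    """S_D = (n_c - w_i*n_i - w_e*n_e - w_a*n_a) / |D|; the weights here are placeholders."""
    return (n_correct - w_i * n_incorrect - w_e * n_explanations - w_a * n_answers) / n_dialogs

# A system that answers 40 of 100 simulated dialogs correctly, answers 10 incorrectly,
# and requests 30 explanations and 20 answers from users:
print(efficiency_score(40, 10, 30, 20, 100))  # 0.07 with the placeholder weights
```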
],
[
"It is quite challenging to find appropriate entity in the knowledge base even though the user provided the correct answer. Therefore, we propose another metric relevant to our dataset. This metric is the accuracy of entity extraction which measures how many times was extracted a correct answer from answer hints provided by the user in dialogs annotated as correctly answered."
],
[
"Our future work will be mainly focused on providing a baseline system for interactive learning which will be evaluated on the dataset. We are also planning improvements for dialog management that is used to gain explanations during the data collection. We believe that with conversation about specific aspects of the discussed question it will be possible to gain even more interesting information from users. The other area of our interest is in possibilities to improve question answering accuracy on test questions of Simple question dataset with the extra information contained in the collected dialogs."
],
[
"In this paper, we presented a novel way how to evaluate different interactive learning approaches for dialog models. The evaluation covers two challenging aspects of interactive learning. First, it scores efficiency of using information gained from users in simulated question answering dialogs. Second, it measures accuracy on answer hints understanding.",
"For purposes of evaluation we collected a dataset from conversational dialogs with workers on crowdsourcing platform CrowdFlower. Those dialogs were annotated with expert annotators and published under Creative Commons 4.0 BY-SA license on lindat. We also provide evaluation scripts with the dataset that should ensure comparable evaluation of different interactive learning approaches."
],
[
"This work was funded by the Ministry of Education, Youth and Sports of the Czech Republic under the grant agreement LK11221 and core research funding, SVV project 260 224, and GAUK grant 1170516 of Charles University in Prague. It used language resources stored and distributed by the LINDAT/CLARIN project of the Ministry of Education, Youth and Sports of the Czech Republic (project LM2015071)."
]
],
"section_name": [
"Introduction",
"Motivation",
"Dialog policies",
"Dialog Simulation",
"Dataset Collection Process",
"Question Paraphrasing",
"Question Explanation",
"Question Answer",
"Annotation",
"Natural Language Understanding (NLU)",
"Dataset Properties",
"Interactive Learning Evaluation",
"Efficiency Score",
"Answer Extraction Accuracy",
"Future Work",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"7c29e584be8349749a3e366af7fbf88f5d9e090d",
"d830334b32de7b2210395e5dd3f59bb86a5f18f3"
],
"answer": [
{
"evidence": [
"However, getting access to systems with real users is usually hard. Therefore, we used the crowdsourcing platform CrowdFlower (CF) for our data collection."
],
"extractive_spans": [
"CrowdFlower"
],
"free_form_answer": "",
"highlighted_evidence": [
"Therefore, we used the crowdsourcing platform CrowdFlower (CF) for our data collection."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"However, getting access to systems with real users is usually hard. Therefore, we used the crowdsourcing platform CrowdFlower (CF) for our data collection.",
"A CF worker gets a task instructing them to use our chat-like interface to help the system with a question which is randomly selected from training examples of Simple questions BIBREF7 dataset. To complete the task user has to communicate with the system through the three phase dialog discussing question paraphrase (see Section \"Interactive Learning Evaluation\" ), explanation (see Section \"Future Work\" ) and answer of the question (see Section \"Conclusion\" ). To avoid poor English level of dialogs we involved CF workers from English speaking countries only. The collected dialogs has been annotated (see Section \"Acknowledgments\" ) by expert annotators afterwards."
],
"extractive_spans": [],
"free_form_answer": "The crowdsourcing platform CrowdFlower was used to obtain natural dialog data that prompted the user to paraphrase, explain, and/or answer a question from a Simple questions BIBREF7 dataset. The CrowdFlower users were restricted to English-speaking countries to avoid dialogs with poor English.",
"highlighted_evidence": [
"Therefore, we used the crowdsourcing platform CrowdFlower (CF) for our data collection.\n\nA CF worker gets a task instructing them to use our chat-like interface to help the system with a question which is randomly selected from training examples of Simple questions BIBREF7 dataset. To complete the task user has to communicate with the system through the three phase dialog discussing question paraphrase (see Section \"Interactive Learning Evaluation\" ), explanation (see Section \"Future Work\" ) and answer of the question (see Section \"Conclusion\" ). To avoid poor English level of dialogs we involved CF workers from English speaking countries only. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"f320efb1fbb744616e420aaf8da0f9622b75b2ed",
"ea4394112c1549185e6b763d6f36733a9f2ed794"
]
},
{
"annotation_id": [
"316a0507040c06a18e68201ee4e3172e4270f3cf",
"b288957fd12a59ddeebd39ca2cf5d61e3c09be88"
],
"answer": [
{
"evidence": [
"We collected the dataset with 1900 dialogs and 8533 turns. Topics discussed in dialogs are questions randomly chosen from training examples of Simple questions BIBREF7 dataset. From this dataset we also took the correct answers in form of Freebase entities."
],
"extractive_spans": [],
"free_form_answer": "4.49 turns",
"highlighted_evidence": [
"We collected the dataset with 1900 dialogs and 8533 turns. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We collected the dataset with 1900 dialogs and 8533 turns. Topics discussed in dialogs are questions randomly chosen from training examples of Simple questions BIBREF7 dataset. From this dataset we also took the correct answers in form of Freebase entities."
],
"extractive_spans": [],
"free_form_answer": "4.5 turns per dialog (8533 turns / 1900 dialogs)",
"highlighted_evidence": [
"We collected the dataset with 1900 dialogs and 8533 turns. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"f320efb1fbb744616e420aaf8da0f9622b75b2ed",
"ea4394112c1549185e6b763d6f36733a9f2ed794"
]
}
],
"nlp_background": [
"infinity",
"infinity"
],
"paper_read": [
"no",
"no"
],
"question": [
"How was this data collected?",
"What is the average length of dialog?"
],
"question_id": [
"455d4ef8611f62b1361be4f6387b222858bb5e56",
"bc16ce6e9c61ae13d46970ebe6c4728a47f8f425"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"dialog",
"dialog"
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Unknown questions can be rerouted between users. We can, for example, use chitchat to get correct answers. The challenge is in generalizing the collected question-answer pairs using the knowledge base in order to apply them to previously unseen questions.",
"Table 1: Table of turn and dialog counts for dataset splits."
],
"file": [
"2-Figure1-1.png",
"3-Table1-1.png"
]
} | [
"How was this data collected?",
"What is the average length of dialog?"
] | [
[
"1603.09631-Dataset Collection Process-1",
"1603.09631-Dataset Collection Process-2"
],
[
"1603.09631-Dataset Properties-0"
]
] | [
"The crowdsourcing platform CrowdFlower was used to obtain natural dialog data that prompted the user to paraphrase, explain, and/or answer a question from a Simple questions BIBREF7 dataset. The CrowdFlower users were restricted to English-speaking countries to avoid dialogs with poor English.",
"4.5 turns per dialog (8533 turns / 1900 dialogs)"
] | 19 |
2001.02284 | Multipurpose Intelligent Process Automation via Conversational Assistant | Intelligent Process Automation (IPA) is an emerging technology with a primary goal to assist the knowledge worker by taking care of repetitive, routine and low-cognitive tasks. Conversational agents that can interact with users in a natural language are potential application for IPA systems. Such intelligent agents can assist the user by answering specific questions and executing routine tasks that are ordinarily performed in a natural language (i.e., customer support). In this work, we tackle a challenge of implementing an IPA conversational assistant in a real-world industrial setting with a lack of structured training data. Our proposed system brings two significant benefits: First, it reduces repetitive and time-consuming activities and, therefore, allows workers to focus on more intelligent processes. Second, by interacting with users, it augments the resources with structured and to some extent labeled training data. We showcase the usage of the latter by re-implementing several components of our system with Transfer Learning (TL) methods. | {
"paragraphs": [
[
"Robotic Process Automation (RPA) is a type of software bots that simulates hand-operated human activities like entering data into a system, registering into accounts, and accomplishing straightforward but repetitive workflows BIBREF0. However, one of the drawbacks of RPA-bots is their susceptibility to changes in defined scenarios: being designed for a particular task, the RPA-bot is usually not adaptable to other domains or even light modifications in a workflow BIBREF0. This inability to readjust to shifting conditions gave rise to Intelligent Process Automation (IPA) systems. IPA-bots combine RPA with Artificial Intelligence (AI) and thus are able to execute more cognitively demanding tasks that require i.a. reasoning and language understanding. Hence, IPA-bots advanced beyond automating shallow “click tasks” and can perform jobs more intelligently – by means of machine learning algorithms. Such IPA-systems undertake time-consuming and routine tasks, and thus enable smart workflows and free up skilled workers to accomplish higher-value activities.",
"One of the potential applications of Natural Language Processing (NLP) within the IPA domain are conversational interfaces that enable human-to-machine interaction. The main benefit of conversational systems is their ability to give attention to several users simultaneously while supporting natural communication. A conventional dialogue system comprises multiple stages and involves different types of NLP subtasks, starting with Natural Language Understanding (NLU) (e.g., intent classification, named entity extraction) and going towards dialogue management (i.e., determining the next possible bot action, considering the dialogue history) and response generation (e.g., converting the semantic representation of the next system action into a natural language utterance). A typical dialogue system for IPA purposes undertakes shallow customer support requests (e.g., answering of FAQs), allowing human workers to focus on more sophisticated inquiries.",
"Recent research in the dialogue generation domain is conducted by employing AI-techniques like machine and deep learning BIBREF1, BIBREF2. However, conventional supervised methods have limitations when applied to real-world data and industrial tasks. The primary challenge here refers to a training phase since a robust model requires an extensive amount of structured and labeled data, that is often not available for domain-specific problems. Especially if it concerns dialogue data, which has to be appropriately structured as well as labeled and annotated with additional information. Therefore, despite the popularity of deep learning end-to-end models, one still needs to rely on conventional pipelines in practical dialogue engineering, especially while setting a new domain. However, with few structured data available, transfer learning methods can be used. Such algorithms enable training of the systems with less or even a minimal amount of data, and are able to transfer the knowledge obtained during the training on existing data to the unseen domain."
],
[
"This paper addresses the challenge of implementing a dialogue system for IPA purposes within the practical e-learning domain with the initial absence of training data. Our contributions within this work are as follows:",
"We implemented a robust dialogue system for IPA purposes within the practical e-learning domain and within the conditions of missing training (dialogue) data (see Section SECREF3 – Section SECREF4). The system is currently deployed at the e-learning platform.",
"The system has two purposes:",
"First, it reduces repetitive and time-consuming activities and, therefore, allows workers of the e-learning platform to focus solely on complex questions;",
"Second, by interacting with users, it augments the resources with structured and to some extent labeled training data for further possible implementation of learnable dialogue components (see Section SECREF5);",
"We showcased that even a small amount of structured dialogues could be successfully used for re-training of dialogue units by means of Transfer Learning techniques (see Section SECREF6)."
],
[
"OMB+ is a German e-learning platform that assists students who are preparing for an engineering or computer science study at a university. The central purpose of the course is to support students in reviving their mathematical skills so that they can follow the upcoming university courses. The platform is thematically segmented into 13 sections and includes free mathematical classes with theoretical and practical content. Besides that, OMB+ provides a possibility to get assistance from a human tutor via a chat interface. Usually, the students and tutors interact in written form, and the language of communication is German. The current problem of the OMB+ platform is that the number of students grows every year, but to hire more qualified human tutors is challenging and expensive. This results in a more extended waiting period for students until their problems can be considered.",
"In general, student questions can be grouped into three main categories: organizational questions (e.g., course certificate), contextual questions (e.g., content, theorem) and mathematical questions (e.g., exercises, solutions). To assist a student with a mathematical question, a tutor has to know the following regular information: What kind of topic (or sub-topic) a student has a problem with. At which examination mode (i.e., quiz, chapter level training or exercise, section level training or exercise, or final examination) the student is working right now. And finally, the exact question number and exact problem formulation. This means that a tutor has to request the same information every time a new dialogue opens, which is very time consuming and could be successfully solved by means of an IPA dialogue bot."
],
[
"The main objective of the proposed system is to interact with students at the beginning of every conversation and gather information on the topic (and sub-topic), examination mode and level, question number and exact problem formulation. Therefore, the system saves time for tutors and allows them to handle solely complex mathematical questions. Besides that, the system is implemented in a way such that it accumulates labeled dialogues in the background and stores them in a structured form."
],
[
"Figure FIGREF82 (see the Appendix) displays the entire dialogue flow. In a nutshell, the system receives a user input, analyzes it and extracts information, if provided. If some of the required information is missing, the system asks the student to provide it. When all the information is collected, it will be automatically validated and subsequently forwarded to a human tutor, who then can directly proceed with the assistance. In the following we will describe the central components of the system.",
"",
"OMB+ Design: Figure FIGREF52 (see Section SECREF11 of the Appendix) illustrates the internal structure and design of the OMB+ platform. It has topics and sub-topics, as well as four examination modes. Each topic (Figure FIGREF52, tag 1) corresponds to a chapter level and always has sub-topics (Figure FIGREF52, tag 2), which correspond to a section level. Examination modes training and exercise are ambiguous, because they correspond to either a chapter (Figure FIGREF52, tag 3) or a section (Figure FIGREF52, tag 5) level, and it is important to differentiate between them, since they contain different types of content. The mode final examination (Figure FIGREF52, tag 4) always corresponds to a chapter level, whereas quiz (Figure FIGREF52, tag 5) can belong only to a section level. According to the design of the OMB+ platform, there are several ways of how a possible dialogue flow can proceed.",
"",
"Preprocessing: In a natural language dialogue, a user may respond in many different ways, thus, the extraction of any data from user-generated text is a challenging task due to a number of misspellings or confusable spellings (e.g., Exercise 1.a, Exercise 1 (a)). Therefore, to enable a reliable extraction of entities, we preprocessed and normalized (e.g., misspellings, synonyms) every user input before it was sent to the Natural Language Understanding (NLU) module. The preprocessing includes following steps:",
"lowercasing and stemming of all words in the input;",
"removal of German stop words and punctuation;",
"all mentions of $x$ in mathematical formulas were removed to avoid confusion with roman number 10 (“$X$”);",
"in a combination of the type: word “Chapter/Exercise” + digit written as a word (i.e “first”, “second”), word was replaced with a digit (“in first Chapter” $\\rightarrow $ “in Chapter 1”), roman numbers were replaced with digits as well (“Chapter IV” $\\rightarrow $ “Chapter 4”).",
"detected ambiguities were normalized (e.g., “Trainingsaufgabe” $\\rightarrow $ “Training”);",
"recognized misspellings resp. typos were corrected (e.g., “Difeernzialrechnung” $\\rightarrow $ “Differentialrechnung')'",
"permalinks were parsed and analyzed. From each permalink it is possible to extract topic, examination mode and question number;",
"",
"Natural Language Understanding (NLU): We implemented an NLU unit utilizing handcrafted rules, Regular Expressions (RegEx) and Elasticsearch (ES) API. The NLU module contains following functionalities:",
"Intent classification: As we mentioned above, student questions can be grouped into three main categories: Organizational questions, contextual questions and mathematical questions. To classify the input message by its category or so-called intent, we utilized key-word information predefined by handcrafted rules. We assumed that particular words are explicit and associated with a corresponding intent. If no intent could be classified, then it is assumed that the NLU unit was not capable of understanding and the intent is interpreted as unknown. In this case, the system requests the user to provide an intent manually (by picking one from the mentioned three options). The questions from organizational and theoretical categories are directly delivered to a human tutor, while mathematical questions are processed by the automated system for further analysis.",
"Entity Extraction: Next, the system attempts to retrieve the entities from a user message on the topic (and sub-topic), examination mode and level, and question number. This part is implemented using Elasticsearch (ES) and RegEx. To enable the use of ES, we indexed the OMB+ site to an internal database. Besides indexing the topics and titles, we also provided information on possible synonyms or writing styles. We additionally filed OMB+ permalinks, which direct to the site pages. To query the resulting database, we utilized the internal Elasticsearch multi match function and set the minimum should match parameter to $20\\%$. This parameter defines the number of terms that must match for a document to be considered relevant. Besides that, we adopted fuzziness with the maximum edit distance set to 2 characters. The fuzzy query uses similarity based on Levenshtein edit distance BIBREF3. Finally, the system generates a ranked list of possible matching entries found in the database within the predefined relevance threshold (we set it to $\\theta $=$1.5$). We pick the most probable entry as the correct one and extract the corresponding entity from the user input.",
"To summarize, the NLU module receives the user input as a preprocessed text and checks it across all predefined RegEx statements and for a match in the Elasticsearch database. Every time the entity is extracted, it is entered in the Information Dictionary (ID). The ID has the following six slots to be filled in: topic, sub-topic, examination level, examination mode, question number, and exact problem formulation.",
"",
"Dialogue Manager consists of the Dialogue State Tracker (DST), that maintains a representation of the current dialog state, and of the Policy Learner (PL) that defines the next system action. In our model, the system's next action is defined by the state of the previously obtained information stored in the Information Dictionary. For instance, if the system recognizes that the student works on the final examination, it also understands (defined by the logic in the predefined rules) that there is no need to ask for sub-topic because the final examination always corresponds to a chapter level (due to the design of OMB+ platform). If the system identifies that the user has difficulties in solving a quiz, it has to ask for the corresponding topic and sub-topic if not yet provided by a user (because the quiz always refers to a section level). To determine all of the potential dialogue flows, we implemented Mutually Exclusive Rules (MER), which indicate that two events $e_{1}$ and $e_{2}$ are mutually exclusive or disjoint if they cannot both occur at the same time (thus, the intersection of these events is empty: $P(A \\cap B) = 0$). Additionally, we defined transition and mapping rules. The formal explanation of rules can be found in Section SECREF12 of the Appendix. Following the rules, we generated 56 state transitions, which define next system actions. Being on a new dialogue state, the system compares the extracted (i.e., updated) information in the ID with the valid dialogue states (see Section SECREF12 of the Appendix for the explanation of the validness) and picks the mapped action as the next system's action.",
"The abovementioned rules are intended to support the current design of the OMB+ learning platform. However, additional MERs could be added to generate new transitions. Exemplifying this, we conducted experiments with the former design of the platform, where all the topics, except for the first one, had only sub-topics, whereas the first topic had both sub-topics and sub-sub-topics. We could effortlessly generate the missing transitions with our approach. The number of possible transitions, in this case, increased from 56 to 117.",
"",
"Meta Policy: Since the system is intended to operate in a real-world scenario, we had to implement additional policies that control the dialogue flow and validate the system's accuracy. Below we describe these policies:",
"Completeness of the Information Dictionary: In this step, the system validates the completeness of the ID, which is defined by the number of obligatory slots filled in the information dictionary. There are 6 distinct cases when the ID is considered to be complete (see Section SECREF13 of the Appendix). For instance, if a user works on a final examination, the system does not has to request a sub-topic or examination level. Thus, the ID has to be filled only with data for a topic, examination mode, and question number, whereas, if the user works on a quiz, the system has to gather information about the topic, sub-topic, examination mode, and the question number. Once the ID is complete, it is provided to the verification step. Otherwise, the system proceeds according to the next action. The system extracts the information in each dialogue step, and thus if the user provides updated information on any subject later in the dialogue, the corresponding slot will be updated in the ID.",
"Verification Step: Once the system has obtained all the necessary information (i.e., ID is complete), it proceeds to the final verification. In that step, the collected data is shown to the student in the session. The student is asked to verify the correctness of the collected data and if some entries are wrong, to correct them. The ID is, where necessary, updated with the user-provided data. This procedure repeats until the user confirms the correctness of the assembled data.",
"Fallback Policy: In some cases, the system fails to derive information from a student query, even if the student provides it. It is due to the Elasticsearch functionality and previously unseen RegEx patterns. In these cases, the system re-asks a user and attempts to retrieve information from a follow-up query. The maximum number of re-ask attempts is set to three times ($r=3$). If the system is unable to extract information after three times, the user input is considered as the ground truth and saved to the appropriate slot in ID. An exception to this rule applies where the user has to specify the intent manually. In this case, after three unclassified attempts, a session is directly handed over to a human tutor.",
"Human Request: In each dialogue state, a user can switch to a human tutor. For this, a user can enter the human key-word. Hence, every user message is additionally analyzed for the presence of this key-word.",
"",
"Response Generation: In this module, the semantic representation of the system's next action is transformed into natural language. Hence, each possible action is mapped to precisely one utterance, which is stored in the templates. Some of the predefined responses are fixed (i.e., “Welches Kapitel bearbeitest du gerade?”), others have placeholders for system values. In the latter case, the utterance can be formulated dependent on the actual ID. The dialogue showcases can be found in Section SECREF14 of the Appendix."
],
[
"In order to get the feedback on the quality, functionality, and usefulness of the introduced model, we evaluated it in two ways: first, with an automated method using 130 manually annotated dialogues, to prove the robustness of the system, and second – with human tutors from OMB+ – to investigate the user experience. We describe the details as well as the most common errors below."
],
[
"To conduct an automated evaluation, we manually created a dataset with 130 dialog scenarios. We took real first user questions and predefined possible user responses, as well as gold system answers. These scenarios cover most frequent dialogues that we previously saw in human-to-human dialogue data. We tested our system by implementing a self-chatting evaluation bot. The evaluation cycle could be described as follows:",
"The system receives the first question from a predefined dialogue via an API request, preprocesses and analyzes it in order to extract entities.",
"Then it estimates the applicable next action, and responds according to it.",
"This response is then compared to the system's gold answer: If the predicted answer is correct, then the system receives the next predefined user input from the dialog and responds again, as defined above. This procedure continues until the dialog terminates (i.e., ID is complete). Otherwise, the system fails, reporting the unsuccessful case number.",
"Our final system successfully passed this evaluation for all 130 cases."
],
[
"To evaluate our system on irregular examples, we conducted experiments with human tutors from the OMB+ platform. The tutors are experienced regarding rare or complex questions, ambiguous answers, misspellings and other infrequent but still very relevant problems, which occur during a dialogue. In the following, we investigate some common errors and make additional observations.",
"",
"Misspellings and confusable spellings occur quite often in the user-generated text, and since we attempt to let the conversation remain very natural from the user side and thus, cannot require formal writing, we have to deal with various writing issues. One of the most frequent problems is misspellings. German words are generally long and can be complicated, and since users type quickly, this often leads to the wrong order of characters within words. To tackle this challenge, we used fuzzy match within ES. However, the maximum allowed edit distance in Elasticsearch is set to 2 characters. This means, that all the misspellings beyond this threshold could not be accurately recognized by ES (e.g., Differentialrechnung vs Differnezialrechnung). Another characteristic example would be the writing of the section or question number. The equivalent information can be written in several distinct ways, which has to be considered in our RegEx unit (e.g., Exercise 5 a, Exercise V a, Exercise 5 (a)). A similar problem occurs with confusable spelling (i.e.: Differentialrechnung vs Differentialgleichung). We analyzed the cases mentioned above and added some of the most common issues to the ES database or handled them with RegEx during the preprocessing step.",
"",
"Elasticsearch Threshold: In some cases, the system failed to extract information, although the user provided it. In other cases, ES extracts information that was not mentioned in a user query at all. That occurs due to the relevancy scoring algorithm of Elasticsearch, where a document's score is a combination of textual similarity and other metadata based scores. Our analysis revealed that ES mostly fails to extract the information if the sentence (i.e., user message) is quite short (e.g., 5 words). To overcome this difficulty, we combined the current input $u_{t}$ with the dialog history. This step eliminated the problem and improved the retrieval quality. To solve the case where Elasticsearch extracts incorrect information (or information that was not mentioned in a query) is more challenging. We discovered that the problem comes from short words or sub-words (e.g., suffixes, prefixes), which ES considers to be credible enough. The Elasticsearch documentation suggests getting rid of stop words to eliminate this behavior. However, this did not improve the search in our case. Also, fine-tuning of ES parameters such as the relevance threshold, prefix length and minimum should match parameter did not bring significant improvements. To cope with this problem, we implemented a verification step, where a user is given a chance to correct the erroneously retrieved information.",
"The overall feedback from the tutors included reduced repetitive activities as well as reduced waiting times for students until their questions were processed. Also, tutors reported that the rate of cancelled sessions (switching to a human tutor) is rather low."
],
[
"As we already mentioned, our system attempts to support the human tutor by assisting students, but it also collects structured and labeled training data in the background. In a trial run of the rule-based system, we were able to accumulate a toy-dataset with training dialogues. The assembled dialogues have the following format:",
"Plain dialogues with unique dialogue indexes;",
"Plain Information Dictionary information (e.g., extracted entities) collected for the whole dialogue;",
"Pairs of questions (i.e., user requests) and responses (i.e., bot responses) with the unique dialogue- and turn-indexes;",
"Triples in the form of (User Request, Next Action, Response). Information on the next system's action could be employed to train a Dialogue Manager unit with (deep-) machine learning algorithms;",
"For each state in the dialogue, we saved the entities that the system was able to extract from the provided user query, along with their position in the utterance. This information could be used to train a custom, domain specific Named Entity Recognition model."
],
[
"As we mentioned before, there are many cases, especially in the industry, where the labeled and structured data is not directly available. Collecting and labeling such data is often a tedious and time-consuming task. Thus, algorithms that enable training of the systems with less or even a minimal amount of data are highly required. Such algorithms can transfer the knowledge obtained during the training on existing data to the unseen domain. They are, therefore, one of the potential solutions for industrial problems.",
"Once we assembled a dataset of structured data via our rule-based system, we re-implemented two out of three central dialogue components in our conversational assistant with deep learning methods. Since the available data was collected in a trial-run and thus the obtained dataset was rather small to train a machine learning model from scratch, we utilized the Transfer Learning approach, and fine-tuned the existing pre-trained model (i.e., BERT) for our target domain and data.",
"For the experiments, we defined two tasks:",
"First, we studied the Named Entity Recognition problem in a custom domain setting. We defined a sequence labeling task and employed the BERT model BIBREF4. We applied the model to our dataset and fine-tuned it for six (6) domain-specific (i.e., e-learning) entities and one (1) “unknown” label.",
"Second, we investigated the effectiveness of BERT for the dialogue manager core. For that experiment, we defined a classification task and applied the model to predict the system's Next Action for every given user utterance in a conversation. We then computed the macro F-score for 13 possible actions and an average dialogue accuracy.",
"Finally, we verified that the applied model performed well on both tasks: We achieved the performance of $0.93$ macro F1 points for Named Entity Recognition (NER) and $0.75$ macro F1 points for the Next Action Prediction (NAP) task. We, therefore, conclude that both NER and NAP components could be employed to substitute or extend the existing rule-based modules.",
"",
"Data & Descriptive Statistics: The dataset that we collected during the trial-run consists of 300 structured dialogues with the average length of a dialogue being six (6) utterances. Communication with students was performed in the German language. Detailed general statistics can be found in Table TABREF41.",
"",
"",
"Named Entity Recognition: We defined a sequence labeling task to extract custom entities from user input. We assumed seven (7) possible entities (see Table TABREF43) to be recognized by the model: topic, subtopic, examination mode and level, question number, intent, as well as the entity other for remaining words in the utterance. Since the data obtained from the rule-based system already contains information on the entities extracted from each user query (i.e., by means of Elasticsearch), we could use it to train a domain-specific NER unit. However, since the user-input was informal, the same information could be provided in different writing styles. That means that a single entity could have different surface forms (e.g., synonyms, writing styles) (although entities that we extracted from the rule-based system were all converted to a universal standard, e.g., official chapter names). To consider all of the variable entity forms while post-labeling the original dataset, we defined generic entity names (e.g., chapter, question nr.) and mapped variations of entities from the user input (e.g., Chapter = [Elementary Calculus, Chapter $I$, ...]) to them.",
"",
"Next Action Prediction: We defined a classification problem to predict the system's next action according to the given user input. We assumed 13 custom actions (see Table TABREF42) that we considered being our labels. In the conversational dataset, each input was automatically labeled by the rule-based system with the corresponding next action and the dialogue-id. Thus, no additional post-labeling was required. We investigated two settings:",
"Default Setting: Using only a user input and the corresponding label (i.e., next action) without additional context. By default, we run all of our experiments in this setting.",
"Extended Setting: Using a user input, a corresponding next action, and a previous system action as a source of additional context. For this setting, we run an experiment with the best performing model from the default setting.",
"The overall dataset consists of 300 labeled dialogues, where 200 (with 1161 utterances) of them were employed for training, and 100 for evaluation and test sets (50 dialogues with about 300 utterances for each set respectively).",
"",
"Model Settings: For the NER task we conducted experiments with German and multilingual BERT implementations. Since in the German language the capitalization of words plays a significant role, we run our tests on the capitalized input, while keeping the original punctuation. Hence, we employed the available base model for both multilingual and German BERT implementations in the cased version. We set the learning rate for both models to $1e-4$ and the maximum length of the tokenized input was set to 128 tokens. We run the experiments multiple times with different seeds for a maximum of 50 epochs with the training batch size set to 32. We utilized AdamW as the optimizer and employed early stopping, if the performance did not change significantly after 5 epochs.",
"For the NAP task we conducted experiments with German and multilingual BERT implementations as well. Here, we investigated the performance of both capitalized and lowercased input, as well as plain and preprocessed data. For the multilingual BERT, we employed the base model in both cased and uncased variations. For the German BERT, we utilized the base model in the cased variation only. For both models, we set the learning rate to $4e-5$, and the maximum length of the tokenized input was set to 128 tokens. We run the experiments multiple times with different seeds for a maximum of 300 epochs with the training batch size set to 32. We utilized AdamW as the optimizer and employed early stopping, if the performance did not change significantly after 15 epochs.",
"",
"Evaluation and Discussion: For the evaluation, we computed word-level macro F1 score for the NER task and utterance-level macro F1 score for the NAP task. The word-level F1 is estimated as the average of the F1 scores per class, each computed from all words in the evaluation and test sets. The results for the NER task are depicted in Table TABREF49. For utterance-level F1, a single label (i.e., next action) is obtained for the whole utterance. The results for the NAP task are presented in Table TABREF48. We additionally computed average dialogue accuracy for the best performing NAP models. This score denotes how well the predicted next actions match the gold next actions and thus form the dialogue flow within each conversation. The average dialogue accuracy was computed for 50 dialogues in the evaluation and test sets respectively. The results are displayed in Table TABREF50.",
"The obtained results for the NER task revealed that German BERT performed significantly better than the multilingual BERT model. The performance of the custom NER unit is at $0.93$ macro F1 points for all possible named entities (see Table TABREF49). In contrast, for the NAP task, the multilingual BERT model obtained better performance than the German BERT model. Here, the best performing system in the default setting achieved a macro F1 of $0.677$ points for 14 possible labels, whereas the model in the extended setting performed better – its highest macro F1 score is $0.752$ for the same amount of labels (see Table TABREF48). Considering the dialogue accuracy, the extended system trained with multilingual BERT achieved better results than the default one with $0.801$ accuracy points compared to $0.724$ accuracy points for the test set (see Table TABREF50). The overall observation for the NAP is that the capitalized setting improved the performance of the model, whereas the inclusion of punctuation has not positively influenced the results."
],
[
"After the evaluation step, we analyzed the cases, where the model failed to predict the correct action or labeled the named entity span erroneously. Below we describe the most common errors for both tasks.",
"Next Action Prediction: One of the most frequent errors in the default model was the mismatch between two consecutive actions – namely, the action Question Number and Subtopic. That is due to the order of these actions in the conversational flow: Occurrence of both actions in the dialogue is not strict and substantially depends on the previous system action. However, the analysis of the extended model revealed that the introduction of additional context in the form of the previous action improved the performance of the system in this particular case by about $60\\%$.",
"Named Entity Recognition: The failing cases include mismatches between the tags “chapter” and “other”, and the tags “question number” and “other”. This type of error arose due to the imperfectly labeled span of a multi-word named entity. In such cases, the first or last word in the named entity was excluded from the span and erroneously labeled with the tag “other”."
],
[
"Individual components of a particular dialogue system could be implemented using a different kind of approach, starting with entirely rule- and template-based methods, and going towards hybrid approaches (using learnable components along with handcrafted units) and end-to-end trainable machine learning methods.",
"",
"Rule-based Approaches: Though many of the latest research approaches handle NLU and NLG units by using statistical NLP models BIBREF5, BIBREF6, BIBREF7, most of the industrially deployed dialogue systems still use manual features or handcrafted rules for the state and action prediction, intent classification, and slot filling tasks BIBREF8, BIBREF9. The rule-based approach ensures robustness and stable performance that is crucial for industrial systems that interact with a large number of users simultaneously. However, it is highly expensive and time-consuming to deploy a real dialogue system built in this manner. The major disadvantage is that the usage of handcrafted systems is restricted to a specific domain, and possible domain adaptation requires extensive manual engineering.",
"",
"End-to-End Learning Approaches: Due to the recent advance of end-to-end neural generative models BIBREF10, many efforts have been made to build an end-to-end trainable architecture for dialogue systems. Rather than using the traditional pipeline, an end-to-end model is conceived as a single module BIBREF8. Despite having better adaptability compared to any rule-based system and being easy to train, end-to-end approaches remain unattainable for commercial conversational agents operating on real-world data. A well and carefully constructed task-oriented dialogue system in a known domain using handcrafted rules and predefined responses, still outperforms the end-to-end systems due to its robustness BIBREF11, BIBREF12.",
"",
"Hybrid Approaches: Though end-to-end learning is an attractive solution for dialogue systems, current techniques are data-intensive and require large amounts of dialogues to learn simple actions. To overcome this difficulty, BIBREF13 (BIBREF13) introduce Hybrid Code Networks (HCNs), which is an ensemble of retrieval and trainable units. The authors report, that compared to existing end-to-end methods, their approach considerably reduces the amount of data required for training BIBREF13. Hybrid models appear to replace the established rule- and template-based approaches which are currently utilized in an industrial setting."
],
[
"In this work, we implemented a dialogue system for Intelligent Process Automation purposes that simultaneously solves two problems: First, it reduces repetitive and time-consuming activities and, therefore, allows workers of the e-learning platform to focus on solely mathematical and hence more cognitively demanding questions. Second, by interacting with users, it augments the resources with structured and labeled training data for further possible implementation of learnable dialogue components. The realization of such a system was connected with many challenges. Among others were missing structured data, ambiguous or erroneous user-generated text and the necessity to deal with already existing corporate tools and their design. The introduced model allowed us to accumulate structured and to some extent labeled data without any special efforts from the human (i.e., tutors) side (e.g., manual annotation of existing dialogues, change of the conversational structure). Once we collected structured dialogues, we were able to re-train specific components of the system with deep learning methods and achieved reasonable performance for all proposed tasks.",
"We believe the obtained results are rather good, considering a relatively small amount of data we utilized to fine-tune the pre-trained model. We, therefore, conclude that both Next Action Prediction and Named Entity Recognition components could be employed to substitute or extend the existing rule-based modules. Rule-based units are restricted in their capabilities and could be hardly adaptable to novel patterns, whereas the trainable units generalize better, which we believe could reduce the number of erroneous predictions in case of unexpected dialogue behavior. Furthermore, to increase the overall robustness, both rule-based and trainable components could be used synchronously as a hybrid model: in case when one system fails, the dialogue proceeds on the prediction obtained from the other model."
],
[
"The core of the rule-based model is a dialogue manager that determines the current state of the conversation and the possible next action. Rule-based systems are generally considered to be hardly adaptable to new domains; however, our dialogue manager proved to be flexible to slight modifications in a workflow. One of the possible directions of future work would be the investigation of the general adaptability of the dialogue manager core to other scenarios and domains (e.g., different course). Further investigation could be towards the multi-language modality for the re-implemented units. Since the OMB+ platform also supports English and Chinese, it would be interesting to examine whether the simple translation from target language (i.e., English, Chinese) to source language (i.e., German) would be sufficient to employ already-assembled dataset and pre-trained units."
],
[
"We gratefully acknowledge the OMB+ team for the collaboration and especially thank Ruedi Seiler and Michael Heimann for their helpful feedback and technical support. We are indebted to the tutors for the evaluation of the system, as well as to the anonymous reviewers for their valuable comments."
],
[
"Figure FIGREF52 presents an example of the OMB+ Online Learning Platform."
],
[
"Assume a list of all theoretically possible dialogue states: $S$ = [topic, sub-topic, training, exercise, chapter level, section level, quiz, final examination, question number] and for each element $s_{n}$ in $S$ is true that:",
"This would give us all general (resp. possible) dialogue states without reference to the design of the OMB+ platform. However, to make the dialogue states fully suitable for the OMB+, from the general states, we take only those, which are valid. To define the validness of the state, we specify the following five Mutually Exclusive Rules (MER):",
"Rule ($R_{1}$) in Table TABREF54 denotes admissible configurations for topic and means that we are either given a topic ($T$) or not ($\\lnot T$).",
"Rule ($R_{2}$) in Table TABREF55 denotes that either no information on the examination mode is given, or examination mode is Training ($TR$) or Exercise ($E$) or Quiz ($Q$) or Final Examination ($FE$), but not more than one mode at the same time.",
"Rule ($R_{3}$) in Table TABREF56 indicates that either no level information is provided, or the level corresponds to chapter level ($CL$) or to section level ($SL$), but not to both at the same time.",
"Rule ($R_{4}$) in Table TABREF57 means that Training ($TR$) and Exercise ($E$) examination modes can either belong to chapter level ($CL$) or to section level ($SL$), but not to both at the same time.",
"Rule ($R_{5}$) in Table TABREF58 symbolizes that we could be either given only a topic ($T$) or the combination of topic and sub-topic ($ST$) at the same time, or only sub-topic, or no information on this point at all.",
"We then define a valid dialogue state, as a dialogue state that meets all requirements of the abovementioned rules:",
"After we get the valid states for our dialogues, we want to make a mapping from each valid dialogue state to the next possible systems action. For that, we first define five transition rules :",
"means that no topic ($T$) is found in the ID (i.e., could not be extracted from user input).",
"indicates that no examination mode ($EM$) is found in the ID.",
"denotes that the extracted examination mode ($EM$) is either Training ($TR$) or Exercise ($E$).",
"means that no sub-topic ($ST$) is provided by a user, but ID either already contains the combination of topic ($T$), training ($TR$) and section level ($SL$), or the combination of topic, exercise ($E$) and section level, or the combination of topic and quiz ($Q$).",
"indicates that no question number ($QNR$) was provided by a student (or could not be successfully extracted).",
"Finally, we assumed the list of possible next actions for the system:",
"Following the transition rules, we mapped each valid dialogue state to the possible next action $a_{m}$ in $A$:",
"in the case where we do not have any topic provided, the next action is to ask for the topic ($T$).",
"if no examination mode is provided by a user (or it could not be successfully extracted from the user query), the next action is defined as ask for examination mode ($EM$).",
"in case where we know the examination mode $EM$ $\\in $ [Training, Exercise], we have to ask about the level (i.e., training at chapter level or training at section level), thus the next action is ask for level ($L$).",
"if no sub-topic is provided, but the examination mode $EM$ $\\in $ [Training, Exercise] at section level, the next action is defined as ask for sub-topic ($ST$).",
"if no question number is provided by a user, then the next action is ask for question number ($QNR$)."
],
[
"Below are examples of two final cases (out of six), where ID is considered to be complete:"
],
[
"Below are five sample dialogues with variable flows.",
""
]
],
"section_name": [
"Introduction",
"Introduction ::: Outline and Contributions",
"Target Domain & Task Definition",
"Model",
"Model ::: Dialogue Modules",
"Evaluation",
"Evaluation ::: Automated Evaluation",
"Evaluation ::: Human Evaluation and Error Analysis",
"Structured Dialogue Acquisition",
"Re-implementation of units with BERT",
"Error Analysis:",
"Related Work",
"Conclusions",
"Future Work",
"Future Work ::: Acknowledgments.",
"OMB+ Design",
"Mutually Exclusive Rules",
"Completeness of ID: Example Cases",
"Interaction Showcases"
]
} | {
"answers": [
{
"annotation_id": [
"402f97781494b143549fcbad61943b9d02068b19",
"670e2fbce9b7b0b1437a44e970e6f2660fbf97e9"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"Natural Language Understanding (NLU): We implemented an NLU unit utilizing handcrafted rules, Regular Expressions (RegEx) and Elasticsearch (ES) API. The NLU module contains following functionalities:",
"Dialogue Manager consists of the Dialogue State Tracker (DST), that maintains a representation of the current dialog state, and of the Policy Learner (PL) that defines the next system action. In our model, the system's next action is defined by the state of the previously obtained information stored in the Information Dictionary. For instance, if the system recognizes that the student works on the final examination, it also understands (defined by the logic in the predefined rules) that there is no need to ask for sub-topic because the final examination always corresponds to a chapter level (due to the design of OMB+ platform). If the system identifies that the user has difficulties in solving a quiz, it has to ask for the corresponding topic and sub-topic if not yet provided by a user (because the quiz always refers to a section level). To determine all of the potential dialogue flows, we implemented Mutually Exclusive Rules (MER), which indicate that two events $e_{1}$ and $e_{2}$ are mutually exclusive or disjoint if they cannot both occur at the same time (thus, the intersection of these events is empty: $P(A \\cap B) = 0$). Additionally, we defined transition and mapping rules. The formal explanation of rules can be found in Section SECREF12 of the Appendix. Following the rules, we generated 56 state transitions, which define next system actions. Being on a new dialogue state, the system compares the extracted (i.e., updated) information in the ID with the valid dialogue states (see Section SECREF12 of the Appendix for the explanation of the validness) and picks the mapped action as the next system's action."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We implemented an NLU unit utilizing handcrafted rules, Regular Expressions (RegEx) and Elasticsearch (ES) API.",
"To determine all of the potential dialogue flows, we implemented Mutually Exclusive Rules (MER), which indicate that two events $e_{1}$ and $e_{2}$ are mutually exclusive or disjoint if they cannot both occur at the same time (thus, the intersection of these events is empty: $P(A \\cap B) = 0$). Additionally, we defined transition and mapping rules."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"7ed14a49bacde3a0c5b447f887353ed40e48621f",
"9e49aa7380ed96927e555fc98baa1249e4e6efd2"
],
"answer": [
{
"evidence": [
"Named Entity Recognition: We defined a sequence labeling task to extract custom entities from user input. We assumed seven (7) possible entities (see Table TABREF43) to be recognized by the model: topic, subtopic, examination mode and level, question number, intent, as well as the entity other for remaining words in the utterance. Since the data obtained from the rule-based system already contains information on the entities extracted from each user query (i.e., by means of Elasticsearch), we could use it to train a domain-specific NER unit. However, since the user-input was informal, the same information could be provided in different writing styles. That means that a single entity could have different surface forms (e.g., synonyms, writing styles) (although entities that we extracted from the rule-based system were all converted to a universal standard, e.g., official chapter names). To consider all of the variable entity forms while post-labeling the original dataset, we defined generic entity names (e.g., chapter, question nr.) and mapped variations of entities from the user input (e.g., Chapter = [Elementary Calculus, Chapter $I$, ...]) to them.",
"Next Action Prediction: We defined a classification problem to predict the system's next action according to the given user input. We assumed 13 custom actions (see Table TABREF42) that we considered being our labels. In the conversational dataset, each input was automatically labeled by the rule-based system with the corresponding next action and the dialogue-id. Thus, no additional post-labeling was required. We investigated two settings:"
],
"extractive_spans": [],
"free_form_answer": "It defined a sequence labeling task to extract custom entities from user input and label the next action (out of 13 custom actions defined).",
"highlighted_evidence": [
" We defined a sequence labeling task to extract custom entities from user input. We assumed seven (7) possible entities (see Table TABREF43) to be recognized by the model: topic, subtopic, examination mode and level, question number, intent, as well as the entity other for remaining words in the utterance. ",
" We defined a classification problem to predict the system's next action according to the given user input. We assumed 13 custom actions (see Table TABREF42) that we considered being our labels. In the conversational dataset, each input was automatically labeled by the rule-based system with the corresponding next action and the dialogue-id. Thus, no additional post-labeling was required. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Plain dialogues with unique dialogue indexes;",
"Plain Information Dictionary information (e.g., extracted entities) collected for the whole dialogue;",
"Pairs of questions (i.e., user requests) and responses (i.e., bot responses) with the unique dialogue- and turn-indexes;",
"Triples in the form of (User Request, Next Action, Response). Information on the next system's action could be employed to train a Dialogue Manager unit with (deep-) machine learning algorithms;"
],
"extractive_spans": [
"Plain dialogues with unique dialogue indexes",
"Plain Information Dictionary information (e.g., extracted entities) collected for the whole dialogue",
"Pairs of questions (i.e., user requests) and responses (i.e., bot responses)",
"Triples in the form of (User Request, Next Action, Response)"
],
"free_form_answer": "",
"highlighted_evidence": [
"Plain dialogues with unique dialogue indexes;\n\nPlain Information Dictionary information (e.g., extracted entities) collected for the whole dialogue;\n\nPairs of questions (i.e., user requests) and responses (i.e., bot responses) with the unique dialogue- and turn-indexes;\n\nTriples in the form of (User Request, Next Action, Response). Information on the next system's action could be employed to train a Dialogue Manager unit with (deep-) machine learning algorithms;"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"33deb5f385f54dc0056e57bca8e193eb1c21ebf0",
"bae4d74a70dae5fe1315813e2191f237dfc9b2d0"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The overall feedback from the tutors included reduced repetitive activities as well as reduced waiting times for students until their questions were processed."
],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"In general, student questions can be grouped into three main categories: organizational questions (e.g., course certificate), contextual questions (e.g., content, theorem) and mathematical questions (e.g., exercises, solutions). To assist a student with a mathematical question, a tutor has to know the following regular information: What kind of topic (or sub-topic) a student has a problem with. At which examination mode (i.e., quiz, chapter level training or exercise, section level training or exercise, or final examination) the student is working right now. And finally, the exact question number and exact problem formulation. This means that a tutor has to request the same information every time a new dialogue opens, which is very time consuming and could be successfully solved by means of an IPA dialogue bot."
],
"extractive_spans": [
" What kind of topic (or sub-topic) a student has a problem with",
"At which examination mode (i.e., quiz, chapter level training or exercise, section level training or exercise, or final examination) the student is working right now",
" the exact question number and exact problem formulation"
],
"free_form_answer": "",
"highlighted_evidence": [
"To assist a student with a mathematical question, a tutor has to know the following regular information: What kind of topic (or sub-topic) a student has a problem with. At which examination mode (i.e., quiz, chapter level training or exercise, section level training or exercise, or final examination) the student is working right now. And finally, the exact question number and exact problem formulation. This means that a tutor has to request the same information every time a new dialogue opens, which is very time consuming and could be successfully solved by means of an IPA dialogue bot."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Do they use off-the-shelf NLP systems to build their assitant?",
"How does the IPA label data after interacting with users?",
"What kind of repetitive and time-consuming activities does their assistant handle?"
],
"question_id": [
"ee417fea65f9b1029455797671da0840c8c1abbe",
"ca5a82b54cb707c9b947aa8445aac51ea218b23a",
"da55bd769721b878dd17f07f124a37a0a165db02"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: General statistics for conversational dataset.",
"Table 2: Detailed statistics on possible systems actions. Column “Count” denotes the number of occurrences of each action in the entire dataset.",
"Table 3: Detailed statistics on possible named entities. Column “Count” denotes the number of occurrences of each entity in the entire dataset.",
"Table 4: Utterance-level F1 for the NAP task. Underlined: best performance for evaluation and test sets for default setting (without previous action context). In bold: best performance for evaluation and test sets on extended setting (with previous action context).",
"Table 5: Word-level F1 for the NER task. In bold: best performance for evaluation and test sets.",
"Table 6: Average dialogue accuracy computed for the NAP task for best performing models. In bold: best performance for evaluation and test sets.",
"Table 10: Rule 4 – Admissible examination mode (only for Training and Exercise) and corresponding level configurations.",
"Table 11: Rule 5 – Admissible topic and corresponding subtopic configurations.",
"Table 8: Rule 2 – Admissible examination mode configurations.",
"Figure 1: OMB+ Online Learning Platform, where 1 is the Topic (corresponds to a chapter level), 2 is a Sub-Topic (corre-",
"Table 17: Showcase 4 – Contextual Question. Underlined are the key-words which point on the contextual intent.",
"Table 18: Showcase 5 – Long Flow. Correction of entries. Underlined are the extracted entities.",
"Figure 2: Dialogue Flow. Abbreviations: UNK - unknown; ID - information dictionary; RegEx - regular expressions."
],
"file": [
"5-Table1-1.png",
"5-Table2-1.png",
"6-Table3-1.png",
"7-Table4-1.png",
"7-Table5-1.png",
"7-Table6-1.png",
"9-Table10-1.png",
"9-Table11-1.png",
"9-Table8-1.png",
"10-Figure1-1.png",
"12-Table17-1.png",
"12-Table18-1.png",
"13-Figure2-1.png"
]
} | [
"How does the IPA label data after interacting with users?"
] | [
[
"2001.02284-Structured Dialogue Acquisition-3",
"2001.02284-Structured Dialogue Acquisition-4",
"2001.02284-Re-implementation of units with BERT-12",
"2001.02284-Re-implementation of units with BERT-10",
"2001.02284-Structured Dialogue Acquisition-1",
"2001.02284-Structured Dialogue Acquisition-2"
]
] | [
"It defined a sequence labeling task to extract custom entities from user input and label the next action (out of 13 custom actions defined)."
] | 21 |
2002.01664 | Identification of Indian Languages using Ghost-VLAD pooling | In this work, we propose a new pooling strategy for language identification by considering Indian languages. The idea is to obtain utterance level features for any variable length audio for robust language recognition. We use the GhostVLAD approach to generate an utterance level feature vector for any variable length input audio by aggregating the local frame level features across time. The generated feature vector is shown to have very good language discriminative features and helps in getting state of the art results for language identification task. We conduct our experiments on 635Hrs of audio data for 7 Indian languages. Our method outperforms the previous state of the art x-vector [11] method by an absolute improvement of 1.88% in F1-score and achieves 98.43% F1-score on the held-out test data. We compare our system with various pooling approaches and show that GhostVLAD is the best pooling approach for this task. We also provide visualization of the utterance level embeddings generated using Ghost-VLAD pooling and show that this method creates embeddings which has very good language discriminative features. | {
"paragraphs": [
[
"The idea of language identification is to classify a given audio signal into a particular class using a classification algorithm. Commonly language identification task was done using i-vector systems [1]. A very well known approach for language identification proposed by N. Dahek et al. [1] uses the GMM-UBM model to obtain utterance level features called i-vectors. Recent advances in deep learning [15,16] have helped to improve the language identification task using many different neural network architectures which can be trained efficiently using GPUs for large scale datasets. These neural networks can be configured in various ways to obtain better accuracy for language identification task. Early work on using Deep learning for language Identification was published by Pavel Matejka et al. [2], where they used stacked bottleneck features extracted from deep neural networks for language identification task and showed that the bottleneck features learned by Deep neural networks are better than simple MFCC or PLP features. Later the work by I. Lopez-Moreno et al. [3] from Google showed how to use Deep neural networks to directly map the sequence of MFCC frames into its language class so that we can apply language identification at the frame level. Speech signals will have both spatial and temporal information, but simple DNNs are not able to capture temporal information. Work done by J. Gonzalez-Dominguez et al. [4] by Google developed an LSTM based language identification model which improves the accuracy over the DNN based models. Work done by Alicia et al. [5] used CNNs to improve upon i-vector [1] and other previously developed systems. The work done by Daniel Garcia-Romero et al. [6] has used a combination of Acoustic model trained for speech recognition with Time-delay neural networks where they train the TDNN model by feeding the stacked bottleneck features from acoustic model to predict the language labels at the frame level. Recently X-vectors [7] is proposed for speaker identification task and are shown to outperform all the previous state of the art speaker identification algorithms and are also used for language identification by David Snyder et al. [8].",
"In this paper, we explore multiple pooling strategies for language identification task. Mainly we propose Ghost-VLAD based pooling method for language identification. Inspired by the recent work by W. Xie et al. [9] and Y. Zhong et al. [10], we use Ghost-VLAD to improve the accuracy of language identification task for Indian languages. We explore multiple pooling strategies including NetVLAD pooling [11], Average pooling and Statistics pooling( as proposed in X-vectors [7]) and show that Ghost-VLAD pooling is the best pooling strategy for language identification. Our model obtains the best accuracy of 98.24%, and it outperforms all the other previously proposed pooling methods. We conduct all our experiments on 635hrs of audio data for 7 Indian languages collected from $\\textbf {All India Radio}$ news channel. The paper is organized as follows. In section 2, we explain the proposed pooling method for language identification. In section 3, we explain our dataset. In section 4, we describe the experiments, and in section 5, we describe the results."
],
[
"In any language identification model, we want to obtain utterance level representation which has very good language discriminative features. These representations should be compact and should be easily separable by a linear classifier. The idea of any pooling strategy is to pool the frame-level representations into a single utterance level representation. Previous works by [7] have used simple mean and standard deviation aggregation to pool the frame-level features from the top layer of the neural network to obtain the utterance level features. Recently [9] used VLAD based pooling strategy for speaker identification which is inspired from [10] proposed for face recognition. The NetVLAD [11] and Ghost-VLAD [10] methods are proposed for Place recognition and face recognition, respectively, and in both cases, they try to aggregate the local descriptors into global features. In our case, the local descriptors are features extracted from ResNet [15], and the global utterance level feature is obtained by using GhostVLAD pooling. In this section, we explain different pooling methods, including NetVLAD, Ghost-VLAD, Statistic pooling, and Average pooling."
],
[
"The NetVLAD pooling strategy was initially developed for place recognition by R. Arandjelovic et al. [11]. The NetVLAD is an extension to VLAD [18] approach where they were able to replace the hard assignment based clustering with soft assignment based clustering so that it can be trained with neural network in an end to end fashion. In our case, we use the NetVLAD layer to map N local features of dimension D into a fixed dimensional vector, as shown in Figure 1 (Left side).",
"The model takes spectrogram as an input and feeds into CNN based ResNet architecture. The ResNet is used to map the spectrogram into 3D feature map of dimension HxWxD. We convert this 3D feature map into 2D by unfolding H and W dimensions, creating a NxD dimensional feature map, where N=HxW. The NetVLAD layer is kept on top of the feature extraction layer of ResNet, as shown in Figure 1. The NetVLAD now takes N features vectors of dimension D and computes a matrix V of dimension KxD, where K is the number clusters in the NetVLAD layer, and D is the dimension of the feature vector. The matrix V is computed as follows.",
"Where $w_k$,$b_k$ and $c_k$ are trainable parameters for the cluster $k$ and V(j,k) represents a point in the V matrix for (j,k)th location. The matrix is constructed using the equation (1) where the first term corresponds to the soft assignment of the input $x_i$ to the cluster $c_k$, whereas the second term corresponds to the residual term which tells how far the input descriptor $x_i$ is from the cluster center $c_k$."
],
[
"GhostVLAD is an extension of the NetVLAD approach, which we discussed in the previous section. The GhostVLAD model was proposed for face recognition by Y. Zhong [10]. GhostVLAD works exactly similar to NetVLAD except it adds Ghost clusters along with the NetVLAD clusters. So, now we will have a K+G number of clusters instead of K clusters. Where G is the number of ghost clusters, we want to add (typically 2-4). The Ghost clusters are added to map any noisy or irrelevant content into ghost clusters and are not included during the feature aggregation stage, as shown in Figure 1 (Right side). Which means that we compute the matrix V for both normal cluster K and ghost clusters G, but we will not include the vectors belongs to ghost cluster from V during concatenation of the features. Due to which, during feature aggregation stage the contribution of the noisy and unwanted features to normal VLAD clusters are assigned less weights while Ghost clusters absorb most of the weight. We illustrate this in Figure 1(Right Side), where the ghost clusters are shown in red color. We use Ghost clusters when we are computing the V matrix, but they are excluded during the concatenation stage. These concatenated features are fed into the projection layer, followed by softmax to predict the language label."
],
[
"In statistic pooling, we compute the first and second order statistics of the local features from the top layer of the ResNet model. The 3-D feature map is unfolded to create N features of D dimensions, and then we compute the mean and standard deviation of all these N vectors and get two D dimensional vectors, one for mean and the other for standard deviation. We then concatenate these 2 features and feed it to the projection layer for predicting the language label.",
"In the Average pooling layer, we compute only the first-order statistics (mean) of the local features from the top layer of the CNN model. The feature map from the top layer of CNN is unfolded to create N features of D dimensions, and then we compute the mean of all these N vectors and get D dimensional representation. We then feed this feature to the projection layer followed by softmax for predicting the language label."
],
[
"In this section, we describe our dataset collection process. We collected and curated around 635Hrs of audio data for 7 Indian languages, namely Kannada, Hindi, Telugu, Malayalam, Bengali, and English. We collected the data from the All India Radio news channel where an actor will be reading news for about 5-10 mins. To cover many speakers for the dataset, we crawled data from 2010 to 2019. Since the audio is very long to train any deep neural network directly, we segment the audio clips into smaller chunks using Voice activity detector. Since the audio clips will have music embedded during the news, we use Inhouse music detection model to remove the music segments from the dataset to make the dataset clean and our dataset contains 635Hrs of clean audio which is divided into 520Hrs of training data containing 165K utterances and 115Hrs of testing data containing 35K utterances. The amount of audio data for training and testing for each of the language is shown in the table bellow."
],
[
"In this section, we describe the feature extraction process and network architecture in detail. We use spectral features of 256 dimensions computed using 512 point FFT for every frame, and we add an energy feature for every frame giving us total 257 features for every frame. We use a window size of 25ms and frame shift of 10ms during feature computation. We crop random 5sec audio data from each utterance during training which results in a spectrogram of size 257x500 (features x number of features). We use these spectrograms as input to our CNN model during training. During testing, we compute the prediction score irrespective of the audio length.",
"For the network architecture, we use ResNet-34 architecture, as described in [9]. The model uses convolution layers with Relu activations to map the spectrogram of size 257x500 input into 3D feature map of size 1x32x512. This feature cube is converted into 2D feature map of dimension 32x512 and fed into Ghost-VLAD/NetVLAD layer to generate a representation that has more language discrimination capacity. We use Adam optimizer with an initial learning rate of 0.01 and a final learning rate of 0.00001 for training. Each model is trained for 15 epochs with early stopping criteria.",
"For the baseline, we train an i-vector model using GMM-UBM. We fit a small classifier on top of the generated i-vectors to measure the accuracy. This model is referred as i-vector+svm . To compare our model with the previous state of the art system, we set up the x-vector language identification system [8]. The x-vector model used time-delay neural networks (TDNN) along with statistic-pooling. We use 7 layer TDNN architecture similar to [8] for training. We refer to this model as tdnn+stat-pool . Finally, we set up a Deep LSTM based language identification system similar to [4] but with little modification where we add statistics pooling for the last layers hidden activities before classification. We use 3 layer Bi-LSTM with 256 hidden units at each layer. We refer to this model as LSTM+stat-pool. We train our i-vector+svm and TDNN+stat-pool using Kaldi toolkit. We train our NetVLAD and GhostVLAD experiments using Keras by modifying the code given by [9] for language identification. We train the LSTM+stat-pool and the remaining experiments using Pytorch [14] toolkit, and we will opensource all the codes and data soon."
],
[
"In this section, we compare the performance of our system with the recent state of the art language identification approaches. We also compare different pooling strategies and finally, compare the robustness of our system to the length of the input spectrogram during training. We visualize the embeddings generated by the GhostVLAD method and conclude that the GhostVLAD embeddings shows very good feature discrimination capabilities."
],
[
"We compare our system performance with the previous state of the art language identification approaches, as shown in Table 2. The i-vector+svm system is trained using GMM-UBM models to generate i-vectors as proposed in [1]. Once the i-vectors are extracted, we fit SVM classifier to classify the audio. The TDNN+stat-pool system is trained with a statistics pooling layer and is called the x-vector system as proposed by David Snyder et al. [11] and is currently the state of the art language identification approach as far as our knowledge. Our methods outperform the state of the art x-vector system by absolute 1.88% improvement in F1-score, as shown in Table 2."
],
[
"We compare our approach with different pooling strategies in Table 3. We use ResNet as our base feature extraction network. We keep the base network the same and change only the pooling layers to see which pooling approach performs better for language identification task. Our experiments show that GhostVLAD pooling outperforms all the other pooling methods by achieving 98.43% F1-Score."
],
[
"To observe the performance of our method with different input durations, we conducted an experiment where we train our model on different input durations. Since our model uses ResNet as the base feature extractor, we need to feed fixed-length spectrogram. We conducted 4 different experiments where we trained the model using 2sec, 3sec, 4sec and 5sec spectrograms containing 200,300,400 and 500 frames respectively. We observed that the model trained with a 5sec spectrogram is the best model, as shown in Table 4."
],
[
"We visualize the embeddings generated by our approach to see the effectiveness. We extracted 512-dimensional embeddings for our testing data and reduced the dimensionality using t-sne projection. The t-sne plot of the embeddings space is shown in Figure 3. The plot shows that the embeddings learned by our approach has very good discriminative properties"
],
[
"In this work, we use Ghost-VLAD pooling approach that was originally proposed for face recognition to improve language identification performance for Indian languages. We collected and curated 630 hrs audio data from news All India Radio news channel for 7 Indian languages. Our experimental results shows that our approach outperforms the previous state of the art methods by an absolute 1.88% F1-score. We have also conducted experiments with different pooling strategies proposed in the past, and the GhostVLAD pooling approach turns out to be the best approach for aggregating frame-level features into a single utterance level feature. Our experiments also prove that our approach works much better even if the input during training contains smaller durations. Finally, we see that the embeddings generated by our method has very good language discriminative features and helps to improve the performance of language identification."
]
],
"section_name": [
"INTRODUCTION",
"POOLING STRATEGIES",
"POOLING STRATEGIES ::: NetVLAD pooling",
"POOLING STRATEGIES ::: GhostVLAD pooling",
"POOLING STRATEGIES ::: Statistic and average pooling",
"DATASET",
"EXPERIMENTS",
"RESULTS",
"RESULTS ::: Comparison with different approaches",
"RESULTS ::: Comparison with different pooling techniques",
"RESULTS ::: Duration analysis",
"RESULTS ::: Visualization of embeddings",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"6bd56f3bb625bf8ff1ff215f9122f6e9c77b698c",
"875ea083ee331cf5dc434a77673d1bf758e9bd34"
],
"answer": [
{
"evidence": [
"In this paper, we explore multiple pooling strategies for language identification task. Mainly we propose Ghost-VLAD based pooling method for language identification. Inspired by the recent work by W. Xie et al. [9] and Y. Zhong et al. [10], we use Ghost-VLAD to improve the accuracy of language identification task for Indian languages. We explore multiple pooling strategies including NetVLAD pooling [11], Average pooling and Statistics pooling( as proposed in X-vectors [7]) and show that Ghost-VLAD pooling is the best pooling strategy for language identification. Our model obtains the best accuracy of 98.24%, and it outperforms all the other previously proposed pooling methods. We conduct all our experiments on 635hrs of audio data for 7 Indian languages collected from $\\textbf {All India Radio}$ news channel. The paper is organized as follows. In section 2, we explain the proposed pooling method for language identification. In section 3, we explain our dataset. In section 4, we describe the experiments, and in section 5, we describe the results.",
"In this section, we describe our dataset collection process. We collected and curated around 635Hrs of audio data for 7 Indian languages, namely Kannada, Hindi, Telugu, Malayalam, Bengali, and English. We collected the data from the All India Radio news channel where an actor will be reading news for about 5-10 mins. To cover many speakers for the dataset, we crawled data from 2010 to 2019. Since the audio is very long to train any deep neural network directly, we segment the audio clips into smaller chunks using Voice activity detector. Since the audio clips will have music embedded during the news, we use Inhouse music detection model to remove the music segments from the dataset to make the dataset clean and our dataset contains 635Hrs of clean audio which is divided into 520Hrs of training data containing 165K utterances and 115Hrs of testing data containing 35K utterances. The amount of audio data for training and testing for each of the language is shown in the table bellow."
],
"extractive_spans": [],
"free_form_answer": "Through the All India Radio new channel where actors read news.",
"highlighted_evidence": [
"We conduct all our experiments on 635hrs of audio data for 7 Indian languages collected from $\\textbf {All India Radio}$ news channel. ",
"We collected and curated around 635Hrs of audio data for 7 Indian languages, namely Kannada, Hindi, Telugu, Malayalam, Bengali, and English. We collected the data from the All India Radio news channel where an actor will be reading news for about 5-10 mins. To cover many speakers for the dataset, we crawled data from 2010 to 2019. Since the audio is very long to train any deep neural network directly, we segment the audio clips into smaller chunks using Voice activity detector. Since the audio clips will have music embedded during the news, we use Inhouse music detection model to remove the music segments from the dataset to make the dataset clean and our dataset contains 635Hrs of clean audio which is divided into 520Hrs of training data containing 165K utterances and 115Hrs of testing data containing 35K utterances. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this paper, we explore multiple pooling strategies for language identification task. Mainly we propose Ghost-VLAD based pooling method for language identification. Inspired by the recent work by W. Xie et al. [9] and Y. Zhong et al. [10], we use Ghost-VLAD to improve the accuracy of language identification task for Indian languages. We explore multiple pooling strategies including NetVLAD pooling [11], Average pooling and Statistics pooling( as proposed in X-vectors [7]) and show that Ghost-VLAD pooling is the best pooling strategy for language identification. Our model obtains the best accuracy of 98.24%, and it outperforms all the other previously proposed pooling methods. We conduct all our experiments on 635hrs of audio data for 7 Indian languages collected from $\\textbf {All India Radio}$ news channel. The paper is organized as follows. In section 2, we explain the proposed pooling method for language identification. In section 3, we explain our dataset. In section 4, we describe the experiments, and in section 5, we describe the results."
],
"extractive_spans": [
" $\\textbf {All India Radio}$ news channel"
],
"free_form_answer": "",
"highlighted_evidence": [
"We conduct all our experiments on 635hrs of audio data for 7 Indian languages collected from $\\textbf {All India Radio}$ news channel."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"77a0b87c981320ba5cf26dce6af586fd5e055540",
"95c284fd1d05a30c5ae3ef16191fc7ce86dff89a"
],
"answer": [
{
"evidence": [
"GhostVLAD is an extension of the NetVLAD approach, which we discussed in the previous section. The GhostVLAD model was proposed for face recognition by Y. Zhong [10]. GhostVLAD works exactly similar to NetVLAD except it adds Ghost clusters along with the NetVLAD clusters. So, now we will have a K+G number of clusters instead of K clusters. Where G is the number of ghost clusters, we want to add (typically 2-4). The Ghost clusters are added to map any noisy or irrelevant content into ghost clusters and are not included during the feature aggregation stage, as shown in Figure 1 (Right side). Which means that we compute the matrix V for both normal cluster K and ghost clusters G, but we will not include the vectors belongs to ghost cluster from V during concatenation of the features. Due to which, during feature aggregation stage the contribution of the noisy and unwanted features to normal VLAD clusters are assigned less weights while Ghost clusters absorb most of the weight. We illustrate this in Figure 1(Right Side), where the ghost clusters are shown in red color. We use Ghost clusters when we are computing the V matrix, but they are excluded during the concatenation stage. These concatenated features are fed into the projection layer, followed by softmax to predict the language label."
],
"extractive_spans": [
"extension of the NetVLAD",
"adds Ghost clusters along with the NetVLAD clusters"
],
"free_form_answer": "",
"highlighted_evidence": [
"GhostVLAD is an extension of the NetVLAD approach, which we discussed in the previous section.",
"GhostVLAD works exactly similar to NetVLAD except it adds Ghost clusters along with the NetVLAD clusters. So, now we will have a K+G number of clusters instead of K clusters.",
"The Ghost clusters are added to map any noisy or irrelevant content into ghost clusters and are not included during the feature aggregation stage, as shown in Figure 1 (Right side)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"GhostVLAD is an extension of the NetVLAD approach, which we discussed in the previous section. The GhostVLAD model was proposed for face recognition by Y. Zhong [10]. GhostVLAD works exactly similar to NetVLAD except it adds Ghost clusters along with the NetVLAD clusters. So, now we will have a K+G number of clusters instead of K clusters. Where G is the number of ghost clusters, we want to add (typically 2-4). The Ghost clusters are added to map any noisy or irrelevant content into ghost clusters and are not included during the feature aggregation stage, as shown in Figure 1 (Right side). Which means that we compute the matrix V for both normal cluster K and ghost clusters G, but we will not include the vectors belongs to ghost cluster from V during concatenation of the features. Due to which, during feature aggregation stage the contribution of the noisy and unwanted features to normal VLAD clusters are assigned less weights while Ghost clusters absorb most of the weight. We illustrate this in Figure 1(Right Side), where the ghost clusters are shown in red color. We use Ghost clusters when we are computing the V matrix, but they are excluded during the concatenation stage. These concatenated features are fed into the projection layer, followed by softmax to predict the language label.",
"The NetVLAD pooling strategy was initially developed for place recognition by R. Arandjelovic et al. [11]. The NetVLAD is an extension to VLAD [18] approach where they were able to replace the hard assignment based clustering with soft assignment based clustering so that it can be trained with neural network in an end to end fashion. In our case, we use the NetVLAD layer to map N local features of dimension D into a fixed dimensional vector, as shown in Figure 1 (Left side)."
],
"extractive_spans": [],
"free_form_answer": "An extension of NetVLAD which replaces hard assignment-based clustering with soft assignment-based clustering with the additon o fusing Ghost clusters to deal with noisy content.",
"highlighted_evidence": [
"GhostVLAD is an extension of the NetVLAD approach, which we discussed in the previous section. The GhostVLAD model was proposed for face recognition by Y. Zhong [10]. GhostVLAD works exactly similar to NetVLAD except it adds Ghost clusters along with the NetVLAD clusters. So, now we will have a K+G number of clusters instead of K clusters. Where G is the number of ghost clusters, we want to add (typically 2-4). The Ghost clusters are added to map any noisy or irrelevant content into ghost clusters and are not included during the feature aggregation stage, as shown in Figure 1 (Right side). ",
"The NetVLAD pooling strategy was initially developed for place recognition by R. Arandjelovic et al. [11]. The NetVLAD is an extension to VLAD [18] approach where they were able to replace the hard assignment based clustering with soft assignment based clustering so that it can be trained with neural network in an end to end fashion. In our case, we use the NetVLAD layer to map N local features of dimension D into a fixed dimensional vector, as shown in Figure 1 (Left side)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"340bfdd9d5d2901d2927018b70ab13cc3f3ca02f",
"de48e546c012c93ec2bdd7026463fb46a1596bdf"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Dataset"
],
"extractive_spans": [],
"free_form_answer": "Hindi, English, Kannada, Telugu, Assamese, Bengali and Malayalam",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Dataset"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this section, we describe our dataset collection process. We collected and curated around 635Hrs of audio data for 7 Indian languages, namely Kannada, Hindi, Telugu, Malayalam, Bengali, and English. We collected the data from the All India Radio news channel where an actor will be reading news for about 5-10 mins. To cover many speakers for the dataset, we crawled data from 2010 to 2019. Since the audio is very long to train any deep neural network directly, we segment the audio clips into smaller chunks using Voice activity detector. Since the audio clips will have music embedded during the news, we use Inhouse music detection model to remove the music segments from the dataset to make the dataset clean and our dataset contains 635Hrs of clean audio which is divided into 520Hrs of training data containing 165K utterances and 115Hrs of testing data containing 35K utterances. The amount of audio data for training and testing for each of the language is shown in the table bellow.",
"FLOAT SELECTED: Table 1: Dataset"
],
"extractive_spans": [],
"free_form_answer": "Kannada, Hindi, Telugu, Malayalam, Bengali, English and Assamese (in table, missing in text)",
"highlighted_evidence": [
"We collected and curated around 635Hrs of audio data for 7 Indian languages, namely Kannada, Hindi, Telugu, Malayalam, Bengali, and English.",
"The amount of audio data for training and testing for each of the language is shown in the table bellow.",
"FLOAT SELECTED: Table 1: Dataset"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"",
"",
""
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"How was the audio data gathered?",
"What is the GhostVLAD approach?",
"Which 7 Indian languages do they experiment with?"
],
"question_id": [
"feb448860918ef5b905bb25d7b855ba389117c1f",
"4bc2784be43d599000cb71d31928908250d4cef3",
"75df70ce7aa714ec4c6456d0c51f82a16227f2cb"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Fig. 1: NetVLAD(Left side) and GhostVLAD(Right side)",
"Table 1: Dataset",
"Table 4: F1-scores for different input sizes of the spectrogram",
"Table 2: Comparison Previous methods",
"Fig. 2: t-sne plot of embeddings",
"Table 3: Comparison with different Pooling methods"
],
"file": [
"2-Figure1-1.png",
"3-Table1-1.png",
"4-Table4-1.png",
"4-Table2-1.png",
"4-Figure2-1.png",
"4-Table3-1.png"
]
} | [
"How was the audio data gathered?",
"What is the GhostVLAD approach?",
"Which 7 Indian languages do they experiment with?"
] | [
[
"2002.01664-INTRODUCTION-1",
"2002.01664-DATASET-0"
],
[
"2002.01664-POOLING STRATEGIES ::: GhostVLAD pooling-0",
"2002.01664-POOLING STRATEGIES ::: NetVLAD pooling-0"
],
[
"2002.01664-DATASET-0",
"2002.01664-3-Table1-1.png"
]
] | [
"Through the All India Radio new channel where actors read news.",
"An extension of NetVLAD which replaces hard assignment-based clustering with soft assignment-based clustering with the additon o fusing Ghost clusters to deal with noisy content.",
"Kannada, Hindi, Telugu, Malayalam, Bengali, English and Assamese (in table, missing in text)"
] | 22 |
1808.09111 | Unsupervised Learning of Syntactic Structure with Invertible Neural Projections | Unsupervised learning of syntactic structure is typically performed using generative models with discrete latent variables and multinomial parameters. In most cases, these models have not leveraged continuous word representations. In this work, we propose a novel generative model that jointly learns discrete syntactic structure and continuous word representations in an unsupervised fashion by cascading an invertible neural network with a structured generative prior. We show that the invertibility condition allows for efficient exact inference and marginal likelihood computation in our model so long as the prior is well-behaved. In experiments we instantiate our approach with both Markov and tree-structured priors, evaluating on two tasks: part-of-speech (POS) induction, and unsupervised dependency parsing without gold POS annotation. On the Penn Treebank, our Markov-structured model surpasses state-of-the-art results on POS induction. Similarly, we find that our tree-structured model achieves state-of-the-art performance on unsupervised dependency parsing for the difficult training condition where neither gold POS annotation nor punctuation-based constraints are available. | {
"paragraphs": [
[
"Data annotation is a major bottleneck for the application of supervised learning approaches to many problems. As a result, unsupervised methods that learn directly from unlabeled data are increasingly important. For tasks related to unsupervised syntactic analysis, discrete generative models have dominated in recent years – for example, for both part-of-speech (POS) induction BIBREF0 , BIBREF1 and unsupervised dependency parsing BIBREF2 , BIBREF3 , BIBREF4 . While similar models have had success on a range of unsupervised tasks, they have mostly ignored the apparent utility of continuous word representations evident from supervised NLP applications BIBREF5 , BIBREF6 . In this work, we focus on leveraging and explicitly representing continuous word embeddings within unsupervised models of syntactic structure.",
"Pre-trained word embeddings from massive unlabeled corpora offer a compact way of injecting a prior notion of word similarity into models that would otherwise treat words as discrete, isolated categories. However, the specific properties of language captured by any particular embedding scheme can be difficult to control, and, further, may not be ideally suited to the task at hand. For example, pre-trained skip-gram embeddings BIBREF7 with small context window size are found to capture the syntactic properties of language well BIBREF8 , BIBREF9 . However, if our goal is to separate syntactic categories, this embedding space is not ideal – POS categories correspond to overlapping interspersed regions in the embedding space, evident in Figure SECREF4 .",
"In our approach, we propose to learn a new latent embedding space as a projection of pre-trained embeddings (depicted in Figure SECREF5 ), while jointly learning latent syntactic structure – for example, POS categories or syntactic dependencies. To this end, we introduce a new generative model (shown in Figure FIGREF6 ) that first generates a latent syntactic representation (e.g. a dependency parse) from a discrete structured prior (which we also call the “syntax model”), then, conditioned on this representation, generates a sequence of latent embedding random variables corresponding to each word, and finally produces the observed (pre-trained) word embeddings by projecting these latent vectors through a parameterized non-linear function. The latent embeddings can be jointly learned with the structured syntax model in a completely unsupervised fashion.",
"By choosing an invertible neural network as our non-linear projector, and then parameterizing our model in terms of the projection's inverse, we are able to derive tractable exact inference and marginal likelihood computation procedures so long as inference is tractable in the underlying syntax model. In sec:learn-with-inv we show that this derivation corresponds to an alternate view of our approach whereby we jointly learn a mapping of observed word embeddings to a new embedding space that is more suitable for the syntax model, but include an additional Jacobian regularization term to prevent information loss.",
"Recent work has sought to take advantage of word embeddings in unsupervised generative models with alternate approaches BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 . BIBREF9 build an HMM with Gaussian emissions on observed word embeddings, but they do not attempt to learn new embeddings. BIBREF10 , BIBREF11 , and BIBREF12 extend HMM or dependency model with valence (DMV) BIBREF2 with multinomials that use word (or tag) embeddings in their parameterization. However, they do not represent the embeddings as latent variables.",
"In experiments, we instantiate our approach using both a Markov-structured syntax model and a tree-structured syntax model – specifically, the DMV. We evaluate on two tasks: part-of-speech (POS) induction and unsupervised dependency parsing without gold POS tags. Experimental results on the Penn Treebank BIBREF13 demonstrate that our approach improves the basic HMM and DMV by a large margin, leading to the state-of-the-art results on POS induction, and state-of-the-art results on unsupervised dependency parsing in the difficult training scenario where neither gold POS annotation nor punctuation-based constraints are available.",
""
],
[
" As an illustrative example, we first present a baseline model for Markov syntactic structure (POS induction) that treats a sequence of pre-trained word embeddings as observations. Then, we propose our novel approach, again using Markov structure, that introduces latent word embedding variables and a neural projector. Lastly, we extend our approach to more general syntactic structures."
],
[
"We start by describing the Gaussian hidden Markov model introduced by BIBREF9 , which is a locally normalized model with multinomial transitions and Gaussian emissions. Given a sentence of length INLINEFORM0 , we denote the latent POS tags as INLINEFORM1 , observed (pre-trained) word embeddings as INLINEFORM2 , transition parameters as INLINEFORM3 , and Gaussian emission parameters as INLINEFORM4 . The joint distribution of data and latent variables factors as:",
" DISPLAYFORM0 ",
"where INLINEFORM0 is the multinomial transition probability and INLINEFORM1 is the multivariate Gaussian emission probability.",
"While the observed word embeddings do inform this model with a notion of word similarity – lacking in the basic multinomial HMM – the Gaussian emissions may not be sufficiently flexible to separate some syntactic categories in the complex pre-trained embedding space – for example the skip-gram embedding space as visualized in Figure SECREF4 where different POS categories overlap. Next we introduce a new approach that adds flexibility to the emission distribution by incorporating new latent embedding variables."
],
[
"To flexibly model observed embeddings and yield a new representation space that is more suitable for the syntax model, we propose to cascade a neural network as a projection function, deterministically transforming the simple space defined by the Gaussian HMM to the observed embedding space. We denote the latent embedding of the INLINEFORM0 word in a sentence as INLINEFORM1 , and the neural projection function as INLINEFORM2 , parameterized by INLINEFORM3 . In the case of sequential Markov structure, our new model corresponds to the following generative process:",
"",
"For each time step INLINEFORM0 ,",
"",
"[noitemsep, leftmargin=*]",
"Draw the latent state INLINEFORM0 ",
"Draw the latent embedding INLINEFORM0 ",
"Deterministically produce embedding",
" INLINEFORM0 ",
"",
"The graphical model is depicted in Figure FIGREF6 . The deterministic projection can also be viewed as sampling each observation from a point mass at INLINEFORM0 . The joint distribution of our model is: DISPLAYFORM0 ",
"where INLINEFORM0 is a conditional Gaussian distribution, and INLINEFORM1 is the Dirac delta function centered at INLINEFORM2 : DISPLAYFORM0 "
],
[
"Our approach can be applied to a broad family of structured syntax models. We denote latent embedding variables as INLINEFORM0 , discrete latent variables in the syntax model as INLINEFORM1 ( INLINEFORM2 ), where INLINEFORM3 are conditioned to generate INLINEFORM4 . The joint probability of our model factors as:",
" DISPLAYFORM0 ",
"where INLINEFORM0 represents the probability of the syntax model, and can encode any syntactic structure – though, its factorization structure will determine whether inference is tractable in our full model. As shown in Figure FIGREF6 , we focus on two syntax models for syntactic analysis in this paper. The first is Markov-structured, which we use for POS induction, and the second is DMV-structured, which we use to learn dependency parses without supervision.",
"The marginal data likelihood of our model is: DISPLAYFORM0 ",
"While the discrete variables INLINEFORM0 can be marginalized out with dynamic program in many cases, it is generally intractable to marginalize out the latent continuous variables, INLINEFORM1 , for an arbitrary projection INLINEFORM2 in Eq. ( EQREF17 ), which means inference and learning may be difficult. In sec:opt, we address this issue by constraining INLINEFORM3 to be invertible, and show that this constraint enables tractable exact inference and marginal likelihood computation."
],
[
"In this section, we introduce an invertibility condition for our neural projector to tackle the optimization challenge. Specifically, we constrain our neural projector with two requirements: (1) INLINEFORM0 and (2) INLINEFORM1 exists. Invertible transformations have been explored before in independent components analysis BIBREF14 , gaussianization BIBREF15 , and deep density models BIBREF16 , BIBREF17 , BIBREF18 , for unstructured data. Here, we generalize this style of approach to structured learning, and augment it with discrete latent variables ( INLINEFORM2 ). Under the invertibility condition, we derive a learning algorithm and give another view of our approach revealed by the objective function. Then, we present the architecture of a neural projector we use in experiments: a volume-preserving invertible neural network proposed by BIBREF16 for independent components estimation."
],
[
"For ease of exposition, we explain the learning algorithm in terms of Markov structure without loss of generality. As shown in Eq. ( EQREF17 ), the optimization challenge in our approach comes from the intractability of the marginalized emission factor INLINEFORM0 . If we can marginalize out INLINEFORM1 and compute INLINEFORM2 , then the posterior and marginal likelihood of our Markov-structured model can be computed with the forward-backward algorithm. We can apply Eq. ( EQREF14 ) and obtain : INLINEFORM3 ",
"By using the change of variable rule to the integration, which allows the integration variable INLINEFORM0 to be replaced by INLINEFORM1 , the marginal emission factor can be computed in closed-form when the invertibility condition is satisfied: DISPLAYFORM0 ",
"where INLINEFORM0 is a conditional Gaussian distribution, INLINEFORM1 is the Jacobian matrix of function INLINEFORM2 at INLINEFORM3 , and INLINEFORM4 represents the absolute value of its determinant. This Jacobian term is nonzero and differentiable if and only if INLINEFORM5 exists.",
"Eq. ( EQREF19 ) shows that we can directly calculate the marginal emission distribution INLINEFORM0 . Denote the marginal data likelihood of Gaussian HMM as INLINEFORM1 , then the log marginal data likelihood of our model can be directly written as: DISPLAYFORM0 ",
"where INLINEFORM0 represents the new sequence of embeddings after applying INLINEFORM1 to each INLINEFORM2 . Eq. ( EQREF20 ) shows that the training objective of our model is simply the Gaussian HMM log likelihood with an additional Jacobian regularization term. From this view, our approach can be seen as equivalent to reversely projecting the data through INLINEFORM3 to another manifold INLINEFORM4 that is directly modeled by the Gaussian HMM, with a regularization term. Intuitively, we optimize the reverse projection INLINEFORM5 to modify the INLINEFORM6 space, making it more appropriate for the syntax model. The Jacobian regularization term accounts for the volume expansion or contraction behavior of the projection. Maximizing it can be thought of as preventing information loss. In the extreme case, the Jacobian determinant is equal to zero, which means the projection is non-invertible and thus information is being lost through the projection. Such “information preserving” regularization is crucial during optimization, otherwise the trivial solution of always projecting data to the same single point to maximize likelihood is viable.",
"More generally, for an arbitrary syntax model the data likelihood of our approach is: DISPLAYFORM0 ",
"If the syntax model itself allows for tractable inference and marginal likelihood computation, the same dynamic program can be used to marginalize out INLINEFORM0 . Therefore, our joint model inherits the tractability of the underlying syntax model."
],
[
"For the projection we can use an arbitrary invertible function, and given the representational power of neural networks they seem a natural choice. However, calculating the inverse and Jacobian of an arbitrary neural network can be difficult, as it requires that all component functions be invertible and also requires storage of large Jacobian matrices, which is memory intensive. To address this issue, several recent papers propose specially designed invertible networks that are easily trainable yet still powerful BIBREF16 , BIBREF17 , BIBREF19 . Inspired by these works, we use the invertible transformation proposed by BIBREF16 , which consists of a series of “coupling layers”. This architecture is specially designed to guarantee a unit Jacobian determinant (and thus the invertibility property).",
"From Eq. ( EQREF22 ) we know that only INLINEFORM0 is required for accomplishing learning and inference; we never need to explicitly construct INLINEFORM1 . Thus, we directly define the architecture of INLINEFORM2 . As shown in Figure FIGREF24 , the nonlinear transformation from the observed embedding INLINEFORM3 to INLINEFORM4 represents the first coupling layer. The input in this layer is partitioned into left and right halves of dimensions, INLINEFORM5 and INLINEFORM6 , respectively. A single coupling layer is defined as: DISPLAYFORM0 ",
"where INLINEFORM0 is the coupling function and can be any nonlinear form. This transformation satisfies INLINEFORM1 , and BIBREF16 show that its Jacobian matrix is triangular with all ones on the main diagonal. Thus the Jacobian determinant is always equal to one (i.e. volume-preserving) and the invertibility condition is naturally satisfied.",
"To be sufficiently expressive, we compose multiple coupling layers as suggested in BIBREF16 . Specifically, we exchange the role of left and right half vectors at each layer as shown in Figure FIGREF24 . For instance, from INLINEFORM0 to INLINEFORM1 the left subset INLINEFORM2 is unchanged, while from INLINEFORM3 to INLINEFORM4 the right subset INLINEFORM5 remains the same. Also note that composing multiple coupling layers does not change the volume-preserving and invertibility properties. Such a sequence of invertible transformations from the data space INLINEFORM6 to INLINEFORM7 is also called normalizing flow BIBREF20 ."
],
[
"In this section, we first describe our datasets and experimental setup. We then instantiate our approach with Markov and DMV-structured syntax models, and report results on POS tagging and dependency grammar induction respectively. Lastly, we analyze the learned latent embeddings."
],
[
"For both POS tagging and dependency parsing, we run experiments on the Wall Street Journal (WSJ) portion of the Penn Treebank. To create the observed data embeddings, we train skip-gram word embeddings BIBREF7 that are found to capture syntactic properties well when trained with small context window BIBREF8 , BIBREF9 . Following BIBREF9 , the dimensionality INLINEFORM0 is set to 100, and the training context window size is set to 1 to encode more syntactic information. The skip-gram embeddings are trained on the one billion word language modeling benchmark dataset BIBREF21 in addition to the WSJ corpus."
],
[
"For the neural projector, we employ rectified networks as coupling function INLINEFORM0 following BIBREF16 . We use a rectified network with an input layer, one hidden layer, and linear output units, the number of hidden units is set to the same as the number of input units. The number of coupling layers are varied as 4, 8, 16 for both tasks. We optimize marginal data likelihood directly using Adam BIBREF22 . For both tasks in the fully unsupervised setting, we do not tune the hyper-parameters using supervised data."
],
[
"For unsupervised POS tagging, we use a Markov-structured syntax model in our approach, which is a popular structure for unsupervised tagging tasks BIBREF9 , BIBREF10 .",
"Following existing literature, we train and test on the entire WSJ corpus (49208 sentences, 1M tokens). We use 45 tag clusters, the number of POS tags that appear in WSJ corpus. We train the discrete HMM and the Gaussian HMM BIBREF9 as baselines. For the Gaussian HMM, mean vectors of Gaussian emissions are initialized with the empirical mean of all word vectors with an additive noise. We assume diagonal covariance matrix for INLINEFORM0 and initialize it with the empirical variance of the word vectors. Following BIBREF9 , the covariance matrix is fixed during training. The multinomial probabilities are initialized as INLINEFORM1 , where INLINEFORM2 . For our approach, we initialize the syntax model and Gaussian parameters with the pre-trained Gaussian HMM. The weights of layers in the rectified network are initialized from a uniform distribution with mean zero and a standard deviation of INLINEFORM3 , where INLINEFORM4 is the input dimension. We evaluate the performance of POS tagging with both Many-to-One (M-1) accuracy BIBREF23 and V-Measure (VM) BIBREF24 . Given a model we found that the tagging performance is well-correlated with the training data likelihood, thus we use training data likelihood as a unsupervised criterion to select the trained model over 10 random restarts after training 50 epochs. We repeat this process 5 times and report the mean and standard deviation of performance.",
"We compare our approach with basic HMM, Gaussian HMM, and several state-of-the-art systems, including sophisticated HMM variants and clustering techniques with hand-engineered features. The results are presented in Table TABREF32 . Through the introduced latent embeddings and additional neural projection, our approach improves over the Gaussian HMM by 5.4 points in M-1 and 5.6 points in VM. Neural HMM (NHMM) BIBREF10 is a baseline that also learns word representation jointly. Both their basic model and extended Conv version does not outperform the Gaussian HMM. Their best model incorporates another LSTM to model long distance dependency and breaks the Markov assumption, yet our approach still achieves substantial improvement over it without considering more context information. Moreover, our method outperforms the best published result that benefits from hand-engineered features BIBREF27 by 2.0 points on VM.",
"We found that most tagging errors happen in noun subcategories. Therefore, we do the one-to-one mapping between gold POS tags and induced clusters and plot the normalized confusion matrix of noun subcategories in Figure FIGREF35 . The Gaussian HMM fails to identify “NN” and “NNS” correctly for most cases, and it often recognizes “NNPS” as “NNP”. In contrast, our approach corrects these errors well."
],
[
"For the task of unsupervised dependency parse induction, we employ the Dependency Model with Valence (DMV) BIBREF2 as the syntax model in our approach. DMV is a generative model that defines a probability distribution over dependency parse trees and syntactic categories, generating tokens and dependencies in a head-outward fashion. While, traditionally, DMV is trained using gold POS tags as observed syntactic categories, in our approach, we treat each tag as a latent variable, as described in sec:general-neural.",
"Most existing approaches to this task are not fully unsupervised since they rely on gold POS tags following the original experimental setup for DMV. This is partially because automatically parsing from words is difficult even when using unsupervised syntactic categories BIBREF29 . However, inducing dependencies from words alone represents a more realistic experimental condition since gold POS tags are often unavailable in practice. Previous work that has trained from words alone often requires additional linguistic constraints (like sentence internal boundaries) BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , acoustic cues BIBREF33 , additional training data BIBREF4 , or annotated data from related languages BIBREF34 . Our approach is naturally designed to train on word embeddings directly, thus we attempt to induce dependencies without using gold POS tags or other extra linguistic information.",
"Like previous work we use sections 02-21 of WSJ corpus as training data and evaluate on section 23, we remove punctuations and train the models on sentences of length INLINEFORM0 , “head-percolation” rules BIBREF39 are applied to obtain gold dependencies for evaluation. We train basic DMV, extended DMV (E-DMV) BIBREF35 and Gaussian DMV (which treats POS tag as unknown latent variables and generates observed word embeddings directly conditioned on them following Gaussian distribution) as baselines. Basic DMV and E-DMV are trained with Viterbi EM BIBREF40 on unsupervised POS tags induced from our Markov-structured model described in sec:pos. Multinomial parameters of the syntax model in both Gaussian DMV and our model are initialized with the pre-trained DMV baseline. Other parameters are initialized in the same way as in the POS tagging experiment. The directed dependency accuracy (DDA) is used for evaluation and we report accuracy on sentences of length INLINEFORM1 and all lengths. We train the parser until training data likelihood converges, and report the mean and standard deviation over 20 random restarts.",
"Our model directly observes word embeddings and does not require gold POS tags during training. Thus, results from related work trained on gold tags are not directly comparable. However, to measure how these systems might perform without gold tags, we run three recent state-of-the-art systems in our experimental setting: UR-A E-DMV BIBREF36 , Neural E-DMV BIBREF11 , and CRF Autoencoder (CRFAE) BIBREF37 . We use unsupervised POS tags (induced from our Markov-structured model) in place of gold tags. We also train basic DMV on gold tags and include several state-of-the-art results on gold tags as reference points.",
"As shown in Table TABREF39 , our approach is able to improve over the Gaussian DMV by 4.8 points on length INLINEFORM0 and 4.8 points on all lengths, which suggests the additional latent embedding layer and neural projector are helpful. The proposed approach yields, to the best of our knowledge, state-of-the-art performance without gold POS annotation and without sentence-internal boundary information. DMV, UR-A E-DMV, Neural E-DMV, and CRFAE suffer a large decrease in performance when trained on unsupervised tags – an effect also seen in previous work BIBREF29 , BIBREF34 . Since our approach induces latent POS tags jointly with dependency trees, it may be able to learn POS clusters that are more amenable to grammar induction than the unsupervised tags. We observe that CRFAE underperforms its gold-tag counterpart substantially. This may largely be a result of the model's reliance on prior linguistic rules that become unavailable when gold POS tag types are unknown. Many extensions to DMV can be considered orthogonal to our approach – they essentially focus on improving the syntax model. It is possible that incorporating these more sophisticated syntax models into our approach may lead to further improvements."
],
[
"In the above experiments we initialize the structured syntax components with the pre-trained Gaussian or discrete baseline, which is shown as a useful technique to help train our deep models. We further study the results with fully random initialization. In the POS tagging experiment, we report the results in Table TABREF48 . While the performance with 4 layers is comparable to the pre-trained Gaussian initialization, deeper projections (8 or 16 layers) result in a dramatic drop in performance. This suggests that the structured syntax model with very deep projections is difficult to train from scratch, and a simpler projection might be a good compromise in the random initialization setting.",
"Different from the Markov prior in POS tagging experiments, our parsing model seems to be quite sensitive to the initialization. For example, directed accuracy of our approach on sentences of length INLINEFORM0 is below 40.0 with random initialization. This is consistent with previous work that has noted the importance of careful initialization for DMV-based models such as the commonly used harmonic initializer BIBREF2 . However, it is not straightforward to apply the harmonic initializer for DMV directly in our model without using some kind of pre-training since we do not observe gold POS.",
"We investigate the effect of the choice of pre-trained embedding on performance while using our approach. To this end, we additionally include results using fastText embeddings BIBREF41 – which, in contrast with skip-gram embeddings, include character-level information. We set the context windows size to 1 and the dimension size to 100 as in the skip-gram training, while keeping other parameters set to their defaults. These results are summarized in Table TABREF50 and Table TABREF51 . While fastText embeddings lead to reduced performance with our model, our approach still yields an improvement over the Gaussian baseline with the new observed embeddings space."
],
[
"We perform qualitative analysis to understand how the latent embeddings help induce syntactic structures. First we filter out low-frequency words and punctuations in WSJ, and visualize the rest words (10k) with t-SNE BIBREF42 under different embeddings. We assign each word with its most likely gold POS tags in WSJ and color them according to the gold POS tags.",
"For our Markov-structured model, we have displayed the embedding space in Figure SECREF5 , where the gold POS clusters are well-formed. Further, we present five example target words and their five nearest neighbors in terms of cosine similarity. As shown in Table TABREF53 , the skip-gram embedding captures both semantic and syntactic aspects to some degree, yet our embeddings are able to focus especially on the syntactic aspects of words, in an unsupervised fashion without using any extra morphological information.",
"In Figure FIGREF54 we depict the learned latent embeddings with the DMV-structured syntax model. Unlike the Markov structure, the DMV structure maps a large subset of singular and plural nouns to the same overlapping region. However, two clusters of singular and plural nouns are actually separated. We inspect the two clusters and the overlapping region in Figure FIGREF54 , it turns out that the nouns in the separated clusters are words that can appear as subjects and, therefore, for which verb agreement is important to model. In contrast, the nouns in the overlapping region are typically objects. This demonstrates that the latent embeddings are focusing on aspects of language that are specifically important for modeling dependency without ever having seen examples of dependency parses. Some previous work has deliberately created embeddings to capture different notions of similarity BIBREF43 , BIBREF44 , while they use extra morphology or dependency annotations to guide the embedding learning, our approach provides a potential alternative to create new embeddings that are guided by structured syntax model, only using unlabeled text corpora."
],
[
"Our approach is related to flow-based generative models, which are first described in NICE BIBREF16 and have recently received more attention BIBREF17 , BIBREF19 , BIBREF18 . This relevant work mostly adopts simple (e.g. Gaussian) and fixed priors and does not attempt to learn interpretable latent structures. Another related generative model class is variational auto-encoders (VAEs) BIBREF45 that optimize a lower bound on the marginal data likelihood, and can be extended to learn latent structures BIBREF46 , BIBREF47 . Against the flow-based models, VAEs remove the invertibility constraint but sacrifice the merits of exact inference and exact log likelihood computation, which potentially results in optimization challenges BIBREF48 . Our approach can also be viewed in connection with generative adversarial networks (GANs) BIBREF49 that is a likelihood-free framework to learn implicit generative models. However, it is non-trivial for a gradient-based method like GANs to propagate gradients through discrete structures."
],
[
"In this work, we define a novel generative approach to leverage continuous word representations for unsupervised learning of syntactic structure. Experiments on both POS induction and unsupervised dependency parsing tasks demonstrate the effectiveness of our proposed approach. Future work might explore more sophisticated invertible projections, or recurrent projections that jointly transform the entire input sequence. "
]
],
"section_name": [
"Introduction",
"Model",
"Example: Gaussian HMM",
"Markov Structure with Neural Projector",
"General Structure with Neural Projector",
"Learning & Inference",
"Learning with Invertibility",
"Invertible Volume-Preserving Neural Net",
"Experiments",
"Data",
"General Experimental Setup",
"Unsupervised POS tagging",
"Unsupervised Dependency Parsing without gold POS tags",
"Sensitivity Analysis",
"Qualitative Analysis of Embeddings",
"Related Work",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"89e68f3e625f10fa7cc018ec88fa3b3b37134555",
"bd4e67352125af1957e2680803226ecc80ba23e7"
],
"answer": [
{
"evidence": [
"For both POS tagging and dependency parsing, we run experiments on the Wall Street Journal (WSJ) portion of the Penn Treebank. To create the observed data embeddings, we train skip-gram word embeddings BIBREF7 that are found to capture syntactic properties well when trained with small context window BIBREF8 , BIBREF9 . Following BIBREF9 , the dimensionality INLINEFORM0 is set to 100, and the training context window size is set to 1 to encode more syntactic information. The skip-gram embeddings are trained on the one billion word language modeling benchmark dataset BIBREF21 in addition to the WSJ corpus."
],
"extractive_spans": [
" Wall Street Journal (WSJ) portion of the Penn Treebank"
],
"free_form_answer": "",
"highlighted_evidence": [
"For both POS tagging and dependency parsing, we run experiments on the Wall Street Journal (WSJ) portion of the Penn Treebank."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"e1a33fcce075ed87becbc9c08026e99dbbc63ed4",
"e5b8c208ed94467e94b803c65abf4b111d4d2997"
],
"answer": [
{
"evidence": [
"For both POS tagging and dependency parsing, we run experiments on the Wall Street Journal (WSJ) portion of the Penn Treebank. To create the observed data embeddings, we train skip-gram word embeddings BIBREF7 that are found to capture syntactic properties well when trained with small context window BIBREF8 , BIBREF9 . Following BIBREF9 , the dimensionality INLINEFORM0 is set to 100, and the training context window size is set to 1 to encode more syntactic information. The skip-gram embeddings are trained on the one billion word language modeling benchmark dataset BIBREF21 in addition to the WSJ corpus."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"For both POS tagging and dependency parsing, we run experiments on the Wall Street Journal (WSJ) portion of the Penn Treebank."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"3518f530a919fbe6aac27f0a3f73c908dc2db7bb",
"cd0a156e80802194d58a65040b4014bf72f0e5ba"
],
"answer": [
{
"evidence": [
"In this section, we introduce an invertibility condition for our neural projector to tackle the optimization challenge. Specifically, we constrain our neural projector with two requirements: (1) INLINEFORM0 and (2) INLINEFORM1 exists. Invertible transformations have been explored before in independent components analysis BIBREF14 , gaussianization BIBREF15 , and deep density models BIBREF16 , BIBREF17 , BIBREF18 , for unstructured data. Here, we generalize this style of approach to structured learning, and augment it with discrete latent variables ( INLINEFORM2 ). Under the invertibility condition, we derive a learning algorithm and give another view of our approach revealed by the objective function. Then, we present the architecture of a neural projector we use in experiments: a volume-preserving invertible neural network proposed by BIBREF16 for independent components estimation."
],
"extractive_spans": [],
"free_form_answer": "The neural projector must be invertible.",
"highlighted_evidence": [
"In this section, we introduce an invertibility condition for our neural projector to tackle the optimization challenge. Specifically, we constrain our neural projector with two requirements: (1) INLINEFORM0 and (2) INLINEFORM1 exists. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this section, we introduce an invertibility condition for our neural projector to tackle the optimization challenge. Specifically, we constrain our neural projector with two requirements: (1) INLINEFORM0 and (2) INLINEFORM1 exists. Invertible transformations have been explored before in independent components analysis BIBREF14 , gaussianization BIBREF15 , and deep density models BIBREF16 , BIBREF17 , BIBREF18 , for unstructured data. Here, we generalize this style of approach to structured learning, and augment it with discrete latent variables ( INLINEFORM2 ). Under the invertibility condition, we derive a learning algorithm and give another view of our approach revealed by the objective function. Then, we present the architecture of a neural projector we use in experiments: a volume-preserving invertible neural network proposed by BIBREF16 for independent components estimation."
],
"extractive_spans": [
"we constrain our neural projector with two requirements: (1) INLINEFORM0 and (2) INLINEFORM1 exists"
],
"free_form_answer": "",
"highlighted_evidence": [
"Specifically, we constrain our neural projector with two requirements: (1) INLINEFORM0 and (2) INLINEFORM1 exists."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"What datasets do they evaluate on?",
"Do they evaluate only on English datasets?",
"What is the invertibility condition?"
],
"question_id": [
"6424e442b34a576f904d9649d63acf1e4fdefdfc",
"5eabfc6cc8aa8a99e6e42514ef9584569cb75dec",
"887c6727e9f25ade61b4853a869fe712fe0b703d"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Visualization (t-SNE) of skip-gram embeddings (trained on one billion words with context window size equal to 1) and latent embeddings learned by our approach with a Markov-structured prior. Each node represents a word and is colored according to the most likely gold POS tag from the Penn Treebank (best seen in color).",
"Figure 2: Depiction of proposed generative model. The syntax model is composed of discrete random variables, zi. Each ei is a latent continuous embeddings sampled from Gaussian distribution conditioned on zi, while xi is the observed embedding, deterministically derived from ei. The left portion depicts how the neural projector maps the simple Gaussian to a more complex distribution in the output space. The right portion depicts two instantiations of the syntax model in our approach: one is Markov-structured and the other is DMV-structured. For DMV, ztree is the latent dependency tree structure.",
"Figure 3: Depiction of the architecture of the inverse projection f−1φ that composes multiple volume-preserving coupling layers, with which we parameterize our model. On the right, we schematically depict how the inverse projection transforms the observed word embedding xi to a point ei in a new embedding space.",
"Table 1: Unsupervised POS tagging results on entire WSJ, compared with other baselines and state-of-the-art systems. Standard deviation is given in parentheses when available.",
"Figure 4: Normalized Confusion matrix for POS tagging experiments, row label represents the gold tag.",
"Table 2: Directed dependency accuracy on section 23 of WSJ, evaluating on sentences of length 6 10 and all lengths. Starred entries (∗) denote that the system benefits from additional punctuation-based constraints. Standard deviation is given in parentheses when available.",
"Table 3: Unsupervised POS tagging results of our approach on WSJ, with random initialization of syntax model.",
"Table 4: Unsupervised POS tagging results on WSJ, with fastText vectors as the observed embeddings.",
"Table 5: Directed dependency accuracy on section 23 of WSJ, with fastText vectors as the observed embeddings.",
"Table 6: Target words and their 5 nearest neighbors, based on skip-gram embeddings and our learned latent embeddings with Markov-structured syntax model.",
"Figure 5: Visualization (t-SNE) of learned latent embeddings with DMV-structured syntax model. Each node represents a word and is colored according to the most likely gold POS tag in the Penn Treebank (best seen in color)."
],
"file": [
"1-Figure1-1.png",
"2-Figure2-1.png",
"5-Figure3-1.png",
"6-Table1-1.png",
"6-Figure4-1.png",
"7-Table2-1.png",
"8-Table3-1.png",
"8-Table4-1.png",
"8-Table5-1.png",
"9-Table6-1.png",
"9-Figure5-1.png"
]
} | [
"What is the invertibility condition?"
] | [
[
"1808.09111-Learning & Inference-0"
]
] | [
"The neural projector must be invertible."
] | 23 |
1906.08593 | Conflict as an Inverse of Attention in Sequence Relationship | Attention is a very efficient way to model the relationship between two sequences by comparing how similar two intermediate representations are. Initially demonstrated in NMT, it is a standard in all NLU tasks today when efficient interaction between sequences is considered. However, we show that attention, by virtue of its composition, works best only when it is given that there is a match somewhere between two sequences. It does not very well adapt to cases when there is no similarity between two sequences or if the relationship is contrastive. We propose an Conflict model which is very similar to how attention works but which emphasizes mostly on how well two sequences repel each other and finally empirically show how this method in conjunction with attention can boost the overall performance. | {
"paragraphs": [
[
"Modelling the relationship between sequences is extremely significant in most retrieval or classification problems involving two sequences. Traditionally, in Siamese networks, Hadamard product or concatenation have been used to fuse two vector representations of two input sequences to form a final representation for tasks like semantic similarity, passage retrieval. This representation, subsequently, has been used to compute similarity scores which has been used in a variety of training objectives like margin loss for ranking or cross-entropy error in classification.",
"We have also witnessed word or phrase level similarity to create alignment matrices between two sequences BIBREF0 , BIBREF1 . These alignment matrices has proved to be very useful to model the relationship between two word representations as well fuse the relevant information of one sequence into another. Empirical evidences have shown this alignment procedures have significantly performed better then simple concatenation or element-wise multiplication, especially for long sentences or paragraphs.",
"Attention works on creating neural alignment matrix using learnt weights without pre-computing alignment matrix and using them as features. The main objective of any attentive or alignment process is to look for matching words or phrases between two sequences and assign a high weight to the most similar pairs and vice-versa. The notion of matching or similarity maybe not semantic similarity but based on whatever task we have at hand. For example, for a task that requires capturing semantic similarity between two sequences like \"how rich is tom cruise\" and \"how much wealth does tom cruise have\", an attentive model shall discover the high similarity between \"rich\" and \"wealthy\" and assign a high weight value to the pair. Likewise, for a different task like question answering, a word \"long\" in a question like \"how long does it take to recover from a mild fever\" might be aligned with the phrase \"a week\" from the candidate answer \"it takes almost a week to recover fully from a fever\". Thus, attention significantly aids in better understanding the relevance of a similar user query in a similar measurement task or a candidate answer in a question answering task. The final prediction score is dependent on how well the relationship between two sequences are modeled and established.",
"The general process of matching one sequence with another through attention includes computing the alignment matrix containing weight value between every pair of word representations belonging to both of the sequences. Subsequently, softmax function is applied on all the elements of one of the two dimensions of the matrix to represent the matching probabilities of all the word of a sequence with respect to one particular word in the other sequence.",
"Since attention always looks for matching word representations, it operates under the assumption that there is always a match to be found inside the sequences. We provide a theoretical limitation to it and propose another technique called conflict that looks for contrasting relationship between words in two sequences. We empirically verify that our proposed conflict mechanism combined with attention can outperform the performance of attention working solely."
],
[
"Bahdanau et al. BIBREF2 introduced attention first in neural machine translation. It used a feed-forward network over addition of encoder and decoder states to compute alignment score. Our work is very similar to this except we use element wise difference instead of addition to build our conflict function. BIBREF3 came up with a scaled dot-product attention in their Transformer model which is fast and memory-efficient. Due to the scaling factor, it didn't have the issue of gradients zeroing out. On the other hand, BIBREF4 has experimented with global and local attention based on the how many hidden states the attention function takes into account. Their experiments have revolved around three attention functions - dot, concat and general. Their findings include that dot product works best for global attention. Our work also belongs to the global attention family as we consider all the hidden states of the sequence.",
"Attention has been widely used in pair-classification problems like natural language inference. Wang et al. BIBREF5 introduced BIMPM which matched one sequence with another in four different fashion but one single matching function which they used as cosine. Liu et al. BIBREF6 proposed SAN for language inference which also used dot-product attention between the sequences.",
"Summarizing, attention has helped in achieving state-of-the-art results in NLI and QA. Prior work in attention has been mostly in similarity based approaches while our work focuses on non-matching sequences."
],
[
"Let us consider that we have two sequences INLINEFORM0 and INLINEFORM1 each with M and N words respectively. The objective of attention is two-fold: compute alignment scores (or weight) between every word representation pairs from INLINEFORM2 and INLINEFORM3 and fuse the matching information of INLINEFORM4 with INLINEFORM5 thus computing a new representation of INLINEFORM6 conditioned on INLINEFORM7 .",
"The word representations that attention operates on can be either embeddings like GloVe or hidden states from any recurrent neural network. We denote these representations as u = INLINEFORM0 and v = INLINEFORM1 . We provide a mathematical working of how a general attention mechanism works between two sequences, followed by a explanation in words: DISPLAYFORM0 ",
" Explanation: Both are sequences are non-linearly projected into two different spaces (eqn.1) and each word representation in INLINEFORM0 is matched with that in INLINEFORM1 by computing a dot-product (eqn.2). INLINEFORM2 is a M X N matrix that stores the alignment scores between word INLINEFORM3 and INLINEFORM4 (eqn.2). Since, the scores are not normalized, a softmax function is applied on each row to convert them to probabilities (eqn. 3). Thus, each row contains relative importance of words in INLINEFORM5 to a particular word INLINEFORM6 . Weighted sum of INLINEFORM7 is taken (eqn. 4) and fused with the word representation INLINEFORM8 using concatenation (eqn.5)."
],
[
"Attention operates by using dot product or sometimes addition followed by linear projection to a scalar which models the similarity between two vectors. Subsequently, softmax is applied which gives high probabilities to most matching word representations. This assumes that there is some highly matched word pairs already existing and high scores will be assigned to them. Given a vector INLINEFORM0 =( INLINEFORM1 ,..., INLINEFORM2 ) on which softmax function is applied, each INLINEFORM3 INLINEFORM4 (0, 1). It is observable that the average value of INLINEFORM5 is always INLINEFORM6 . In other words, it is impossible to produce a vector having all INLINEFORM7 < INLINEFORM8 when two sequences have no matching at all.",
"In cases, where one or more word pairs from two different sequences are highly dissimilar, it is impossible to assign a very low probability to it without increasing the probability of some other pair somewhere else since INLINEFORM0 = 1.",
"For example, when we consider two sequences \"height of tom cruise\" and \"age of sun\", while computing the attention weights between the word \"height\" and all the words in the second sequence it can be observed that their no matching word in the latter. In this case, a standard dot-product based attention with softmax won't be able to produce weights which is below 0.33 (=1/3) for all the words in the second sequence with respect to the word \"height\" in the first sequence."
],
[
"We propose a different mechanism that does the opposite of what attention does that is computing how much two sequences repel each other. This works very similar to how attention works but inversely.",
"We demonstrate a general model but we also realize that there can be other variants of it which may be worked out to perform better. Our approach consists of using element wise difference between two vectors followed by a linear transformation to produce a scalar weight. The remaining of the process acts similar to how attention works. Mathematically, we can express it as: DISPLAYFORM0 ",
"where INLINEFORM0 INLINEFORM1 INLINEFORM2 is a parameter that we introduce to provide a weight for the pair. The two word representations INLINEFORM3 and INLINEFORM4 are projected to a space where their element wise difference can be used to model their dissimilarity and softmax applied on them can produce high probability to more dissimilar word pairs.",
"It is good to note that conflict suffers from the same limitation that attention suffers from. This is when a pair of sentences are highly matching especially with multiple associations. But when the two methods work together, each compensates for the other's shortcomings."
],
[
"We used two weighted representations of INLINEFORM0 using weights of attention and conflict as computed in Eqn. (4) and (8) respectively. Our final representation of a word representation INLINEFORM1 conditioned on INLINEFORM2 can be expressed as: DISPLAYFORM0 ",
" where A and C denote that they are from attention and conflict models respectively."
],
[
"Multi-head attention, as introduced in BIBREF3 , computes multiple identical attention mechanism parallelly on multiple linear projections of same inputs. The parameters of each attention and projections are different in each head. Finally, they concatenate all the attentions which is similar to how we concatenate conflict and attention. However, they use dot-product to compute each of the attention.",
"Our combined model that contains both attention and conflict can be thought of as a 2-head attention model but both heads are different. Our conflict head explicitly captures difference between the inputs."
],
[
"We observe how our conflict model learns the dissimilarities between word representations. We achieve that by visualizing the heatmap of the weight matrix INLINEFORM0 for both attention and conflict from eqns. (3) and (8). While attention successfully learns the alignments, conflict matrix also shows that our approach models the contradicting associations like \"animal\" and \"lake\" or \"australia\" and \"world\". These two associations are the unique pairs which are instrumental in determining that the two queries are not similar."
],
[
"We create two models both of which constitutes of three main parts: encoder, interaction and classifier and take two sequences as input. Except interaction, all the other parts are exactly identical between the two models. The encoder is shared among the sequences simply uses two stacked GRU layers. The interaction part consists of only attention for one model while for the another one it consists of attention and conflict combined as shown in (eqn.11) . The classifier part is simply stacked fully-connected layers. Figure 3 shows a block diagram of how our model looks like."
],
[
"The dataset includes pairs of questions labelled as 1 or 0 depending on whether a pair is duplicate or not respectively. This is a popular pair-level classification task on which extensive work has already been done before like BIBREF7 , BIBREF8 . For this task, we make the output layer of our model to predict two probabilities for non-duplicate and duplicate. We sample the data from the original dataset so that it contains equal positive and negative classes. Original dataset has some class imbalance but for sake simplicity we don't consider it. The final data that we use has roughly 400,000 question pairs and we split this data into train and test using 8:2 ratio.",
"We train all our models for roughly 2 epochs with a batch size of 64. We use a hidden dimension of 150 throughout the model. The embedding layer uses ELMO BIBREF9 which has proven to be very useful in various downstream language understanding tasks. Our FC layers consists of four dense layers with INLINEFORM0 activation after each layer. The dropout rate is kept as 0.2 for every recurrent and FC linear layers. We use Adam optimizer in our experiment with epsilon=1e-8, beta=0.9 and learning rate=1e-3."
],
[
"People Also Ask is a feature in Bing search result page where related questions are recommended to the user. User may click on a question to view the answer. Clicking is a positive feedback that shows user's interest in the question. We use this click logs to build a question classifier using the same model in Figure 3. The problem statement is very similar to BIBREF10 where they use logistic regression to predict whether an user would click on ad. Our goal is to classify if a question is potential high-click question or not for a given query. For this, we first create a labelled data set using the click logs where any question having CTR lower than 0.3 is labelled as 0 and a question having CTR more than 0.7 as 1.",
"Our final data resembles that of a pair-level classifier, as in Task 1, where user query and candidate questions are input. With these data set, we train a binary classifier to detect high-click and low-click questions."
],
[
"For both tasks, we compute classification accuracy using three model variants and report the results in Table 1 and Table 2. We observe that model with both attention and conflict combined gives the best results.",
"We also show the training loss curve for both the models having attention and attention combined with conflict respectively. Figure 4 and 5 shows these curves for Task 1 and Task 2 respectively. The curves are smoothed using moving average having an window size of 8. We notice that the conflict model has much steeper slope and converges to a much better minima in both the tasks. It can also be noticed that in the training procedure for the model which has both attention and conflict, the updates are much smoother."
],
[
"We also show qualitative results where we can observe that our model with attention and conflict combined does better on cases where pairs are non-duplicate and has very small difference. We have observed that the conflict model is very sensitive to even minor differences and compensates in such cases where attention poses high bias towards similarities already there in the sequences.",
"Sequence 1: What are the best ways to learn French ?",
"Sequence 2: How do I learn french genders ?",
"Attention only: 1",
"Attention+Conflict: 0",
"Ground Truth: 0",
"Sequence 1: How do I prevent breast cancer ?",
"Sequence 2: Is breast cancer preventable ?",
"Attention only: 1",
"Attention+Conflict: 0",
"Ground Truth: 0",
"We provide two examples with predictions from the models with only attention and combination of attention and conflict. Each example is accompanied by the ground truth in our data."
],
[
"We analyzed the gains in Task 1 which we get from the attention-conflict model in order to ensure that they are not due to randomness in weight initialization or simply additional parameters. We particularly focused on the examples which were incorrectly marked in attention model but correctly in attention-conflict model. We saw that 70% of those cases are the ones where the pair was incorrectly marked as duplicate in the previous model but our combined model correctly marked them as non-duplicate."
],
[
"In this work, we highlighted the limits of attention especially in cases where two sequences have a contradicting relationship based on the task it performs. To alleviate this problem and further improve the performance, we propose a conflict mechanism that tries to capture how two sequences repel each other. This acts like the inverse of attention and, empirically, we show that how conflict and attention together can improve the performance.",
"Future research work should be based on alternative design of conflict mechanism using other difference operators other than element wise difference which we use."
]
],
"section_name": [
"Introduction",
"Related Work",
"How attention works",
"Limits of using only Attention",
"Conflict model",
"Combination of attention and conflict",
"Relation to Multi-Head attention",
"Visualizing attention and conflict",
"The model",
"Task 1: Quora Duplicate Question Pair Detection",
"Task 2: Ranking questions in Bing's People Also Ask",
"Quantitative Analysis",
"Qualitative Comparison",
"Analyzing the gains",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"396dd1fbf7c5f7536df6069f73cfde4ee9d12cfc",
"94873136e21065a4b823aeddae434f9cd32902f0"
],
"answer": [
{
"evidence": [
"We also show qualitative results where we can observe that our model with attention and conflict combined does better on cases where pairs are non-duplicate and has very small difference. We have observed that the conflict model is very sensitive to even minor differences and compensates in such cases where attention poses high bias towards similarities already there in the sequences.",
"Sequence 1: What are the best ways to learn French ?",
"Sequence 2: How do I learn french genders ?",
"Attention only: 1",
"Attention+Conflict: 0",
"Ground Truth: 0",
"Sequence 1: How do I prevent breast cancer ?",
"Sequence 2: Is breast cancer preventable ?",
"We provide two examples with predictions from the models with only attention and combination of attention and conflict. Each example is accompanied by the ground truth in our data."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We have observed that the conflict model is very sensitive to even minor differences and compensates in such cases where attention poses high bias towards similarities already there in the sequences.\n\nSequence 1: What are the best ways to learn French ?\n\nSequence 2: How do I learn french genders ?\n\nAttention only: 1\n\nAttention+Conflict: 0\n\nGround Truth: 0\n\nSequence 1: How do I prevent breast cancer ?\n\nSequence 2: Is breast cancer preventable ?\n\nAttention only: 1\n\nAttention+Conflict: 0\n\nGround Truth: 0\n\nWe provide two examples with predictions from the models with only attention and combination of attention and conflict. Each example is accompanied by the ground truth in our data."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"We also show qualitative results where we can observe that our model with attention and conflict combined does better on cases where pairs are non-duplicate and has very small difference. We have observed that the conflict model is very sensitive to even minor differences and compensates in such cases where attention poses high bias towards similarities already there in the sequences.",
"Sequence 1: What are the best ways to learn French ?",
"Sequence 2: How do I learn french genders ?",
"Attention only: 1",
"Attention+Conflict: 0",
"Ground Truth: 0",
"Sequence 1: How do I prevent breast cancer ?",
"Sequence 2: Is breast cancer preventable ?"
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We have observed that the conflict model is very sensitive to even minor differences and compensates in such cases where attention poses high bias towards similarities already there in the sequences.\n\nSequence 1: What are the best ways to learn French ?\n\nSequence 2: How do I learn french genders ?\n\nAttention only: 1\n\nAttention+Conflict: 0\n\nGround Truth: 0\n\nSequence 1: How do I prevent breast cancer ?\n\nSequence 2: Is breast cancer preventable ?\n\nAttention only: 1\n\nAttention+Conflict: 0\n\nGround Truth: 0"
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"48d4567d3a56f37f291b08a6696091e682386d90",
"55e23be76666d90113d7c01b2bd03dbbbe500ba1"
],
"answer": [
{
"evidence": [
"We create two models both of which constitutes of three main parts: encoder, interaction and classifier and take two sequences as input. Except interaction, all the other parts are exactly identical between the two models. The encoder is shared among the sequences simply uses two stacked GRU layers. The interaction part consists of only attention for one model while for the another one it consists of attention and conflict combined as shown in (eqn.11) . The classifier part is simply stacked fully-connected layers. Figure 3 shows a block diagram of how our model looks like."
],
"extractive_spans": [],
"free_form_answer": "GRU-based encoder, interaction block, and classifier consisting of stacked fully-connected layers.",
"highlighted_evidence": [
"We create two models both of which constitutes of three main parts: encoder, interaction and classifier and take two sequences as input.",
"The encoder is shared among the sequences simply uses two stacked GRU layers. The interaction part consists of only attention for one model while for the another one it consists of attention and conflict combined as shown in (eqn.11) . The classifier part is simply stacked fully-connected layers. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We create two models both of which constitutes of three main parts: encoder, interaction and classifier and take two sequences as input. Except interaction, all the other parts are exactly identical between the two models. The encoder is shared among the sequences simply uses two stacked GRU layers. The interaction part consists of only attention for one model while for the another one it consists of attention and conflict combined as shown in (eqn.11) . The classifier part is simply stacked fully-connected layers. Figure 3 shows a block diagram of how our model looks like."
],
"extractive_spans": [
"two stacked GRU layers",
"attention for one model while for the another one it consists of attention and conflict combined",
"fully-connected layers"
],
"free_form_answer": "",
"highlighted_evidence": [
"We create two models both of which constitutes of three main parts: encoder, interaction and classifier and take two sequences as input. Except interaction, all the other parts are exactly identical between the two models. The encoder is shared among the sequences simply uses two stacked GRU layers. The interaction part consists of only attention for one model while for the another one it consists of attention and conflict combined as shown in (eqn.11) . The classifier part is simply stacked fully-connected layers. Figure 3 shows a block diagram of how our model looks like."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"3589917ad5ba9005fdec4ded444b74b36fec97d8",
"8437cd69a41ac651373861bd2acf40a77abe745e"
],
"answer": [
{
"evidence": [
"Task 1: Quora Duplicate Question Pair Detection",
"Task 2: Ranking questions in Bing's People Also Ask"
],
"extractive_spans": [
"Task 1: Quora Duplicate Question Pair Detection",
"Task 2: Ranking questions"
],
"free_form_answer": "",
"highlighted_evidence": [
"Task 1: Quora Duplicate Question Pair Detection",
"Task 2: Ranking questions in Bing's People Also Ask"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Task 1: Quora Duplicate Question Pair Detection",
"Task 2: Ranking questions in Bing's People Also Ask"
],
"extractive_spans": [
"Quora Duplicate Question Pair Detection",
"Ranking questions in Bing's People Also Ask"
],
"free_form_answer": "",
"highlighted_evidence": [
"Task 1: Quora Duplicate Question Pair Detection",
"Task 2: Ranking questions in Bing's People Also Ask"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Do they show on which examples how conflict works better than attention?",
"Which neural architecture do they use as a base for their attention conflict mechanisms?",
"On which tasks do they test their conflict method?"
],
"question_id": [
"6236762b5631d9e395f81e1ebccc4bf3ab9b24ac",
"31d695ba855d821d3e5cdb7bea638c7dbb7c87c7",
"b14217978ad9c3c9b6b1ce393b1b5c6e7f49ecab"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Attention Heatmaps",
"Figure 2: Conflict Heatmaps",
"Figure 3: Generic Model containing interaction layer. We use attention, conflict or conjunction of attention and conflict as the interaction layer.",
"Figure 5: Training loss curve for Task 2",
"Figure 4: Training loss curve for Task 1",
"Table 1: Result on Quora Dataset",
"Table 2: Result on PAA click data"
],
"file": [
"2-Figure1-1.png",
"2-Figure2-1.png",
"3-Figure3-1.png",
"4-Figure5-1.png",
"4-Figure4-1.png",
"5-Table1-1.png",
"5-Table2-1.png"
]
} | [
"Which neural architecture do they use as a base for their attention conflict mechanisms?"
] | [
[
"1906.08593-The model-0"
]
] | [
"GRU-based encoder, interaction block, and classifier consisting of stacked fully-connected layers."
] | 24 |
1809.00540 | Multilingual Clustering of Streaming News | Clustering news across languages enables efficient media monitoring by aggregating articles from multilingual sources into coherent stories. Doing so in an online setting allows scalable processing of massive news streams. To this end, we describe a novel method for clustering an incoming stream of multilingual documents into monolingual and crosslingual story clusters. Unlike typical clustering approaches that consider a small and known number of labels, we tackle the problem of discovering an ever growing number of cluster labels in an online fashion, using real news datasets in multiple languages. Our method is simple to implement, computationally efficient and produces state-of-the-art results on datasets in German, English and Spanish. | {
"paragraphs": [
[
"Following developing news stories is imperative to making real-time decisions on important political and public safety matters. Given the abundance of media providers and languages, this endeavor is an extremely difficult task. As such, there is a strong demand for automatic clustering of news streams, so that they can be organized into stories or themes for further processing. Performing this task in an online and efficient manner is a challenging problem, not only for newswire, but also for scientific articles, online reviews, forum posts, blogs, and microblogs.",
"A key challenge in handling document streams is that the story clusters must be generated on the fly in an online fashion: this requires handling documents one-by-one as they appear in the document stream. In this paper, we provide a treatment to the problem of online document clustering, i.e. the task of clustering a stream of documents into themes. For example, for news articles, we would want to cluster them into related news stories.",
"To this end, we introduce a system which aggregates news articles into fine-grained story clusters across different languages in a completely online and scalable fashion from a continuous stream. Our clustering approach is part of a larger media monitoring project to solve the problem of monitoring massive text and TV/Radio streams (speech-to-text). In particular, media monitors write intelligence reports about the most relevant events, and being able to search, visualize and explore news clusters assists in gathering more insight about a particular story. Since relevant events may be spawned from any part of the world (and from many multilingual sources), it becomes imperative to cluster news across different languages.",
"In terms of granularity, the type of story clusters we are interested in are the group of articles which, for example : (i) Narrate recent air-strikes in Eastern Ghouta (Syria); (ii) Describe the recent launch of Space X's Falcon Heavy rocket."
],
[
"",
"We focus on clustering of a stream of documents, where the number of clusters is not fixed and learned automatically. We denote by INLINEFORM0 a (potentially infinite) space of multilingual documents. Each document INLINEFORM1 is associated with a language in which it is written through a function INLINEFORM2 where INLINEFORM3 is a set of languages. For example, INLINEFORM4 could return English, Spanish or German. (In the rest of the paper, for an integer INLINEFORM5 , we denote by INLINEFORM6 the set INLINEFORM7 .)",
"We are interested in associating each document with a monolingual cluster via the function INLINEFORM0 , which returns the cluster label given a document. This is done independently for each language, such that the space of indices we use for each language is separate.",
"Furthermore, we interlace the problem of monolingual clustering with crosslingual clustering. This means that as part of our problem formulation we are also interested in a function INLINEFORM0 that associates each monolingual cluster with a crosslingual cluster, such that each crosslingual cluster only groups one monolingual cluster per different language, at a given time. The crosslingual cluster for a document INLINEFORM1 is INLINEFORM2 . As such, a crosslingual cluster groups together monolingual clusters, at most one for each different language.",
"Intuitively, building both monolingual and crosslingual clusters allows the system to leverage high-precision monolingual features (e.g., words, named entities) to cluster documents of the same language, while simplifying the task of crosslingual clustering to the computation of similarity scores across monolingual clusters - which is a smaller problem space, since there are (by definition) less clusters than articles. We validate this choice in § SECREF5 ."
],
[
"Each document INLINEFORM0 is represented by two vectors in INLINEFORM1 and INLINEFORM2 . The first vector exists in a “monolingual space” (of dimensionality INLINEFORM3 ) and is based on a bag-of-words representation of the document. The second vector exists in a “crosslingual space” (of dimensionality INLINEFORM4 ) which is common to all languages. More details about these representations are discussed in § SECREF4 ."
],
[
"In this section, we give more details about the way we construct the document representations in the monolingual and crosslingual spaces. In particular, we introduce the definition of the similarity functions INLINEFORM0 and INLINEFORM1 that were referred in § SECREF3 ."
],
[
"Our similarity metric computes weighted cosine similarity on the different subvectors, both in the case of monolingual clustering and crosslingual clustering. Formally, for the monolingual case, the similarity is given by a function defined as: DISPLAYFORM0 ",
"and is computed on the TF-IDF subvectors where INLINEFORM0 is the number of subvectors for the relevant document representation. For the crosslingual case, we discuss below the function INLINEFORM1 , which has a similar structure.",
"Here, INLINEFORM0 is the INLINEFORM1 th document in the stream and INLINEFORM2 is a monolingual cluster. The function INLINEFORM3 returns the cosine similarity between the document representation of the INLINEFORM4 th document and the centroid for cluster INLINEFORM5 . The vector INLINEFORM6 denotes the weights through which each of the cosine similarity values for each subvectors are weighted, whereas INLINEFORM7 denotes the weights for the timestamp features, as detailed further. Details on learning the weights INLINEFORM8 and INLINEFORM9 are discussed in § SECREF26 .",
"The function INLINEFORM0 that maps a pair of document and cluster to INLINEFORM1 is defined as follows. Let DISPLAYFORM0 ",
"for a given INLINEFORM0 and INLINEFORM1 . For each document INLINEFORM2 and cluster INLINEFORM3 , we generate the following three-dimensional vector INLINEFORM4 :",
" INLINEFORM0 where INLINEFORM1 is the timestamp for document INLINEFORM2 and INLINEFORM3 is the timestamp for the newest document in cluster INLINEFORM4 .",
" INLINEFORM0 where INLINEFORM1 is the average timestamp for all documents in cluster INLINEFORM2 .",
" INLINEFORM0 where INLINEFORM1 is the timestamp for the oldest document in cluster INLINEFORM2 .",
"These three timestamp features model the time aspect of the online stream of news data and help disambiguate clustering decisions, since time is a valuable indicator that a news story has changed, even if a cluster representation has a reasonable match in the textual features with the incoming document. The same way a news story becomes popular and fades over time BIBREF2 , we model the probability of a document belonging to a cluster (in terms of timestamp difference) with a probability distribution.",
"For the case of crosslingual clustering, we introduce INLINEFORM0 , which has a similar definition to INLINEFORM1 , only instead of passing document/cluster similarity feature vectors, we pass cluster/cluster similarities, across all language pairs. Furthermore, the features are the crosslingual embedding vectors of the sections title, body and both combined (similarly to the monolingual case) and the timestamp features. For denoting the cluster timestamp, we use the average timestamps of all articles in it."
],
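The similarity definition above combines cosine similarities of the TF-IDF subvectors, weighted by the beta coefficients, with three time-decayed timestamp features weighted by gamma. The sketch below fills in the elided formulas with a Gaussian decay over the timestamp difference; the decay form and its parameter are assumptions.

```python
# Hypothetical sketch of the document-to-cluster similarity: weighted cosine similarity
# over aligned TF-IDF subvectors plus time-decayed timestamp features.
import numpy as np

def cosine(u, v, eps=1e-9):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + eps))

def time_decay(delta_seconds, sigma=3 * 24 * 3600):
    # Assumed Gaussian decay; the paper leaves the exact distribution and parameters open.
    return float(np.exp(-(delta_seconds ** 2) / (2.0 * sigma ** 2)))

def doc_cluster_similarity(doc_subvecs, doc_ts, cluster_subvecs, cluster_timestamps, beta, gamma):
    """doc_subvecs / cluster_subvecs: lists of aligned TF-IDF subvectors (e.g. title, body, both).
    cluster_timestamps: (newest, average, oldest) document timestamps of the cluster."""
    text_part = sum(b * cosine(d, c) for b, d, c in zip(beta, doc_subvecs, cluster_subvecs))
    ts_feats = [time_decay(doc_ts - t) for t in cluster_timestamps]
    time_part = sum(g * f for g, f in zip(gamma, ts_feats))
    return text_part + time_part
```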
[
"In § SECREF19 we introduced INLINEFORM0 and INLINEFORM1 as the weight vectors for the several document representation features. We experiment with both setting these weights to just 1 ( INLINEFORM2 and INLINEFORM3 ) and also learning these weights using support vector machines (SVMs). To generate the SVM training data, we simulate the execution of the algorithm on a training data partition (which we do not get evaluated on) and in which the gold standard labels are given. We run the algorithm using only the first subvector INLINEFORM4 , which is the TF-IDF vector with the words of the document in the body and title. For each incoming document, we create a collection of positive examples, for the document and the clusters which share at least one document in the gold labeling. We then generate 20 negative examples for the document from the 20 best-matching clusters which are not correct. To find out the best-matching clusters, we rank them according to their similarity to the input document using only the first subvector INLINEFORM5 .",
"Using this scheme we generate a collection of ranking examples (one for each document in the dataset, with the ranking of the best cluster matches), which are then trained using the SVMRank algorithm BIBREF3 . We run 5-fold cross-validation on this data to select the best model, and train both a separate model for each language according to INLINEFORM0 and a crosslingual model according to INLINEFORM1 ."
],
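The following sketch illustrates how the ranking examples described above could be generated: for each incoming document, clusters sharing at least one gold document are positives, and the 20 best-matching remaining clusters (ranked by the first, bag-of-words subvector similarity) are negatives. The (query id, relevance label, feature vector) layout mirrors what SVMRank-style tools consume; the helper functions are assumptions.

```python
# Hypothetical generation of one ranking group of SVM-rank training examples.
def make_ranking_examples(doc_id, doc, clusters, gold_cluster_ids,
                          first_subvec_sim, feature_fn, n_neg=20):
    """clusters: dict cluster_id -> cluster representation.
    first_subvec_sim(doc, cluster): similarity on the bag-of-words subvector only.
    feature_fn(doc, cluster): the full feature vector later consumed by the ranker."""
    ranked = sorted(clusters.items(),
                    key=lambda kv: first_subvec_sim(doc, kv[1]), reverse=True)
    positives = [(cid, c) for cid, c in ranked if cid in gold_cluster_ids]
    negatives = [(cid, c) for cid, c in ranked if cid not in gold_cluster_ids][:n_neg]
    # One ranking group per document (query id = doc_id): positives should outrank negatives.
    return ([(doc_id, 1, feature_fn(doc, c)) for _, c in positives]
            + [(doc_id, 0, feature_fn(doc, c)) for _, c in negatives])
```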
[
"Our system was designed to cluster documents from a (potentially infinite) real-word data stream. The datasets typically used in the literature (TDT, Reuters) have a small number of clusters ( INLINEFORM0 20) with coarse topics (economy, society, etc.), and therefore are not relevant to the use case of media monitoring we treat - as it requires much more fine-grained story clusters about particular events. To evaluate our approach, we adapted a dataset constructed for the different purpose of binary classification of joining cluster pairs. We processed it to become a collection of articles annotated with monolingual and crosslingual cluster labels.",
"Statistics about this dataset are given in Table TABREF30 . As described further, we tune the hyper-parameter INLINEFORM0 on the development set. As for the hyper-parameters related to the timestamp features, we fixed INLINEFORM1 and tuned INLINEFORM2 on the development set, yielding INLINEFORM3 . To compute IDF scores (which are global numbers computed across a corpus), we used a different and much larger dataset that we collected from Deutsche Welle's news website (http://www.dw.com/). The dataset consists of 77,268, 118,045 and 134,243 documents for Spanish, English and German, respectively.",
"The conclusions from our experiments are: (a) the weighting of the similarity metric features using SVM significantly outperforms unsupervised baselines such as CluStream (Table TABREF35 ); (b) the SVM approach significantly helps to learn when to create a new cluster, compared to simple grid search for the optimal INLINEFORM0 (Table TABREF39 ); (c) separating the feature space into one for monolingual clusters in the form of keywords and the other for crosslingual clusters based on crosslingual embeddings significantly helps performance."
],
[
"In our first set of experiments, we report results on monolingual clustering for each language separately. Monolingual clustering of a stream of documents is an important problem that has been inspected by others, such as by ahmed2011unified and by aggarwal2006framework. We compare our results to our own implementation of the online micro-clustering routine presented by aggarwal2006framework, which shall be referred to as CluStream. We note that CluStream of aggarwal2006framework has been a widely used state-of-the-art system in media monitoring companies as well as academia, and serves as a strong baseline to this day.",
"In our preliminary experiments, we also evaluated an online latent semantic analysis method, in which the centroids we keep for the function INLINEFORM0 (see § SECREF3 ) are the average of reduced dimensional vectors of the incoming documents as generated by an incremental singular value decomposition (SVD) of a document-term matrix that is updated after each incoming document. However, we discovered that online LSA performs significantly worse than representing the documents the way is described in § SECREF4 . Furthermore, it was also significantly slower than our algorithm due to the time it took to perform singular value decomposition.",
"Table TABREF35 gives the final monolingual results on the three datasets. For English, we see that the significant improvement we get using our algorithm over the algorithm of aggarwal2006framework is due to an increased recall score. We also note that the trained models surpass the baseline for all languages, and that the timestamp feature (denoted by TS), while not required to beat the baseline, has a very relevant contribution in all cases. Although the results for both the baseline and our models seem to differ across languages, one can verify a consistent improvement from the latter to the former, suggesting that the score differences should be mostly tied to the different difficulty found across the datasets for each language. The presented scores show that our learning framework generalizes well to different languages and enables high quality clustering results.",
"To investigate the impact of the timestamp features, we ran an additional experiment using only the same three timestamp features as used in the best model on the English dataset. This experiment yielded scores of INLINEFORM0 , INLINEFORM1 and INLINEFORM2 , which lead us to conclude that while these features are not competitive when used alone (hence temporal information by itself is not sufficient to predict the clusters), they contribute significantly to recall with the final feature ensemble.",
"We note that as described in § SECREF3 , the optimization of the INLINEFORM0 parameter is part of the development process. The parameter INLINEFORM1 is a similarity threshold used to decide when an incoming document should merge to the best cluster or create a new one. We tune INLINEFORM2 on the development set for each language, and the sensitivity to it is demonstrated in Figure FIGREF36 (this process is further referred to as INLINEFORM3 ). Although applying grid-search on this parameter is the most immediate approach to this problem, we experimented with a different method which yielded superior results: as described further, we discuss how to do this process with an additional classifier (denoted SVM-merge), which captures more information about the incoming documents and the existing clusters.",
"Additionally, we also experimented with computing the monolingual clusters with the same embeddings as used in the crosslingual clustering phase, which yielded poor results. In particular, this system achieved INLINEFORM0 score of INLINEFORM1 for English, which is below the bag-of-words baseline presented in Table TABREF35 . This result supports the approach we then followed of having two separate feature spaces for the monolingual and crosslingual clustering systems, where the monolingual space is discrete and the crosslingual space is based on embeddings.",
"To investigate the importance of each feature, we now consider in Table TABREF37 the accuracy of the SVM ranker for English as described in § SECREF19 . We note that adding features increases the accuracy of the SVM ranker, especially the timestamp features. However, the timestamp feature actually interferes with our optimization of INLINEFORM0 to identify when new clusters are needed, although they improve the SVM reranking accuracy. We speculate this is true because high accuracy in the reranking problem does not necessarily help with identifying when new clusters need to be opened.",
"To investigate this issue, we experimented with a different technique to learn when to create a new cluster. To this end, we trained another SVM classifier just to learn this decision, this time a binary classifier using LIBLINEAR BIBREF4 , by passing the max of the similarity of each feature between the incoming document and the current clustering pool as the input feature vector. This way, the classifier learns when the current clusters, as a whole, are of a different news story than the incoming document. As presented in Table TABREF39 , this method, which we refer to as SVM-merge, solved the issue of searching for the optimal INLINEFORM0 parameter for the SVM-rank model with timestamps, by greatly improving the F INLINEFORM1 score in respect to the original grid-search approach ( INLINEFORM2 )."
],
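A sketch of the SVM-merge decision mentioned above: the input feature vector is the element-wise maximum of the per-feature similarities between the incoming document and every existing cluster, and a linear SVM decides whether to merge or open a new cluster. scikit-learn's LinearSVC (backed by LIBLINEAR) stands in for the authors' LIBLINEAR setup; the helper function is an assumption.

```python
# Hypothetical sketch of the SVM-merge "new cluster or not" decision.
import numpy as np
from sklearn.svm import LinearSVC   # implemented on top of LIBLINEAR

def pool_features(doc, clusters, per_feature_sims):
    """per_feature_sims(doc, cluster) -> 1-D array of per-feature similarities."""
    sims = np.stack([per_feature_sims(doc, c) for c in clusters])
    return sims.max(axis=0)          # max over the current cluster pool, per feature

# Training: X rows built with pool_features on the simulated run, y = 1 when the gold
# labels say the document joins an existing cluster, else 0.
#   clf = LinearSVC(C=1.0).fit(X, y)
# Online decision: merge into the best-ranked cluster if clf.predict([x])[0] == 1,
# otherwise open a new cluster.
```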
[
"As mentioned in § SECREF3 , crosslingual embeddings are used for crosslingual clustering. We experimented with the crosslingual embeddings of gardner2015translation and ammar2016massively. In our preliminary experiments we found that the former worked better for our use-case than the latter.",
"We test two different scenarios for optimizing the similarity threshold INLINEFORM0 for the crosslingual case. Table TABREF41 shows the results for these experiments. First, we consider the simpler case of adjusting a global INLINEFORM1 parameter for the crosslingual distances, as also described for the monolingual case. As shown, this method works poorly, since the INLINEFORM2 grid-search could not find a reasonable INLINEFORM3 which worked well for every possible language pair.",
"Subsequently, we also consider the case of using English as a pivot language (see § SECREF3 ), where distances for every other language are only compared to English, and crosslingual clustering decisions are made only based on this distance. This yielded our best crosslingual score of INLINEFORM0 , confirming that crosslingual similarity is of higher quality between each language and English, for the embeddings we used. This score represents only a small degradation in respect to the monolingual results, since clustering across different languages is a harder problem."
],
[
"Early research efforts, such as the TDT program BIBREF5 , have studied news clustering for some time. The problem of online monolingual clustering algorithms (for English) has also received a fair amount of attention in the literature. One of the earlier papers by aggarwal2006framework introduced a two-step clustering system with both offline and online components, where the online model is based on a streaming implementation of INLINEFORM0 -means and a bag-of-words document representation. Other authors have experimented with distributed representations, such as ahmed2011unified, who cluster news into storylines using Markov chain Monte Carlo methods, rehureklrec who used incremental Singular Value Decomposition (SVD) to find relevant topics from streaming data, and sato2017distributed who used the paragraph vector model BIBREF6 in an offline clustering setting.",
"More recently, crosslingual linking of clusters has been discussed by rupnik2016news in the context of linking existing clusters from the Event Registry BIBREF7 in a batch fashion, and by steinberger2016mediagist who also present a batch clustering linking system. However, these are not “truly” online crosslingual clustering systems since they only decide on the linking of already-built monolingual clusters. In particular, rupnik2016news compute distances of document pairs across clusters using nearest neighbors, which might not scale well in an online setting. As detailed before, we adapted the cluster-linking dataset from rupnik2016news to evaluate our online crosslingual clustering approach. Preliminary work makes use of deep learning techniques BIBREF8 , BIBREF9 to cluster documents while learning their representations, but not in an online or multilingual fashion, and with a very small number of cluster labels (4, in the case of the text benchmark).",
"In our work, we studied the problem of monolingual and crosslingual clustering, having experimented several directions and methods and the impact they have on the final clustering quality. We described the first system which aggregates news articles into fine-grained story clusters across different languages in a completely online and scalable fashion from a continuous stream."
],
[
"We described a method for monolingual and crosslingual clustering of an incoming stream of documents. The method works by maintaining centroids for the monolingual and crosslingual clusters, where a monolingual cluster groups a set of documents and a crosslingual cluster groups a set of monolingual clusters. We presented an online crosslingual clustering method which auto-corrects past decisions in an efficient way. We showed that our method gives state-of-the-art results on a multilingual news article dataset for English, Spanish and German. Finally, we discussed how to leverage different SVM training procedures for ranking and classification to improve monolingual and crosslingual clustering decisions. Our system is integrated in a larger media monitoring project BIBREF10 , BIBREF11 and solving the use-cases of monitors and journalists, having been validated with qualitative user testing."
],
[
"We would like to thank Esma Balkır, Nikos Papasarantopoulos, Afonso Mendes, Shashi Narayan and the anonymous reviewers for their feedback. This project was supported by the European H2020 project SUMMA, grant agreement 688139 (see http://www.summa-project.eu) and by a grant from Bloomberg."
]
],
"section_name": [
"Introduction",
"Problem Formulation",
"The Clustering Algorithm",
"Document Representation",
"Similarity Metrics",
"Learning to Rank Candidates",
"Experiments",
"Monolingual Results",
"Crosslingual Results",
"Related Work",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"399408c0f5af70f8a01bb6cc2595b595cc0e359b",
"cb94433c96f512fd120cab7d848203f1543553da"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Clustering results on the labeled dataset. We compare our algorithm (with and without timestamps) with the online micro-clustering routine of Aggarwal and Yu (2006) (denoted by CluStream). The F1 values are for the precision (P) and recall (R) in the following columns. See Table 3 for a legend of the different models. Best result for each language is in bold."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Clustering results on the labeled dataset. We compare our algorithm (with and without timestamps) with the online micro-clustering routine of Aggarwal and Yu (2006) (denoted by CluStream). The F1 values are for the precision (P) and recall (R) in the following columns. See Table 3 for a legend of the different models. Best result for each language is in bold."
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"8dd741d5006030f9048077e2b61e619815a5607d",
"d511a0afda9bc3644330b11fa95751a82542f6ef"
],
"answer": [
{
"evidence": [
"More recently, crosslingual linking of clusters has been discussed by rupnik2016news in the context of linking existing clusters from the Event Registry BIBREF7 in a batch fashion, and by steinberger2016mediagist who also present a batch clustering linking system. However, these are not “truly” online crosslingual clustering systems since they only decide on the linking of already-built monolingual clusters. In particular, rupnik2016news compute distances of document pairs across clusters using nearest neighbors, which might not scale well in an online setting. As detailed before, we adapted the cluster-linking dataset from rupnik2016news to evaluate our online crosslingual clustering approach. Preliminary work makes use of deep learning techniques BIBREF8 , BIBREF9 to cluster documents while learning their representations, but not in an online or multilingual fashion, and with a very small number of cluster labels (4, in the case of the text benchmark)."
],
"extractive_spans": [
"rupnik2016news"
],
"free_form_answer": "",
"highlighted_evidence": [
"As detailed before, we adapted the cluster-linking dataset from rupnik2016news to evaluate our online crosslingual clustering approach."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"More recently, crosslingual linking of clusters has been discussed by rupnik2016news in the context of linking existing clusters from the Event Registry BIBREF7 in a batch fashion, and by steinberger2016mediagist who also present a batch clustering linking system. However, these are not “truly” online crosslingual clustering systems since they only decide on the linking of already-built monolingual clusters. In particular, rupnik2016news compute distances of document pairs across clusters using nearest neighbors, which might not scale well in an online setting. As detailed before, we adapted the cluster-linking dataset from rupnik2016news to evaluate our online crosslingual clustering approach. Preliminary work makes use of deep learning techniques BIBREF8 , BIBREF9 to cluster documents while learning their representations, but not in an online or multilingual fashion, and with a very small number of cluster labels (4, in the case of the text benchmark).",
"Statistics about this dataset are given in Table TABREF30 . As described further, we tune the hyper-parameter INLINEFORM0 on the development set. As for the hyper-parameters related to the timestamp features, we fixed INLINEFORM1 and tuned INLINEFORM2 on the development set, yielding INLINEFORM3 . To compute IDF scores (which are global numbers computed across a corpus), we used a different and much larger dataset that we collected from Deutsche Welle's news website (http://www.dw.com/). The dataset consists of 77,268, 118,045 and 134,243 documents for Spanish, English and German, respectively."
],
"extractive_spans": [
"rupnik2016news",
"Deutsche Welle's news website"
],
"free_form_answer": "",
"highlighted_evidence": [
"As detailed before, we adapted the cluster-linking dataset from rupnik2016news to evaluate our online crosslingual clustering approach.",
"To compute IDF scores (which are global numbers computed across a corpus), we used a different and much larger dataset that we collected from Deutsche Welle's news website (http://www.dw.com/). "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"35cab42984170fbbadbb4a15bfec414ab606e9dd",
"7de1e10db8051838d308eee4b20aae8890cfa86b"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Clustering results on the labeled dataset. We compare our algorithm (with and without timestamps) with the online micro-clustering routine of Aggarwal and Yu (2006) (denoted by CluStream). The F1 values are for the precision (P) and recall (R) in the following columns. See Table 3 for a legend of the different models. Best result for each language is in bold.",
"FLOAT SELECTED: Table 3: Accuracy of the SVM ranker on the English training set. TOKENS are the word token features, LEMMAS are the lemma features for title and body, ENTS are named entity features and TS are timestamp features. All features are described in detail in §4, and are listed for both the title and the body."
],
"extractive_spans": [],
"free_form_answer": "F1, precision, recall, accuracy",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Clustering results on the labeled dataset. We compare our algorithm (with and without timestamps) with the online micro-clustering routine of Aggarwal and Yu (2006) (denoted by CluStream). The F1 values are for the precision (P) and recall (R) in the following columns. See Table 3 for a legend of the different models. Best result for each language is in bold.",
"FLOAT SELECTED: Table 3: Accuracy of the SVM ranker on the English training set. TOKENS are the word token features, LEMMAS are the lemma features for title and body, ENTS are named entity features and TS are timestamp features. All features are described in detail in §4, and are listed for both the title and the body."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To investigate the importance of each feature, we now consider in Table TABREF37 the accuracy of the SVM ranker for English as described in § SECREF19 . We note that adding features increases the accuracy of the SVM ranker, especially the timestamp features. However, the timestamp feature actually interferes with our optimization of INLINEFORM0 to identify when new clusters are needed, although they improve the SVM reranking accuracy. We speculate this is true because high accuracy in the reranking problem does not necessarily help with identifying when new clusters need to be opened.",
"FLOAT SELECTED: Table 2: Clustering results on the labeled dataset. We compare our algorithm (with and without timestamps) with the online micro-clustering routine of Aggarwal and Yu (2006) (denoted by CluStream). The F1 values are for the precision (P) and recall (R) in the following columns. See Table 3 for a legend of the different models. Best result for each language is in bold.",
"Table TABREF35 gives the final monolingual results on the three datasets. For English, we see that the significant improvement we get using our algorithm over the algorithm of aggarwal2006framework is due to an increased recall score. We also note that the trained models surpass the baseline for all languages, and that the timestamp feature (denoted by TS), while not required to beat the baseline, has a very relevant contribution in all cases. Although the results for both the baseline and our models seem to differ across languages, one can verify a consistent improvement from the latter to the former, suggesting that the score differences should be mostly tied to the different difficulty found across the datasets for each language. The presented scores show that our learning framework generalizes well to different languages and enables high quality clustering results."
],
"extractive_spans": [],
"free_form_answer": "Precision, recall, F1, accuracy",
"highlighted_evidence": [
"To investigate the importance of each feature, we now consider in Table TABREF37 the accuracy of the SVM ranker for English as described in § SECREF19 . ",
"FLOAT SELECTED: Table 2: Clustering results on the labeled dataset. We compare our algorithm (with and without timestamps) with the online micro-clustering routine of Aggarwal and Yu (2006) (denoted by CluStream). The F1 values are for the precision (P) and recall (R) in the following columns. See Table 3 for a legend of the different models. Best result for each language is in bold.",
"Table TABREF35 gives the final monolingual results on the three datasets."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Do they use graphical models?",
"What are the sources of the datasets?",
"What metric is used for evaluation?"
],
"question_id": [
"a99fdd34422f4231442c220c97eafc26c76508dd",
"2c78993524ca62bf1f525b60f2220a374d0e3535",
"d604f5fb114169f75f9a38fab18c1e866c5ac28b"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: A pictorial description of the algorithm and the state it maintains. The algorithm maintains a monolingual cluster space, in which each cluster is a set of documents in a specific language. The algorithm also maintains a crosslingual cluster space, in which a cluster is a set of monolingual clusters in different languages. Documents are denoted by di, monolingual clusters by ci (circles) and crosslingual clusters by ai.",
"Figure 2: Crosslingual “domino-toppling”. aj is the jth crosslingual cluster (out of total N clusters) and Γ1 is the similarity between them as in §4. L(c) is the language for cluster c. M(a, `) returns the monolingual cluster for language ` ∈ L in crosslingual cluster a. See text for details.",
"Table 1: Statistics for the development and evaluation datasets, constructed from the dataset in Rupnik et al. (2016), as explained in §5. “Size” denotes the number of documents in the collection, “Avg. L.” is the average number of words in a document, “C” denotes the number of clusters in the collection and “Avg. S.” is the average number of documents in each cluster.",
"Table 2: Clustering results on the labeled dataset. We compare our algorithm (with and without timestamps) with the online micro-clustering routine of Aggarwal and Yu (2006) (denoted by CluStream). The F1 values are for the precision (P) and recall (R) in the following columns. See Table 3 for a legend of the different models. Best result for each language is in bold.",
"Table 3: Accuracy of the SVM ranker on the English training set. TOKENS are the word token features, LEMMAS are the lemma features for title and body, ENTS are named entity features and TS are timestamp features. All features are described in detail in §4, and are listed for both the title and the body.",
"Figure 3: The F1 score of the different language development sets as a function of the threshold τ . The first point for each language is identified using binary search.",
"Table 4: Comparison of two different cluster decision techniques for the English SVM model with all features (see Table 2). The first method, τsearch, corresponds to executing grid-search to find the optimal clustering τ parameter (see §3). SVM-merge is an alternative method in which we train an SVM binary classifier to decide if a new cluster should be created or not, where we use as features the maximal value of each coordinate for each document in a cluster.",
"Table 5: Crosslingual clustering results when considering two different approaches to compute distances across crosslingual clusters on the test set for Spanish, German and English. See text for details."
],
"file": [
"4-Figure1-1.png",
"5-Figure2-1.png",
"7-Table1-1.png",
"7-Table2-1.png",
"8-Table3-1.png",
"8-Figure3-1.png",
"8-Table4-1.png",
"9-Table5-1.png"
]
} | [
"What metric is used for evaluation?"
] | [
[
"1809.00540-8-Table3-1.png",
"1809.00540-7-Table2-1.png",
"1809.00540-Monolingual Results-6",
"1809.00540-Monolingual Results-2"
]
] | [
"Precision, recall, F1, accuracy"
] | 25 |
2004.03354 | Inexpensive Domain Adaptation of Pretrained Language Models: A Case Study on Biomedical Named Entity Recognition | Domain adaptation of Pretrained Language Models (PTLMs) is typically achieved by pretraining on in-domain text. While successful, this approach is expensive in terms of hardware, runtime and CO_2 emissions. Here, we propose a cheaper alternative: We train Word2Vec on in-domain text and align the resulting word vectors with the input space of a general-domain PTLM (here: BERT). We evaluate on eight biomedical Named Entity Recognition (NER) tasks and compare against the recently proposed BioBERT model (Lee et al., 2020). We cover over 50% of the BioBERT-BERT F1 delta, at 5% of BioBERT's CO_2 footprint and 2% of its cloud compute cost. | {
"paragraphs": [
[
"Pretrained Language Models (PTLMs) such as BERT BIBREF1 have spearheaded advances on many NLP tasks. Usually, PTLMs are pretrained on unlabeled general-domain and/or mixed-domain text, such as Wikipedia, digital books or the Common Crawl corpus.",
"When applying PTLMs to specific domains, it can be useful to domain-adapt them. Domain adaptation of PTLMs has typically been achieved by pretraining on target-domain text. One such model is BioBERT BIBREF2, which was initialized from general-domain BERT and then pretrained on biomedical scientific publications. The domain adaptation is shown to be helpful for target-domain tasks such as biomedical Named Entity Recognition (NER) or Question Answering (QA). On the downside, the computational cost of pretraining can be considerable: BioBERTv1.0 was adapted for ten days on eight large GPUs (see Table TABREF1), which is expensive, environmentally unfriendly, prohibitive for small research labs and students, and may delay prototyping on emerging domains.",
"We therefore propose a fast, CPU-only domain-adaptation method for PTLMs: We train Word2Vec BIBREF3 on target-domain text and align the resulting word vectors with the wordpiece vectors of an existing general-domain PTLM. The PTLM thus gains domain-specific lexical knowledge in the form of additional word vectors, but its deeper layers remain unchanged. Since Word2Vec and the vector space alignment are efficient models, the process requires a fraction of the resources associated with pretraining the PTLM itself, and it can be done on CPU.",
"In Section SECREF4, we use the proposed method to domain-adapt BERT on PubMed+PMC (the data used for BioBERTv1.0) and/or CORD-19 (Covid-19 Open Research Dataset). We improve over general-domain BERT on eight out of eight biomedical NER tasks, using a fraction of the compute cost associated with BioBERT. In Section SECREF5, we show how to quickly adapt an existing Question Answering model to text about the Covid-19 pandemic, without any target-domain Language Model pretraining or finetuning."
],
[
"For our purpose, a PTLM consists of three parts: A tokenizer $\\mathcal {T}_\\mathrm {LM} : \\mathbb {L}^+ \\rightarrow \\mathbb {L}_\\mathrm {LM}^+$, a wordpiece embedding function $\\mathcal {E}_\\mathrm {LM}: \\mathbb {L}_\\mathrm {LM} \\rightarrow \\mathbb {R}^{d_\\mathrm {LM}}$ and an encoder function $\\mathcal {F}_\\mathrm {LM}$. $\\mathbb {L}_\\mathrm {LM}$ is a limited vocabulary of wordpieces. All words that are not in $\\mathbb {L}_\\mathrm {LM}$ are tokenized into sequences of shorter wordpieces, e.g., tachycardia becomes ta ##chy ##card ##ia. Given a sentence $S = [w_1, \\ldots , w_T]$, tokenized as $\\mathcal {T}_\\mathrm {LM}(S) = [\\mathcal {T}_\\mathrm {LM}(w_1); \\ldots ; \\mathcal {T}_\\mathrm {LM}(w_T)]$, $\\mathcal {E}_\\mathrm {LM}$ embeds every wordpiece in $\\mathcal {T}_\\mathrm {LM}(S)$ into a real-valued, trainable wordpiece vector. The wordpiece vectors of the entire sequence are stacked and fed into $\\mathcal {F}_\\mathrm {LM}$. Note that we consider position and segment embeddings to be a part of $\\mathcal {F}_\\mathrm {LM}$ rather than $\\mathcal {E}_\\mathrm {LM}$.",
"In the case of BERT, $\\mathcal {F}_\\mathrm {LM}$ is a Transformer BIBREF4, followed by a final Feed-Forward Net. During pretraining, the Feed-Forward Net predicts the identity of masked wordpieces. When finetuning on a supervised task, it is usually replaced with a randomly initialized task-specific layer."
],
[
"Domain adaptation of PTLMs is typically achieved by pretraining on unlabeled target-domain text. Some examples of such models are BioBERT BIBREF2, which was pretrained on the PubMed and/or PubMed Central (PMC) corpora, SciBERT BIBREF5, which was pretrained on papers from SemanticScholar, ClinicalBERT BIBREF6, BIBREF7 and ClinicalXLNet BIBREF8, which were pretrained on clinical patient notes, and AdaptaBERT BIBREF9, which was pretrained on Early Modern English text. In most cases, a domain-adapted PTLM is initialized from a general-domain PTLM (e.g., standard BERT), though BIBREF5 report better results with a model that was pretrained from scratch with a custom wordpiece vocabulary. In this paper, we focus on BioBERT, as its domain adaptation corpora are publicly available."
],
[
"Word vectors are distributed representations of words that are trained on unlabeled text. Contrary to PTLMs, word vectors are non-contextual, i.e., a word type is always assigned the same vector, regardless of context. In this paper, we use Word2Vec BIBREF3 to train word vectors. We will denote the Word2Vec lookup function as $\\mathcal {E}_\\mathrm {W2V} : \\mathbb {L}_\\mathrm {W2V} \\rightarrow \\mathbb {R}^{d_\\mathrm {W2V}}$."
],
[
"Word vector space alignment has most frequently been explored in the context of cross-lingual word embeddings. For instance, BIBREF10 align English and Spanish Word2Vec spaces by a simple linear transformation. BIBREF11 use a related method to align cross-lingual word vectors and multilingual BERT wordpiece vectors."
],
[
"In the following, we assume access to a general-domain PTLM, as described in Section SECREF2, and a corpus of unlabeled target-domain text."
],
[
"In a first step, we train Word2Vec on the target-domain corpus. In a second step, we take the intersection of $\\mathbb {L}_\\mathrm {LM}$ and $\\mathbb {L}_\\mathrm {W2V}$. In practice, the intersection mostly contains wordpieces from $\\mathbb {L}_\\mathrm {LM}$ that correspond to standalone words. It also contains single characters and other noise, however, we found that filtering them does not improve alignment quality. In a third step, we use the intersection to fit an unconstrained linear transformation $\\mathbf {W} \\in \\mathbb {R}^{d_\\mathrm {LM} \\times d_\\mathrm {W2V}}$ via least squares:",
"Intuitively, $\\mathbf {W}$ makes Word2Vec vectors “look like” the PTLM's native wordpiece vectors, just like cross-lingual alignment makes word vectors from one language “look like” word vectors from another language. In Table TABREF7 (top), we show examples of within-space and cross-space nearest neighbors after alignment."
],
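The alignment step above is an ordinary least-squares fit over the shared vocabulary; a numpy sketch follows (rows are words, so the fitted matrix is the transpose of the W used in the text).

```python
# Least-squares alignment of Word2Vec vectors into the PTLM wordpiece embedding space.
import numpy as np

def fit_alignment(w2v_vectors, lm_vectors, shared_vocab):
    """w2v_vectors / lm_vectors: dicts mapping a (word)piece to its vector."""
    X = np.stack([w2v_vectors[w] for w in shared_vocab])   # (n, d_W2V)
    Y = np.stack([lm_vectors[w] for w in shared_vocab])    # (n, d_LM)
    A, *_ = np.linalg.lstsq(X, Y, rcond=None)              # unconstrained linear map, (d_W2V, d_LM)
    return A

def align(word_vector, A):
    return word_vector @ A     # equals W @ word_vector with W = A.T
```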
[
"Next, we redefine the wordpiece embedding layer of the PTLM. The most radical strategy would be to replace the entire layer with the aligned Word2Vec vectors:",
"In initial experiments, this strategy led to a drop in performance, presumably because function words are not well represented by Word2Vec, and replacing them disrupts BERT's syntactic abilities. To prevent this problem, we leave existing wordpiece vectors intact and only add new ones:"
],
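One way to realise the "keep existing wordpiece vectors, add new ones" step with the HuggingFace transformers API is sketched below. It is an approximation, not the authors' code: `aligned_vectors` is assumed to be the dictionary of aligned Word2Vec vectors from the previous step, and tokens added this way are always tokenized as single pieces, i.e. this realises the updated tokenizer $\hat{\mathcal{T}}_\mathrm{LM}$; the 50/50 mixture with the original tokenizer has to be handled separately.

```python
# Hypothetical sketch: add aligned Word2Vec words to general-domain BERT while keeping
# all original wordpiece vectors intact (HuggingFace transformers).
import torch
from transformers import BertTokenizer, BertModel

def extend_bert_vocabulary(aligned_vectors, model_name="bert-base-cased"):
    """aligned_vectors: dict word -> aligned vector (numpy array of size 768)."""
    tokenizer = BertTokenizer.from_pretrained(model_name)
    model = BertModel.from_pretrained(model_name)
    new_words = [w for w in aligned_vectors if w not in tokenizer.get_vocab()]
    tokenizer.add_tokens(new_words)                   # new words become one-piece tokens
    model.resize_token_embeddings(len(tokenizer))     # enlarges the embedding matrix
    emb = model.get_input_embeddings()                # torch.nn.Embedding
    with torch.no_grad():
        for w in new_words:
            idx = tokenizer.convert_tokens_to_ids(w)
            emb.weight[idx] = torch.as_tensor(aligned_vectors[w], dtype=emb.weight.dtype)
    return tokenizer, model
```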
[
"In a final step, we update the tokenizer to account for the added words. Let $\\mathcal {T}_\\mathrm {LM}$ be the standard BERT tokenizer, and let $\\hat{\\mathcal {T}}_\\mathrm {LM}$ be the tokenizer that treats all words in $\\mathbb {L}_\\mathrm {LM} \\cup \\mathbb {L}_\\mathrm {W2V}$ as one-wordpiece tokens, while tokenizing any other words as usual.",
"In practice, a given word may or may not benefit from being tokenized by $\\hat{\\mathcal {T}}_\\mathrm {LM}$ instead of $\\mathcal {T}_\\mathrm {LM}$. To give a concrete example, 82% of the words in the BC5CDR NER dataset that end in the suffix -ia are inside a disease entity (e.g., tachycardia). $\\mathcal {T}_\\mathrm {LM}$ tokenizes this word as ta ##chy ##card ##ia, thereby exposing the orthographic cue to the model. As a result, $\\mathcal {T}_\\mathrm {LM}$ leads to higher recall on -ia diseases. But there are many cases where wordpiece tokenization is meaningless or misleading. For instance euthymia (not a disease) is tokenized by $\\mathcal {T}_\\mathrm {LM}$ as e ##uth ##ym ##ia, making it likely to be classified as a disease. By contrast, $\\hat{\\mathcal {T}}_\\mathrm {LM}$ gives euthymia a one-wordpiece representation that depends only on distributional semantics. We find that using $\\hat{\\mathcal {T}}_\\mathrm {LM}$ improves precision on -ia diseases.",
"To combine these complementary strengths, we use a 50/50 mixture of $\\mathcal {T}_\\mathrm {LM}$-tokenization and $\\hat{\\mathcal {T}}_\\mathrm {LM}$-tokenization when finetuning the PTLM on a task. At test time, we use both tokenizers and mean-pool the outputs. Let $o(\\mathcal {T}(S))$ be some output of interest (e.g., a logit), given sentence $S$ tokenized by $\\mathcal {T}$. We predict:"
],
[
"In this section, we use the proposed method to create GreenBioBERT, an inexpensive and environmentally friendly alternative to BioBERT. Recall that BioBERTv1.0 (biobert_v1.0_pubmed_pmc) was initialized from general-domain BERT (bert-base-cased) and pretrained on PubMed+PMC."
],
[
"We train Word2Vec with vector size $d_\\mathrm {W2V} = d_\\mathrm {LM} = 768$ on PubMed+PMC (see Appendix for details). Then, we follow the procedure described in Section SECREF3 to update the wordpiece embedding layer and tokenizer of general-domain BERT."
],
[
"We finetune GreenBioBERT on the eight publicly available NER tasks used in BIBREF2. We also do reproduction experiments with general-domain BERT and BioBERTv1.0, using the same setup as our model. We average results over eight random seeds. See Appendix for details on preprocessing, training and hyperparameters."
],
[
"Table TABREF7 (bottom) shows entity-level precision, recall and F1. For ease of visualization, Figure FIGREF13 shows what portion of the BioBERT – BERT F1 delta is covered. We improve over general-domain BERT on all tasks with varying effect sizes. Depending on the points of reference, we cover an average 52% to 60% of the BioBERT – BERT F1 delta (54% for BioBERTv1.0, 60% for BioBERTv1.1 and 52% for our reproduction experiments). Table TABREF17 (top) shows the importance of vector space alignment: If we replace the aligned Word2Vec vectors with their non-aligned counterparts (by setting $\\mathbf {W} = \\mathbf {1}$) or with randomly initialized vectors, F1 drops on all tasks."
],
[
"In this section, we use the proposed method to quickly adapt an existing general-domain QA model to an emerging target domain: Covid-19. Our baseline model is SQuADBERT (bert-large-uncased-whole-word-masking-finetuned-squad), a version of BERT that was finetuned on general-domain SQuAD BIBREF19. We evaluate on Deepset-AI Covid-QA, a SQuAD-style dataset with 1380 questions (see Appendix for details on data and preprocessing). We assume that there is no target-domain finetuning data, which is a realistic setup for a new domain."
],
[
"We train Word2Vec with vector size $d_\\mathrm {W2V} = d_\\mathrm {LM} = 1024$ on CORD-19 (Covid-19 Open Research Dataset) and/or PubMed+PMC. The process takes less than an hour on CORD-19 and about one day on the combined corpus, again without the need for a GPU. Then, we update SQuADBERT's wordpiece embedding layer and tokenizer, as described in Section SECREF3. We refer to the resulting model as GreenCovidSQuADBERT."
],
[
"Table TABREF17 (bottom) shows that GreenCovidSQuADBERT outperforms general-domain SQuADBERT in all metrics. Most of the improvement can be achieved with just the small CORD-19 corpus, which is more specific to the target domain (compare “Cord-19 only” and “Cord-19+PubMed+PMC”)."
],
[
"As a reaction to the trend towards high-resource models, we have proposed an inexpensive, CPU-only method for domain-adapting Pretrained Language Models: We train Word2Vec vectors on target-domain data and align them with the wordpiece vector space of a general-domain PTLM.",
"On eight biomedical NER tasks, we cover over 50% of the BioBERT – BERT F1 delta, at 5% of BioBERT's domain adaptation CO$_2$ footprint and 2% of its cloud compute cost. We have also shown how to rapidly adapt an existing BERT QA model to an emerging domain – the Covid-19 pandemic – without the need for target-domain Language Model pretraining or finetuning.",
"We hope that our approach will benefit practitioners with limited time or resources, and that it will encourage environmentally friendlier NLP."
],
[
"We downloaded the PubMed, PMC and CORD-19 corpora from:",
"https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_bulk/ [20 January 2020, 68GB raw text]",
"https://ftp.ncbi.nlm.nih.gov/pubmed/baseline/ [20 January 2020, 24GB raw text]",
"https://pages.semanticscholar.org/coronavirus-research [17 April 2020, 2GB raw text]",
"We extract all abstracts and text bodies and apply the BERT basic tokenizer (a word tokenizer that standard BERT uses before wordpiece tokenization). Then, we train CBOW Word2Vec with negative sampling. We use default parameters except for the vector size (which we set to $d_\\mathrm {W2V} = d_\\mathrm {LM}$)."
],
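A minimal sketch of this Word2Vec training step with gensim, assuming gensim >= 4 (where the size argument is called vector_size) and assuming the slow transformers BertTokenizer exposes its rule-based pre-tokenizer as .basic_tokenizer; the file layout (one abstract or body paragraph per line) is hypothetical.

```python
from gensim.models import Word2Vec
from transformers import BertTokenizer

# BERT's "basic" word tokenizer (applied before wordpiece splitting).
basic_tok = BertTokenizer.from_pretrained("bert-base-cased").basic_tokenizer

class CorpusSentences:
    """Re-iterable stream of word-tokenized lines; gensim makes several passes."""
    def __init__(self, paths):
        self.paths = paths
    def __iter__(self):
        for path in self.paths:
            with open(path, encoding="utf-8") as f:
                for line in f:
                    tokens = basic_tok.tokenize(line.strip())
                    if tokens:
                        yield tokens

# CBOW (sg=0) with negative sampling is gensim's default setup; only the vector
# size is overridden so that d_W2V matches the PTLM hidden size (768 for BERT-base).
model = Word2Vec(sentences=CorpusSentences(["pubmed.txt", "pmc.txt"]), vector_size=768)
model.wv.save("pubmed_pmc_w2v.kv")
```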
[
"General-domain BERT and BioBERTv1.0 were downloaded from:",
"https://storage.googleapis.com/bert_models/2018_10_18/cased_L-12_H-768_A-12.zip",
"https://github.com/naver/biobert-pretrained"
],
[
"We downloaded the NER datasets by following instructions on https://github.com/dmis-lab/biobert#Datasets. For detailed dataset statistics, see BIBREF2."
],
[
"We cut all sentences into chunks of 30 or fewer whitespace-tokenized words (without splitting inside labeled spans). Then, we tokenize every chunk $S$ with $\\mathcal {T} = \\mathcal {T}_\\mathrm {LM}$ or $\\mathcal {T} = \\hat{\\mathcal {T}}_\\mathrm {LM}$ and add special tokens:",
"Word-initial wordpieces in $\\mathcal {T}(S)$ are labeled as B(egin), I(nside) or O(utside), while non-word-initial wordpieces are labeled as X(ignore)."
],
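The chunking and wordpiece labeling can be sketched as follows. The greedy boundary-adjustment heuristic for avoiding splits inside labeled spans is our own assumption (the paper states only the constraint, not the procedure), and the function names are ours.

```python
def chunk_sentence(words, labels, max_len=30):
    """Split a labeled sentence into chunks of <= max_len whitespace tokens,
    moving a boundary left if it would fall inside a labeled span (I-tag)."""
    chunks, start = [], 0
    while start < len(words):
        end = min(start + max_len, len(words))
        while end < len(words) and labels[end].startswith("I") and end > start + 1:
            end -= 1
        chunks.append((words[start:end], labels[start:end]))
        start = end
    return chunks

def wordpiece_labels(words, labels, tokenize_word):
    """Expand word-level BIO labels to wordpiece level: the word-initial piece keeps
    its B/I/O label, continuation pieces get X. `tokenize_word` maps one word to its
    wordpieces under T_LM or T^_LM (e.g., tokenizer.tokenize)."""
    pieces, piece_labels = ["[CLS]"], ["X"]
    for word, label in zip(words, labels):
        wps = tokenize_word(word) or ["[UNK]"]
        pieces.extend(wps)
        piece_labels.extend([label] + ["X"] * (len(wps) - 1))
    pieces.append("[SEP]")
    piece_labels.append("X")
    return pieces, piece_labels
```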
[
"We follow BIBREF2's implementation (https://github.com/dmis-lab/biobert): We add a randomly initialized softmax classifier on top of the last BERT layer to predict the labels. We finetune the entire model to minimize negative log likelihood, with the standard Adam optimizer BIBREF20 and a linear learning rate scheduler (10% warmup). Like BIBREF2, we finetune on the concatenation of the training and development set. All finetuning runs were done on a GeForce Titan X GPU (12GB).",
"Since we do not have the resources for an extensive hyperparameter search, we use defaults and recommendations from the BioBERT repository: Batch size of 32, peak learning rate of $1 \\cdot 10^{-5}$, and 100 epochs.",
"At inference time, we gather the output logits of word-initial wordpieces only. Since the number of word-initial wordpieces is the same for $\\mathcal {T}_\\mathrm {LM}(S)$ and $\\hat{\\mathcal {T}}_\\mathrm {LM}(S)$, this makes mean-pooling the logits straightforward."
],
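The experiments above follow the BioBERT TensorFlow implementation; the sketch below is our own rough PyTorch/transformers equivalent of the same recipe (softmax token classifier on top of BERT, AdamW, linear schedule with 10% warmup, peak LR 1e-5, 100 epochs). The label count and steps_per_epoch are placeholders that depend on the dataset.

```python
import torch
from transformers import AutoModelForTokenClassification, get_linear_schedule_with_warmup

# Placeholder label inventory: B/I/O plus X for continuation pieces.
model = AutoModelForTokenClassification.from_pretrained("bert-base-cased", num_labels=4)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)        # peak LR 1e-5
num_epochs, steps_per_epoch = 100, 1000                           # steps_per_epoch is data-dependent
total_steps = num_epochs * steps_per_epoch
scheduler = get_linear_schedule_with_warmup(                      # linear schedule, 10% warmup
    optimizer, num_warmup_steps=int(0.1 * total_steps), num_training_steps=total_steps)

def train_step(batch):                                            # batch size 32 in the paper
    model.train()
    out = model(input_ids=batch["input_ids"], attention_mask=batch["attention_mask"],
                labels=batch["labels"])                           # negative log likelihood
    out.loss.backward()
    optimizer.step(); scheduler.step(); optimizer.zero_grad()
    return out.loss.item()

@torch.no_grad()
def word_initial_logits(logits, word_initial_mask):
    """Inference: keep only word-initial wordpiece rows ([num_words, num_labels]).
    Both tokenizers yield the same number of word-initial pieces, so the two
    resulting logit matrices can simply be averaged afterwards."""
    return logits[word_initial_mask.bool()]
```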
[
"We found it easier to reproduce or exceed BIBREF2's results for general-domain BERT, compared to their results for BioBERTv1.0 (see Figure FIGREF13, main paper). While this may be due to hyperparameters, it suggests that BioBERTv1.0 was more strongly tuned than BERT in the original BioBERT paper. This observation does not affect our conclusions, as GreenBioBERT performs better than reproduced BERT as well."
],
[
"We downloaded the SQuADBERT baseline from:",
"https://huggingface.co./bert-large-uncased-whole-word-masking-finetuned-squad"
],
[
"We downloaded the Deepset-AI Covid-QA dataset from:",
"https://github.com/deepset-ai/COVID-QA/blob/master/data/question-answering/200423_covidQA.json [24 April 2020]",
"At the time of writing, the dataset contains 1380 questions and gold answer spans. Every question is associated with one of 98 research papers (contexts). We treat the entire dataset as a test set.",
"Note that there are some important differences between the dataset and SQuAD, which make the task challenging:",
"The contexts are full documents rather than single paragraphs. Thus, the correct answer may appear several times, often with slightly different wordings. Only a single one of the occurrences is annotated as correct, e.g.:",
"What was the prevalence of Coronavirus OC43 in community samples in Ilorin, Nigeria?",
"13.3% (95% CI 6.9-23.6%) # from main text",
"(13.3%, 10/75). # from abstract",
"SQuAD gold answers are defined as the “shortest span in the paragraph that answered the question” BIBREF19, but many Covid-QA gold answers are longer and contain non-essential context, e.g.:",
"When was the Middle East Respiratory Syndrome Coronavirus isolated first?",
"(MERS-CoV) was first isolated in 2012, in a 60-year-old man who died in Jeddah, KSA due to severe acute pneumonia and multiple organ failure",
"2012,"
],
[
"We tokenize every question-context pair $(Q, C)$ with $\\mathcal {T} = \\mathcal {T}_\\mathrm {LM}$ or $\\mathcal {T} = \\hat{\\mathcal {T}}_\\mathrm {LM}$, which yields $(\\mathcal {T}(Q), \\mathcal {T}(C))$. Since $\\mathcal {T}(C)$ is usually too long to be digested in a single forward pass, we define a sliding window with width and stride $N = \\mathrm {floor}(\\frac{509 - |\\mathcal {T}(Q)|}{2})$. At step $n$, the “active” window is between $a^{(l)}_n = nN$ and $a^{(r)}_n = \\mathrm {min}(|C|, nN+N)$. The input is defined as:",
"$p^{(l)}_n$ and $p^{(r)}_n$ are chosen such that $|X^{(n)}| = 512$, and such that the active window is in the center of the input (if possible)."
],
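A sketch of the sliding-window bookkeeping described above. The window enumeration follows the stated formulas (509 = 512 wordpieces minus [CLS] and two [SEP]s); the exact choice of the context padding p_l, p_r is given by an equation not reproduced in this dump, so the symmetric split below is our approximation.

```python
def active_windows(num_context_pieces, num_question_pieces):
    """Enumerate the active windows: stride N = floor((509 - |T(Q)|) / 2),
    window n covering [nN, min(|T(C)|, nN + N))."""
    stride = max(1, (509 - num_question_pieces) // 2)   # guard against very long questions
    windows, n = [], 0
    while n * stride < num_context_pieces:
        windows.append((n * stride, min(num_context_pieces, n * stride + stride)))
        n += 1
    return windows

def pad_window(left, right, num_context_pieces, num_question_pieces):
    """Choose p_l and p_r so the full input approaches 512 wordpieces with the
    active window roughly centered; assumed, not the paper's exact rule."""
    budget = 509 - num_question_pieces - (right - left)
    p_right = min(num_context_pieces - right, budget // 2)
    p_left = min(left, budget - p_right)
    p_right = min(num_context_pieces - right, budget - p_left)   # re-spend leftover budget
    return left - p_left, right + p_right
```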
[
"Feeding $X^{(n)}$ into the pretrained QA model yields start logits $\\mathbf {h^{\\prime }}^{(\\mathrm {start}, n)} \\in \\mathbb {R}^{|X^{(n)}|}$ and end logits $\\mathbf {h^{\\prime }}^{(\\mathrm {end},n)} \\in \\mathbb {R}^{|X^{(n)}|}$. We extract and concatenate the slices that correspond to the active windows of all steps:",
"Next, we map the logits from the wordpiece level to the word level. This allows us to mean-pool the outputs of $\\mathcal {T}_\\mathrm {LM}$ and $\\hat{\\mathcal {T}}_\\mathrm {LM}$ even when $|\\mathcal {T}_\\mathrm {LM}(C)| \\ne |\\hat{\\mathcal {T}}_\\mathrm {LM}(C)|$.",
"Let $c_i$ be a whitespace-delimited word in $C$. Let $\\mathcal {T}(C)_{j:j+|\\mathcal {T}(c_i)|}$ be the corresponding wordpieces. The start and end logits of $c_i$ are derived as:",
"Finally, we return the answer span $C_{k:k^{\\prime }}$ that maximizes $o^{(\\mathrm {start})}_k + o^{(\\mathrm {end})}_{k^{\\prime }}$, subject to the constraints that $k^{\\prime }$ does not precede $k$ and the answer span is not longer than 500 characters."
]
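The wordpiece-to-word mapping and span selection can be sketched as below. The per-word reduction over a word's wordpiece logits is defined by an equation not included in this dump, so the default max here is our assumption; the span search enforces the two stated constraints (end not before start, answer at most 500 characters).

```python
import numpy as np

def word_logits(piece_logits, pieces_per_word, reduce=max):
    """Map wordpiece-level logits to word-level logits; `reduce` (default max)
    stands in for the paper's exact per-word reduction."""
    out, j = [], 0
    for n in pieces_per_word:
        out.append(reduce(piece_logits[j:j + n]))
        j += n
    return np.asarray(out)

def best_span(start_logits, end_logits, words, max_chars=500):
    """Return (k, k') maximizing start_logits[k] + end_logits[k'] with k' >= k and
    the answer text at most max_chars characters long. O(n^2) brute force, fine
    for a sketch."""
    best, best_score = None, -np.inf
    for k in range(len(words)):
        for k2 in range(k, len(words)):
            answer = " ".join(words[k:k2 + 1])
            if len(answer) > max_chars:
                break
            score = start_logits[k] + end_logits[k2]
            if score > best_score:
                best, best_score = (k, k2), score
    return best
```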
],
"section_name": [
"Introduction",
"Related work ::: The BERT PTLM",
"Related work ::: Domain-adapted PTLMs",
"Related work ::: Word vectors",
"Related work ::: Word vector space alignment",
"Method",
"Method ::: Creating new input vectors",
"Method ::: Updating the wordpiece embedding layer",
"Method ::: Updating the tokenizer",
"Experiment 1: Biomedical NER",
"Experiment 1: Biomedical NER ::: Domain adaptation",
"Experiment 1: Biomedical NER ::: Finetuning",
"Experiment 1: Biomedical NER ::: Results and discussion",
"Experiment 2: Covid-19 QA",
"Experiment 2: Covid-19 QA ::: Domain adaptation",
"Experiment 2: Covid-19 QA ::: Results and discussion",
"Conclusion",
"Inexpensive Domain Adaptation of Pretrained Language Models (Appendix) ::: Word2Vec training",
"Inexpensive Domain Adaptation of Pretrained Language Models (Appendix) ::: Experiment 1: Biomedical NER ::: Pretrained models",
"Inexpensive Domain Adaptation of Pretrained Language Models (Appendix) ::: Experiment 1: Biomedical NER ::: Data",
"Inexpensive Domain Adaptation of Pretrained Language Models (Appendix) ::: Experiment 1: Biomedical NER ::: Preprocessing",
"Inexpensive Domain Adaptation of Pretrained Language Models (Appendix) ::: Experiment 1: Biomedical NER ::: Modeling, training and inference",
"Inexpensive Domain Adaptation of Pretrained Language Models (Appendix) ::: Experiment 1: Biomedical NER ::: Note on our reproduction experiments",
"Inexpensive Domain Adaptation of Pretrained Language Models (Appendix) ::: Experiment 2: Covid-19 QA ::: Pretrained model",
"Inexpensive Domain Adaptation of Pretrained Language Models (Appendix) ::: Experiment 2: Covid-19 QA ::: Data",
"Inexpensive Domain Adaptation of Pretrained Language Models (Appendix) ::: Experiment 2: Covid-19 QA ::: Preprocessing",
"Inexpensive Domain Adaptation of Pretrained Language Models (Appendix) ::: Experiment 2: Covid-19 QA ::: Modeling and inference"
]
} | {
"answers": [
{
"annotation_id": [
"873042ce58f5503ec79b3e4a84d84a4f9fd5fd5a",
"f3c94fe977613af30137a843308405a6774256b2"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Top: Examples of within-space and cross-space nearest neighbors (NNs) by cosine similarity in GreenBioBERT’s wordpiece embedding layer. Blue: Original wordpiece space. Green: Aligned Word2Vec space. Bottom: Biomedical NER test set precision / recall / F1 (%) measured with the CoNLL NER scorer. Boldface: Best model in row. Underlined: Best inexpensive model (without target-domain pretraining) in row."
],
"extractive_spans": [],
"free_form_answer": "BC5CDR-disease, NCBI-disease, BC5CDR-chem, BC4CHEMD, BC2GM, JNLPBA, LINNAEUS, Species-800",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Top: Examples of within-space and cross-space nearest neighbors (NNs) by cosine similarity in GreenBioBERT’s wordpiece embedding layer. Blue: Original wordpiece space. Green: Aligned Word2Vec space. Bottom: Biomedical NER test set precision / recall / F1 (%) measured with the CoNLL NER scorer. Boldface: Best model in row. Underlined: Best inexpensive model (without target-domain pretraining) in row."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 2: Top: Examples of within-space and cross-space nearest neighbors (NNs) by cosine similarity in GreenBioBERT’s wordpiece embedding layer. Blue: Original wordpiece space. Green: Aligned Word2Vec space. Bottom: Biomedical NER test set precision / recall / F1 (%) measured with the CoNLL NER scorer. Boldface: Best model in row. Underlined: Best inexpensive model (without target-domain pretraining) in row."
],
"extractive_spans": [],
"free_form_answer": "BC5CDR-disease, NCBI-disease, BC5CDR-chem, BC4CHEMD, BC2GM, JNLPBA, LINNAEUS, Species-800",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Top: Examples of within-space and cross-space nearest neighbors (NNs) by cosine similarity in GreenBioBERT’s wordpiece embedding layer. Blue: Original wordpiece space. Green: Aligned Word2Vec space. Bottom: Biomedical NER test set precision / recall / F1 (%) measured with the CoNLL NER scorer. Boldface: Best model in row. Underlined: Best inexpensive model (without target-domain pretraining) in row."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"35cb21d3ddf17ed91cfb4caf866293cf3f1b98f4",
"f13dc63b177a898f80181adf3fcd40b4707363d0"
],
"answer": [
{
"evidence": [
"We train Word2Vec with vector size $d_\\mathrm {W2V} = d_\\mathrm {LM} = 768$ on PubMed+PMC (see Appendix for details). Then, we follow the procedure described in Section SECREF3 to update the wordpiece embedding layer and tokenizer of general-domain BERT."
],
"extractive_spans": [
"PubMed+PMC"
],
"free_form_answer": "",
"highlighted_evidence": [
"We train Word2Vec with vector size $d_\\mathrm {W2V} = d_\\mathrm {LM} = 768$ on PubMed+PMC (see Appendix for details). Then, we follow the procedure described in Section SECREF3 to update the wordpiece embedding layer and tokenizer of general-domain BERT."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In Section SECREF4, we use the proposed method to domain-adapt BERT on PubMed+PMC (the data used for BioBERTv1.0) and/or CORD-19 (Covid-19 Open Research Dataset). We improve over general-domain BERT on eight out of eight biomedical NER tasks, using a fraction of the compute cost associated with BioBERT. In Section SECREF5, we show how to quickly adapt an existing Question Answering model to text about the Covid-19 pandemic, without any target-domain Language Model pretraining or finetuning."
],
"extractive_spans": [
"PubMed+PMC (the data used for BioBERTv1.0) and/or CORD-19 (Covid-19 Open Research Dataset)"
],
"free_form_answer": "",
"highlighted_evidence": [
"In Section SECREF4, we use the proposed method to domain-adapt BERT on PubMed+PMC (the data used for BioBERTv1.0) and/or CORD-19 (Covid-19 Open Research Dataset)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"",
""
],
"paper_read": [
"no",
"no"
],
"question": [
"Which eight NER tasks did they evaluate on?",
"What in-domain text did they use?"
],
"question_id": [
"1d3e914d0890fc09311a70de0b20974bf7f0c9fe",
"16535db1d73a9373ffe9d6eedaa2369cefd91ac4"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Domain adaptation cost. CO2 emissions are calculated according to Strubell et al. (2019). Since our hardware configuration is not available on Google Cloud, we take an m1-ultramem-40 instance (40 vCPUs, 961GB RAM) to estimate an upper bound on our Google Cloud cost.",
"Table 2: Top: Examples of within-space and cross-space nearest neighbors (NNs) by cosine similarity in GreenBioBERT’s wordpiece embedding layer. Blue: Original wordpiece space. Green: Aligned Word2Vec space. Bottom: Biomedical NER test set precision / recall / F1 (%) measured with the CoNLL NER scorer. Boldface: Best model in row. Underlined: Best inexpensive model (without target-domain pretraining) in row.",
"Figure 1: NER test set F1, transformed as (x − BERT(ref))/ (BioBERTv1.0(ref) − BERT(ref)). A value of 0.5 means that 50% of the reported BioBERTv1.0 – BERT delta is covered. “ref”: Reference from Lee et al. (2020). “repr”: Our reproduction experiments. Error bars: Standard error of the mean.",
"Table 3: Top: NER ablation study. Drop in F1 (w.r.t. GreenBioBERT) when using non-aligned or randomly initialized word vectors instead of aligned word vectors. Bottom: Results on Deepset-AI Covid-QA (%). EM (exact match) and F1 are evaluated with the SQuAD scorer. “substr”: Predictions that are a substring of the gold answer. Substring answers are much more frequent than exact matches because not all gold answers are minimal spans (see Appendix for an example)."
],
"file": [
"2-Table1-1.png",
"3-Table2-1.png",
"4-Figure1-1.png",
"4-Table3-1.png"
]
} | [
"Which eight NER tasks did they evaluate on?"
] | [
[
"2004.03354-3-Table2-1.png"
]
] | [
"BC5CDR-disease, NCBI-disease, BC5CDR-chem, BC4CHEMD, BC2GM, JNLPBA, LINNAEUS, Species-800"
] | 26 |
1912.13337 | What Does My QA Model Know? Devising Controlled Probes using Expert Knowledge | Open-domain question answering (QA) is known to involve several underlying knowledge and reasoning challenges, but are models actually learning such knowledge when trained on benchmark tasks? To investigate this, we introduce several new challenge tasks that probe whether state-of-the-art QA models have general knowledge about word definitions and general taxonomic reasoning, both of which are fundamental to more complex forms of reasoning and are widespread in benchmark datasets. As an alternative to expensive crowd-sourcing, we introduce a methodology for automatically building datasets from various types of expert knowledge (e.g., knowledge graphs and lexical taxonomies), allowing for systematic control over the resulting probes and for a more comprehensive evaluation. We find automatically constructing probes to be vulnerable to annotation artifacts, which we carefully control for. Our evaluation confirms that transformer-based QA models are already predisposed to recognize certain types of structural lexical knowledge. However, it also reveals a more nuanced picture: their performance degrades substantially with even a slight increase in the number of hops in the underlying taxonomic hierarchy, or as more challenging distractor candidate answers are introduced. Further, even when these models succeed at the standard instance-level evaluation, they leave much room for improvement when assessed at the level of clusters of semantically connected probes (e.g., all Isa questions about a concept). | {
"paragraphs": [
[
"Automatically answering questions, especially in the open-domain setting (i.e., where minimal or no contextual knowledge is explicitly provided), requires bringing to bear considerable amount of background knowledge and reasoning abilities. For example, knowing the answers to the two questions in Figure FIGREF1 requires identifying a specific ISA relation (i.e., that cooking is a type of learned behavior) as well as recalling the definition of a concept (i.e., that global warming is defined as a worldwide increase in temperature). In the multiple-choice setting, which is the variety of question-answering (QA) that we focus on in this paper, there is also pragmatic reasoning involved in selecting optimal answer choices (e.g., while greenhouse effect might in some other context be a reasonable answer to the second question in Figure FIGREF1, global warming is a preferable candidate).",
"Recent successes in QA, driven largely by the creation of new resources BIBREF2, BIBREF3, BIBREF4, BIBREF5 and advances in model pre-training BIBREF6, BIBREF7, raise a natural question: do state-of-the-art multiple-choice QA (MCQA) models that excel at standard tasks really have basic knowledge and reasoning skills?",
"Most existing MCQA datasets are constructed through either expensive crowd-sourcing BIBREF8 or hand engineering effort, in the former case making it possible to collect large amounts of data at the cost of losing systematic control over the semantics of the target questions. Hence, doing a controlled experiment to answer such a question for QA is difficult given a lack of targeted challenge datasets.",
"Having definitive empirical evidence of model competence on any given phenomenon requires constructing a wide range of systematic tests. For example, in measuring competence of definitions, not only do we want to see that the model can handle individual questions such as Figure FIGREF1.1 inside of benchmark tasks, but that it can answer a wider range of questions that exhaustively cover a broad set of concepts and question perturbations (i.e., systematic adjustments to how the questions are constructed). The same applies to ISA reasoning; not only is it important to recognize in the question in Figure FIGREF1.1 that cooking is a learned behavior, but also that cooking is a general type of behavior or, through a few more inferential steps, a type of human activity.",
"In this paper, we look at systematically constructing such tests by exploiting the vast amounts of structured information contained in various types of expert knowledge such as knowledge graphs and lexical taxonomies. Our general methodology works as illustrated in Figure FIGREF1: given any MCQA model trained on a set of benchmark tasks, we systematically generate a set of synthetic dataset probes (i.e., MCQA renderings of the target information) from information in expert knowledge sources. We then use these probes to ask two empirical questions: 1) how well do models trained on benchmark tasks perform on these probing tasks and; 2) can such models be re-trained to master new challenges with minimal performance loss on their original tasks?",
"While our methodology is amenable to any knowledge source and set of models/benchmark tasks, we focus on probing state-of-the-art transformer models BIBREF7, BIBREF9 in the domain of science MCQA. For sources of expert knowledge, we use WordNet, a comprehensive lexical ontology, and other publicly available dictionary resources. We devise probes that measure model competence in definition and taxonomic knowledge in different settings (including hypernymy, hyponymy, and synonymy detection, and word sense disambiguation). This choice is motivated by fact that the science domain is considered particularly challenging for QA BIBREF10, BIBREF11, BIBREF12, and existing science benchmarks are known to involve widespread use of such knowledge (see BIBREF1, BIBREF13 for analysis), which is also arguably fundamental to more complex forms of reasoning.",
"We show that accurately probing QA models via synthetic datasets is not straightforward, as unexpected artifacts can easily arise in such data. This motivates our carefully constructed baselines and close data inspection to ensure probe quality.",
"Our results confirm that transformer-based QA models have a remarkable ability to recognize certain types of knowledge captured in our probes—even without additional fine-tuning. Such models can even outperform strong task-specific models trained directly on our probing tasks (e.g., on definitions, our best model achieves 77% test accuracy without specialized training, as opposed to 51% for a task-specific LSTM-based model). We also show that the same models can be effectively re-fine-tuned on small samples (even 100 examples) of probe data, and that high performance on the probes tends to correlate with a smaller drop in the model's performance on the original QA task.",
"Our comprehensive assessment reveals several interesting nuances to the overall positive trend. For example, the performance of even the best QA models degrades substantially on our hyponym probes (by 8-15%) when going from 1-hop links to 2-hops. Further, the accuracy of even our best models on the WordNetQA probe drops by 14-44% under our cluster-based analysis, which assesses whether a model knows several facts about each individual concept, rather than just being good at answering isolated questions. State-of-the-art QA models thus have much room to improve even in some fundamental building blocks, namely definitions and taxonomic hierarchies, of more complex forms of reasoning."
],
[
"We follow recent work on constructing challenge datasets for probing neural models, which has primarily focused on the task of natural language inference (NLI) BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18. Most of this work looks at constructing data through adversarial generation methods, which have also been found useful for creating stronger models BIBREF19. There has also been work on using synthetic data of the type we consider in this paper BIBREF20, BIBREF21, BIBREF22. We closely follow the methodology of BIBREF22, who use hand-constructed linguistic fragments to probe NLI models and study model re-training using a variant of the inoculation by fine-tuning strategy of BIBREF23. In contrast, we focus on probing open-domain MCQA models (see BIBREF24 for a related study in the reading comprehension setting) as well as constructing data from much larger sources of structured knowledge.",
"Our main study focuses on probing the BERT model and fine-tuning approach of BIBREF7, and other variants thereof, which are all based on the transformer architecture of BIBREF25. Related to our efforts, there have been recent studies into the types of relational knowledge contained in large-scale knowledge models BIBREF26, BIBREF27, BIBREF28, which, similar to our work, probe models using structured knowledge sources. This prior work, however, primarily focuses on unearthing the knowledge contained in the underlying language models as is without further training, using simple (single token) cloze-style probing tasks and templates (similar to what we propose in Section SECREF3). In contrast, we focus on understanding the knowledge contained in language models after they have been trained for a QA end-task using benchmark datasets in which such knowledge is expected to be widespread. Further, our evaluation is done before and after these models are fine-tuned on our probe QA tasks, using a more complex set of QA templates and target inferences.",
"The use of lexical resources and knowledge graphs such as WordNet to construct datasets has a long history, and has recently appeared in work on adversarial attacks BIBREF14, BIBREF29 and general task construction BIBREF30, BIBREF31. In the area of MCQA, there is related work on constructing questions from tuples BIBREF32, BIBREF3, both of which involve standard crowd annotation to elicit question-answer pairs (see also BIBREF33, BIBREF34). In contrast to this work, we focus on generating data in an entirely automatic fashion, which obviates the need for expensive annotation and gives us the flexibility to construct much larger datasets that control a rich set of semantic aspects of the target questions."
],
[
"Our probing methodology starts by constructing challenge datasets (Figure FIGREF1, yellow box) from a target set of knowledge resources. Each of our probing datasets consists of multiple-choice questions that include a question $\\textbf {q}$ and a set of answer choices or candidates $\\lbrace a_{1},...a_{N}\\rbrace $. This section describes in detail the 5 different datasets we build, which are drawn from two sources of expert knowledge, namely WordNet BIBREF35 and the GNU Collaborative International Dictionary of English (GCIDE). We describe each resource in turn, and explain how the resulting dataset probes, which we call WordNetQA and DictionaryQA, are constructed.",
"For convenience, we will describe each source of expert knowledge as a directed, edge-labeled graph $G$. The nodes of this graph are $\\mathcal {V} = \\mathcal {C} \\cup \\mathcal {W} \\cup \\mathcal {S} \\cup \\mathcal {D}$, where $\\mathcal {C}$ is a set of atomic concepts, $\\mathcal {W}$ a set of words, $\\mathcal {S}$ a set of sentences, and $\\mathcal {D}$ a set of definitions (see Table TABREF4 for details for WordNet and GCIDE). Each edge of $G$ is directed from an atomic concept in $\\mathcal {C}$ to another node in $V$, and is labeled with a relation, such as hypernym or isa$^\\uparrow $, from a set of relations $\\mathcal {R}$ (see Table TABREF4).",
"When defining our probe question templates, it will be useful to view $G$ as a set of (relation, source, target) triples $\\mathcal {T} \\subseteq \\mathcal {R} \\times \\mathcal {C} \\times \\mathcal {V}$. Due to their origin in an expert knowledge source, such triples preserve semantic consistency. For instance, when the relation in a triple is def, the corresponding edge maps a concept in $\\mathcal {C}$ to a definition in $\\mathcal {D}$.",
"To construct probe datasets, we rely on two heuristic functions, defined below for each individual probe: $\\textsc {gen}_{\\mathcal {Q}}(\\tau )$, which generates gold question-answer pairs $(\\textbf {q},\\textbf {a})$ from a set of triples $\\tau \\subseteq \\mathcal {T}$ and question templates $\\mathcal {Q}$, and $\\textsc {distr}(\\tau ^{\\prime })$, which generates distractor answers choices $\\lbrace a^{\\prime }_{1},...a^{\\prime }_{N-1} \\rbrace $ based on another set of triples $\\tau ^{\\prime }$ (where usually $\\tau \\subset \\tau ^{\\prime }$). For brevity, we will use $\\textsc {gen}(\\tau )$ to denote $\\textsc {gen}_{\\mathcal {Q}}(\\tau )$, leaving question templates $\\mathcal {Q}$ implicit."
],
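Purely as an illustration of the gen/distr interface over (relation, source, target) triples, here is a toy sketch. The single template and the random distractor policy are stand-ins of ours: the paper's actual templates are those in its Table TABREF8, and its distractor strategy (drawing from nearby WordNet regions) is described below.

```python
import random

# A triple is (relation, source_concept, target_node), e.g. ("def", "trousers.n.01", "<gloss>").
DEF_TEMPLATE = 'In the sentence "{example}", the word "{lemma}" is best defined as:'

def gen(triples, lemmas, examples):
    """Render gold (question, answer) pairs from `def` triples with one toy template.
    `lemmas` and `examples` map a concept to a representative word and example sentence."""
    qa = []
    for rel, concept, gloss in triples:
        if rel != "def":
            continue
        q = DEF_TEMPLATE.format(example=examples[concept], lemma=lemmas[concept])
        qa.append((concept, q, gloss))
    return qa

def distr(concept, all_glosses, gold, k=4, rng=random):
    """Sample k distractor glosses for a concept; `all_glosses` is an iterable of
    (concept, gloss) pairs. The paper instead draws distractors from semantically
    close synsets (hypernyms, hyponyms, sisters)."""
    pool = [g for c, g in all_glosses if c != concept and g != gold]
    return rng.sample(pool, k)
```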
[
"WordNet is an English lexical database consisting of around 117k concepts, which are organized into groups of synsets that each contain a gloss (i.e., a definition of the target concept), a set of representative English words (called lemmas), and, in around 33k synsets, example sentences. In addition, many synsets have ISA links to other synsets that express complex taxonomic relations. Figure FIGREF6 shows an example and Table TABREF4 summarizes how we formulate WordNet as a set of triples $\\mathcal {T}$ of various types. These triples together represent a directed, edge-labeled graph $G$. Our main motivation for using WordNet, as opposed to a resource such as ConceptNet BIBREF36, is the availability of glosses ($\\mathcal {D}$) and example sentences ($\\mathcal {S}$), which allows us to construct natural language questions that contextualize the types of concepts we want to probe."
],
[
"We build 4 individual datasets based on semantic relations native to WordNet (see BIBREF37): hypernymy (i.e., generalization or ISA reasoning up a taxonomy, ISA$^\\uparrow $), hyponymy (ISA$^{\\downarrow }$), synonymy, and definitions. To generate a set of questions in each case, we employ a number of rule templates $\\mathcal {Q}$ that operate over tuples. A subset of such templates is shown in Table TABREF8. The templates were designed to mimic naturalistic questions we observed in our science benchmarks.",
"For example, suppose we wish to create a question $\\textbf {q}$ about the definition of a target concept $c \\in \\mathcal {C}$. We first select a question template from $\\mathcal {Q}$ that first introduces the concept $c$ and its lemma $l \\in \\mathcal {W}$ in context using the example sentence $s \\in \\mathcal {S}$, and then asks to identify the corresponding WordNet gloss $d \\in \\mathcal {D}$, which serves as the gold answer $\\textbf {a}$. The same is done for ISA reasoning; each question about a hypernym/hyponym relation between two concepts $c \\rightarrow ^{\\uparrow /\\downarrow } c^{\\prime } \\in \\mathcal {T}_{i}$ (e.g., $\\texttt {dog} \\rightarrow ^{\\uparrow /\\downarrow } \\texttt {animal/terrier}$) first introduces a context for $c$ and then asks for an answer that identifies $c^{\\prime }$ (which is also provided with a gloss so as to contain all available context).",
"In the latter case, the rules $(\\texttt {isa}^{r},c,c^{\\prime }) \\in \\mathcal {T}_i$ in Table TABREF8 cover only direct ISA links from $c$ in direction $r \\in \\lbrace \\uparrow ,\\downarrow \\rbrace $. In practice, for each $c$ and direction $r$, we construct tests that cover the set HOPS$(c,r)$ of all direct as well as derived ISA relations of $c$:",
"This allows us to evaluate the extent to which models are able to handle complex forms of reasoning that require several inferential steps or hops."
],
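HOPS$(c,r)$ can be computed as a breadth-first transitive closure over WordNet's ISA links, for example with NLTK (this assumes the NLTK wordnet corpus has been downloaded); tracking the hop count k is what enables the per-hop evaluation reported later.

```python
from nltk.corpus import wordnet as wn

def hops(synset, direction):
    """All direct and derived ISA relations reachable from `synset`: the transitive
    closure of hypernym ("up") or hyponym ("down") links, tagged with hop distance."""
    rel = (lambda s: s.hypernyms()) if direction == "up" else (lambda s: s.hyponyms())
    seen, frontier, k, result = {synset}, [synset], 0, []
    while frontier:
        k += 1
        nxt = []
        for s in frontier:
            for t in rel(s):
                if t not in seen:
                    seen.add(t)
                    result.append((synset, t, k))   # (source, target, number of hops)
                    nxt.append(t)
        frontier = nxt
    return result

# e.g. hops(wn.synset("dog.n.01"), "up") contains (dog.n.01, animal.n.01, k) for some k.
```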
[
"An example of how distractors are generated is shown in Figure FIGREF6, which relies on similar principles as above. For each concept $c$, we choose 4 distractor answers that are close in the WordNet semantic space. For example, when constructing hypernymy tests for $c$ from the set hops$(c,\\uparrow )$, we build distractors by drawing from $\\textsc {hops}(c,\\downarrow )$ (and vice versa), as well as from the $\\ell $-deep sister family of $c$, defined as follows. The 1-deep sister family is simply $c$'s siblings or sisters, i.e., the other children $\\tilde{c} \\ne c$ of the parent node $c^{\\prime }$ of $c$. For $\\ell > 1$, the $\\ell $-deep sister family also includes all descendants of each $\\tilde{c}$ up to $\\ell -1$ levels deep, denoted $\\textsc {hops}_{\\ell -1}(\\tilde{c},\\downarrow )$. Formally:",
"For definitions and synonyms we build distractors from all of these sets (with a similar restriction on the depth of sister distractors as noted above). In doing this, we can systematically investigate model performance on a wide range of distractor sets."
],
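A sketch of the $\ell$-deep sister family, again with NLTK WordNet. Note that a synset can have several hypernym parents, so siblings are collected across all of them; duplicates are left in for simplicity.

```python
from nltk.corpus import wordnet as wn

def descendants(synset, depth):
    """Hyponyms of `synset` up to `depth` levels deep."""
    result, frontier = [], [synset]
    for _ in range(depth):
        frontier = [h for s in frontier for h in s.hyponyms()]
        result.extend(frontier)
    return result

def sister_family(synset, ell):
    """The l-deep sister family: the siblings of `synset` (other children of its
    hypernym parents) plus, for l > 1, each sibling's descendants up to l-1 levels."""
    sisters = []
    for parent in synset.hypernyms():
        sisters.extend(s for s in parent.hyponyms() if s != synset)
    family = list(sisters)
    if ell > 1:
        for s in sisters:
            family.extend(descendants(s, ell - 1))
    return family

# e.g. sister_family(wn.synset("dog.n.01"), 2) yields dog's sister synsets and their children.
```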
[
"Based on how we generate data, for each concept $c$ (i.e., atomic WordNet synset) and probe type (i.e., definitions, hypernymy, etc.), we have a wide variety of questions related to $c$ that manipulate 1) the complexity of reasoning that is involved (e.g., the number of inferential hops) and; 2) the types of distractors (or distractor perturbations) that are employed. We call such sets semantic clusters. As we describe in the next section, semantic clusters allow us to devise new types of evaluation that reveal whether models have comprehensive and consistent knowledge of target concepts (e.g., evaluating whether a model can correctly answer several questions associated with a concept, as opposed to a few disjoint instances).",
"Details of the individual datasets are shown in Table TABREF12. From these sets, we follow BIBREF22 in allocating a maximum of 3k examples for training and reserve the rest for development and testing. Since we are interested in probing, having large held-out sets allows us to do detailed analysis and cluster-based evaluation."
],
[
"The DictionaryQA dataset is created from the GCIDE dictionary, which is a comprehensive open-source English dictionary built largely from the Webster's Revised Unabridged Dictionary BIBREF38. Each entry consists of a word, its part-of-speech, its definition, and an optional example sentence (see Table TABREF14). Overall, 33k entries (out of a total of 155k) contain example sentences/usages. As with the WordNet probes, we focus on this subset so as to contextualize each word being probed. In contrast to WordNet, GCIDE does not have ISA relations or explicit synsets, so we take each unique entry to be a distinct sense. We then use the dictionary entries to create a probe that centers around word-sense disambiguation, as described below."
],
[
"To generate gold questions and answers, we use the same generation templates for definitions exemplified in Figure TABREF8 for WordNetQA. To generate distractors, we simply take alternative definitions for the target words that represent a different word sense (e.g., the alternative definitions of gift shown in Table TABREF14), as well as randomly chosen definitions if needed to create a 5-way multiple choice question. As above, we reserve a maximum of 3k examples for training. Since we have only 9k examples in total in this dataset (see WordSense in Table TABREF12), we also reserve 3k each for development and testing.",
"We note that initial attempts to build this dataset through standard random splitting gave rise to certain systematic biases that were exploited by the choice-only baseline models described in the next section, and hence inflated overall model scores. After several efforts at filtering we found that, among other factors, using definitions from entries without example sentences as distractors (e.g., the first two entries in Table TABREF14) had a surprising correlation with such biases. This suggests that possible biases involving differences between dictionary entries with and without examples can taint the resulting automatically generated MCQA dataset (for more discussion on the pitfalls involved with automatic dataset construction, see Section SECREF5)."
],
[
"Given the probes above, we now can start to answer the empirical questions posed at the beginning. Our main focus is on looking at transformer-based MCQA models trained in the science domain (using the benchmarks shown in Table TABREF21). In this section, we provide details of MCQA and the target models, as well as several baselines that we use to sanity check our new datasets. To evaluate model competence, we look at a combination of model performance after science pre-training and after additional model fine-tuning using the lossless inoculation strategy of BIBREF22 (Section SECREF22). In Section SECREF24, we also discuss a cluster-level accuracy metric for measuring performance over semantic clusters."
],
[
"Given a dataset $D =\\lbrace (\\textbf {q}^{(d)}, \\lbrace a_{1}^{(d)},..., a_{N}^{(d)}\\rbrace ) \\rbrace _{d}^{\\mid D \\mid }$ consisting of pairs of questions stems $\\textbf {q}$ and answer choices $a_{i}$, the goal is to find the correct answer $a_{i^{*}}$ that correctly answers each $\\textbf {q}$. Throughout this paper, we look at 5-way multiple-choice problems (i.e., where each $N=5$)."
],
[
"To model this, our investigation centers around the use of the transformer-based BIBREF25 BERT encoder and fine-tuning approach of BIBREF7 (see also BIBREF6). For each question and individual answer pair $q^{(j)}_{a_{i}}$, we assume the following rendering of this input:",
"which is run through the pre-trained BERT encoder to generate a representation for $ q^{(j)}_{a_{i}}$ using the hidden state representation for CLS (i.e., the classifier token) $\\textbf {c}_{i}$:",
"The probability of a given answer $p^{(j)}_{i}$ is then computed as $p^{(j)}_{i} \\propto e^{\\textbf {v}\\cdot \\textbf {c}^{(j)}_{i}}$, which uses an additional set of classification parameters $\\textbf {v} \\in \\mathbb {R}^{H}$ that are optimized (along with the full transformer network) by taking the final loss of the probability of each correct answer $p_{i^{*}}$ over all answer choices:",
"We specifically use BERT-large uncased with whole-word masking, as well as the RoBERTa-large model from BIBREF9, which is a more robustly trained version of the original BERT model. Our system uses the implementations provided in AllenNLP BIBREF39 and Huggingface BIBREF40."
],
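A hedged sketch of this encoder and scoring scheme with HuggingFace transformers: each question-choice pair is encoded jointly, the [CLS] vector c_i is scored with a learned vector v, and a softmax over the five choices gives p_i. This mirrors the description above rather than reproducing the paper's exact AllenNLP/Huggingface configuration; class and variable names are ours.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class MultipleChoiceScorer(nn.Module):
    """[CLS]-based scorer over (question, choice) pairs, in the spirit of the
    *ForMultipleChoice heads shipped with transformers."""
    def __init__(self, name="bert-large-uncased-whole-word-masking"):
        super().__init__()
        self.encoder = BertModel.from_pretrained(name)
        self.v = nn.Linear(self.encoder.config.hidden_size, 1, bias=False)

    def forward(self, input_ids, attention_mask, token_type_ids):
        # input_ids: [num_choices, seq_len] for a single question
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask,
                           token_type_ids=token_type_ids)
        cls = out.last_hidden_state[:, 0]          # c_i for every answer choice
        return self.v(cls).squeeze(-1)             # one logit per answer choice

tok = BertTokenizer.from_pretrained("bert-large-uncased-whole-word-masking")
q = "Global warming is defined as"
choices = ["a worldwide increase in temperature", "the greenhouse effect",
           "carbon dioxide", "a type of cloud", "an ocean current"]
enc = tok([q] * len(choices), choices, return_tensors="pt", padding=True)
model = MultipleChoiceScorer()
logits = model(**enc)                              # shape [5]
loss = nn.functional.cross_entropy(logits.unsqueeze(0), torch.tensor([0]))  # gold index 0
```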
[
"When creating synthetic datasets, it is important to ensure that systematic biases, or annotation artifacts BIBREF41, are not introduced into the resulting probes and that the target datasets are sufficiently challenging (or good, in the sense of BIBREF42). To test for this, we use several of the MCQA baseline models first introduced in BIBREF0, which take inspiration from the LSTM-based models used in BIBREF43 for NLI and various partial-input baselines based on these models.",
"Following the notation from BIBREF0, for any given sequence $s$ of tokens in $\\lbrace q^{(j)}, a_{1}^{(j)},...,a_{N}^{(j)}\\rbrace $ in $D$, an encoding of $s$ is given as $h_{s}^{(j)} = \\textbf {BiLSTM}(\\textsc {embed}(s)) \\in \\mathbb {R}^{|s| \\times 2h}$ (where $h$ is the dimension of the hidden state in each directional network, and embed$(\\cdot )$ is an embedding function that assigns token-level embeddings to each token in $s$). A contextual representation for each $s$ is then built by applying an element-wise max operation over $h_{s}$ as follows:",
"With these contextual representations, different baseline models can be constructed. For example, a Choice-Only model, which is a variant of the well-known hypothesis-only baseline used in NLI BIBREF46, scores each choice $c_{i}$ in the following way:",
"for $\\textbf {W}^{T} \\in \\mathbb {R}^{2h}$ independently of the question and assigns a probability to each answer $p_{i}^{(j)} \\propto e^{\\alpha _{i}^{(j)}}$.",
"A slight variant of this model, the Choice-to-choice model, tries to single out a given answer choice relative to other choices by scoring all choice pairs $\\alpha _{i,i^{\\prime }}^{(j)} = \\textsc {Att}(r^{(j)}_{c_{i}},r^{(j)}_{c_{i^{\\prime }}}) \\in \\mathbb {R}$ using a learned attention mechanism Att and finding the choice with the minimal similarity to other options (for full details, see their original paper). In using these partial-input baselines, which we train directly on each target probe, we can check whether systematic biases related to answer choices were introduced into the data creation process.",
"A Question-to-choice model, in contrast, uses the contextual representations for each question and individual choice and an attention model Att model to get a score $\\alpha ^{(j)}_{q,i} = \\textsc {Att}(r^{(j)}_{q},r^{(j)}_{c_{i}}) \\in \\mathbb {R}$ as above. Here we also experiment with using ESIM BIBREF47 to generate the contextual representations $r$, as well as a simpler VecSimilarity model that measures the average vector similarity between question and answer tokens: $\\alpha ^{(j)}_{q,i} = \\textsc {Sim}(\\textsc {embed}(q^{(j)}),\\textsc {embed}(c^{(j)}_{i}))$. In contrast to the models above, these sets of baselines are used to check for artifacts between questions and answers that are not captured in the partial-input baselines (see discussion in BIBREF49) and ensure that the overall MCQA tasks are sufficiently difficult for our transformer models."
],
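For concreteness, here is a minimal sketch of the Choice-Only partial-input baseline described above: each answer choice is encoded with a BiLSTM, max-pooled over time, and scored with a single linear layer, ignoring the question entirely. Hyperparameters and names are illustrative, not taken from the original implementation.

```python
import torch
import torch.nn as nn

class ChoiceOnlyBaseline(nn.Module):
    """Scores each choice c_i as W^T r_{c_i}, where r is the max-pooled BiLSTM encoding."""
    def __init__(self, vocab_size, emb_dim=300, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)       # could be initialized with GloVe
        self.bilstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.w = nn.Linear(2 * hidden, 1, bias=False)

    def forward(self, choice_ids):
        # choice_ids: [num_choices, choice_len] token ids for one question's answers
        h, _ = self.bilstm(self.embed(choice_ids))           # [N, L, 2h]
        r, _ = h.max(dim=1)                                  # element-wise max over time
        return self.w(r).squeeze(-1)                         # one score per choice

# Training uses softmax cross-entropy over the N=5 scores, exactly as for the full models.
```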
[
"Using the various models introduced above, we train these models on benchmark tasks in the science domain and look at model performance on our probes with and without additional training on samples of probe data, building on the idea of inoculation from BIBREF23. Model inoculation is the idea of continuing to train models on new challenge tasks (in our cases, separately for each probe) using only a small amount of examples. Unlike in ordinary fine-tuning, the goal is not to learn an entirely re-purposed model, but to improve on (or vaccinate against) particular phenomena (e.g., our synthetic probes) that potentially deviate from a model's original training distribution (but that nonetheless might involve knowledge already contained in the model).",
"In the variant proposed in BIBREF22, for each pre-trained (science) model and architecture $M_{a}$ we continue training the model on $k$ new probe examples (with a maximum of $k=$ 3k) under a set of different hyper-parameter configurations $j \\in \\lbrace 1, ..., J\\rbrace $ and identify, for each $k$, the model $M_{*}^{a,k}$ with the best aggregate performance $S$ on the original (orig) and new task:",
"As in BIBREF22, we found all models to be especially sensitive to different learning rates, and performed comprehensive hyper-parameters searches that also manipulate the number of iterations and random seeds used.",
"Using this methodology, we can see how much exposure to new data it takes for a given model to master a new task, and whether there are phenomena that stress particular models (e.g., lead to catastrophic forgetting of the original task). Given the restrictions on the number of fine-tuning examples, our assumption is that when models are able to maintain good performance on their original task during inoculation, the quickness with which they are able to learn the inoculated task provides evidence of prior competence, which is precisely what we aim to probe. To measure past performance, we define a model's inoculation cost as the difference in the performance of this model on its original task before and after inoculation.",
"We pre-train on an aggregated training set of the benchmark science exams detailed in Table TABREF21, and created an aggregate development set of around 4k science questions for evaluating overall science performance and inoculation costs. To handle the mismatch between number of answer choices in these sets, we made all sets 5-way by adding empty answers as needed. We also experimented with a slight variant of inoculation, called add-some inoculation, which involves balancing the inoculation training sets with naturalistic science questions. We reserve the MCQL dataset in Table TABREF21 for this purpose, and experiment with balancing each probe example with a science example (x1 matching) and adding twice as many science questions (x2 matching, up to 3k) for each new example."
],
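The inoculation selection loop can be summarized by the following sketch. The `finetune` and `evaluate` callables are placeholders for the actual training and scoring routines, and the aggregation S of original-task and probe accuracy is simplified here to a plain sum; the specific sample sizes and hyperparameter grid are illustrative.

```python
def inoculate(pretrained_model, probe_train, dev_orig, dev_probe, finetune, evaluate,
              sample_sizes=(100, 500, 1000, 3000),
              configs=({"lr": 1e-5}, {"lr": 2e-5}, {"lr": 3e-5}),
              aggregate=lambda old, new: old + new):
    """For each inoculation size k, fine-tune under every hyperparameter config and
    keep the model with the best aggregate score on the original dev set and the probe."""
    best = {}
    for k in sample_sizes:
        candidates = []
        for cfg in configs:
            model_k = finetune(pretrained_model, probe_train[:k], **cfg)
            score = aggregate(evaluate(model_k, dev_orig), evaluate(model_k, dev_probe))
            candidates.append((score, model_k))
        best[k] = max(candidates, key=lambda t: t[0])[1]
    return best

def inoculation_cost(orig_acc_before, orig_acc_after):
    """Drop in original-task accuracy caused by inoculation."""
    return orig_acc_before - orig_acc_after
```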
[
"The standard way to evaluate our MCQA models is by looking at the overall accuracy of the correct answer prediction, or what we call instance-level accuracy (as in Table TABREF25). Given the nature of our data and the existence of semantic clusters as detailed in Section SECREF11 (i.e., sets of questions and answers under different distractor choices and inference complexity), we also measure a model's cluster-level (or strict cluster) accuracy, which requires correctly answering all questions in a cluster. Example semantic clusters are shown in Table TABREF30; in the first case, there are 6 ISA$^\\uparrow $ questions (including perturbations) about the concept trouser.n.01 (e.g., involving knowing that trousers are a type of consumer good and garment/clothing), which a model must answer in order to receive full credit.",
"Our cluster-based analysis is motivated by the idea that if a model truly knows the meaning of a given concept, such as the concept of trousers, then it should be able to answer arbitrary questions about this concept without sensitivity to varied distractors. While our strict cluster metric is simplistic, it takes inspiration from work on visual QA BIBREF53, and allows us to evaluate how consistent and robust models are across our different probes, and to get insight into whether errors are concentrated on a small set of concepts or widespread across clusters."
],
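The strict cluster metric reduces to a simple grouping of per-question correctness by cluster identifier, as in the sketch below (data layout assumed by us).

```python
from collections import defaultdict

def strict_cluster_accuracy(predictions):
    """`predictions` is an iterable of (cluster_id, is_correct) pairs, one per probe
    question. A cluster counts as solved only if every question in it is correct."""
    clusters = defaultdict(list)
    for cluster_id, is_correct in predictions:
        clusters[cluster_id].append(is_correct)
    solved = sum(all(v) for v in clusters.values())
    return solved / len(clusters) if clusters else 0.0

# e.g. strict_cluster_accuracy([("trouser.n.01", True), ("trouser.n.01", False),
#                               ("oppose.v.06", True)]) == 0.5
```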
[
"In this section, we provide the results of the empirical questions first introduced in Figure FIGREF1, starting with the results of our baseline models."
],
[
"As shown in Table TABREF25, most of our partial-input baselines (i.e., Choice-Only and Choice-to-Choice models) failed to perform well on our dataset probes across a wide range of models, showing that such probes are generally immune from biases relating to how distractors were generated. As already discussed in Section SECREF13, however, initial versions of the DictionaryQA dataset had unforeseen biases partly related to whether distractors were sampled from entries without example sentences, which resulted in high Choice-Only-GloVe scores ranging around 56% accuracy before a filtering step was applied to remove these distractors.",
"We had similar issues with the hypernymy probe which, even after a filtering step that used our Choice-to-Choice-GloVe model, still leads to high results on the BERT and RoBERTa choice-only models. Given that several attempts were made to entirely de-duplicate the different splits (both in terms of gold answers and distractor types), the source of these biases is not at all obvious, which shows how easy it is for unintended biases in expert knowledge to appear in the resulting datasets and the importance of having rigorous baselines. We also note the large gap in some cases between the BERT and RoBERTa versus GloVe choice-only models, which highlights the need for having partial-input baselines that use the best available models.",
"Using a more conventional set of Task-Specific QA models (i.e., the LSTM-based Question-to-Choice models trained directly on the probes), we can see that results are not particularly strong on any of the datasets, suggesting that our probes are indeed sufficiently challenging and largely immune from overt artifacts. The poor performance of the VecSimilarity (which uses pre-trained Word2Vec embeddings without additional training) provides additional evidence that elementary lexical matching strategies are insufficient for solving any of the probing tasks."
],
[
"Science models that use non-transformer based encoders, such as the ESIM model with GloVe and ELMO, perform poorly across all probes, in many cases scoring near random chance, showing limits to how well they generalize from science to other tasks even with pre-trained GloVe and ELMO embeddings. In sharp contrast, the transformer models have mixed results, the most striking result being the RoBERTa models on the definitions and synonymy probes (achieving a test accuracy of 77% and 61%, respectively), which outperform several of the task-specific LSTM models trained directly on the probes. At first glance, this suggests that RoBERTa, which generally far outpaces even BERT across most probes, has high competence of definitions and synonyms even without explicit training on our new tasks.",
"Given the controlled nature of our probes, we can get a more detailed view of how well the science models are performing across different reasoning and distractor types, as shown in the first column of Figure FIGREF28 for ESIM and RoBERTa. The ESIM science model without training has uniformly poor performance across all categories, whereas the performance of RoBERTa is more varied. Across all datasets and number of hops (i.e., the rows in the heat maps), model performance for RoBERTa is consistently highest among examples with random distractors (i.e., the first column), and lowest in cases involving distractors that are closest in WordNet space (e.g., sister and ISA, or up/down, distractors of distance $k^{\\prime }=1$). This is not surprising, given that, in the first case, random distractors are likely to be the easiest category (and the opposite for distractors close in space), but suggests that RoBERTa might only be getting the easiest cases correct.",
"Model performance also clearly degrades for hypernymy and hyponymy across all models as the number of hops $k$ increases (see red dashed boxes). For example, problems that involve hyponym reasoning with sister distractors of distance $k^{\\prime }=1$ (i.e., the second column) degrades from 47% to 15% when the number of hops $k$ increases from 1 to 4. This general tendency persists even after additional fine-tuning, as we discuss next, and gives evidence that models are limited in their capacity for certain types of multi-hop inferences.",
"As discussed by BIBREF26, the choice of generation templates can have a significant effect on model performance. The results so far should therefore be regarded as a lower bound on model competence. It is possible that model performance is high for definitions, for example, because the associated templates best align with the science training distribution (which we know little about). For this reason, the subsequent inoculation step is important—it gives the model an opportunity to learn about our target templates and couple this learned knowledge with its general knowledge acquired during pre-training and science training (which is, again, what we aim to probe)."
],
[
"Model performance after additional fine-tuning, or inoculation, is shown in the last 3 rows of Table TABREF25, along with learning curves shown in Figure FIGREF29 for a selection of probes and models. In the former case, the performance represents the model (and inoculation amount) with the highest aggregate performance over the old task and new probe. Here we again see the transformer-based models outperform non-transformer models, and that better models correlate with lower inoculation costs. For example, when inoculating on synonymy, the cost for ESIM is around 7% reduced accuracy on its original task, as opposed to $< 1$% and around 1% for BERT and RoBERTa, respectively. This shows the high capacity for transformer models to absorb new tasks with minimal costs, as also observed in BIBREF22 for NLI.",
"As shown in Figure FIGREF29, transformer models tend to learn most tasks fairly quickly while keeping constant scores on their original tasks (i.e., the flat dashed lines observed in plots 1-4), which gives evidence of high competence. In both cases, add-some inoculation proves to be a cheap and easy way to 1) improve scores on the probing tasks (i.e., the solid black and blue lines in plot 1) and; 2) minimize loss on science (e.g., the blue and black dashed lines in plots 2-4). The opposite is the case for ESIM (plots 5-6); models are generally unable to simultaneously learn individual probes without degrading on their original task, and adding more science data during inoculation confuses models on both tasks.",
"As shown in Figure FIGREF28, RoBERTa is able to significantly improve performance across most categories even after inoculation with a mere 100 examples (the middle plot), which again provides strong evidence of prior competence. As an example, RoBERTa improves on 2-hop hyponymy inference with random distractors by 18% (from 59% to 77%). After 3k examples, the model has high performance on virtually all categories (the same score increases from 59% to 87%), however results still tends to degrade as a function of hop and distractor complexity, as discussed above.",
"Despite the high performance of our transformer models after inoculation, model performance on most probes (with the exception of Definitions) averages around 80% for our best models. This suggests that there is still considerable room for improvement, especially for synonymy and word sense, which is a topic that we discuss more in Section SECREF6."
],
[
"Table TABREF32 shows cluster-level accuracies for the different WordNetQA probes. As with performance across the different inference/distractor categories, these results are mixed. For some probes, such as definitions, our best models appear to be rather robust; e.g., our RoBERTa model has a cluster accuracy of $75\\%$, meaning that it can answer all questions perfectly for 75% of the target concepts and that errors are concentrated on a small minority (25%) of concepts. On synonymy and hypernymy, both BERT and RoBERTa appear robust on the majority of concepts, showing that errors are similarly concentrated. In contrast, our best model on hyponymy has an accuracy of 36%, meaning that its errors are spread across many concepts, thus suggesting less robustness.",
"Table TABREF30 shows a selection of semantic clusters involving ISA reasoning, as well as the model performance over different answers (shown symbolically) and perturbations. For example, in the the second case, the cluster is based around the concept/synset oppose.v.06 and involves 4 inferences and a total 24 questions (i.e., inferences with perturbations). Our weakest model, ESIM, answers only 5 out of 24 questions correctly, whereas RoBERTa gets 21/24. In the other cases, RoBERTa gets all clusters correct, whereas BERT and ESIM get none of them correct.",
"We emphasize that these results only provide a crude look into model consistency and robustness. Recalling again the details in Table TABREF12, probes differ in terms of average size of clusters. Hyponymy, in virtue of having many more questions per cluster, might simply be a much more difficult dataset. In addition, such a strict evaluation does not take into account potential errors inside of clusters, which is an important issue that we discuss in the next section. We leave addressing such issues and coming up with more insightful cluster-based metrics for future work."
],
[
"We presented several new challenge datasets and a novel methodology for automatically building such datasets from knowledge graphs and taxonomies. We used these to probe state-of-the-art open-domain QA models (centering around models based on variants of BERT). While our general methodology is amendable to any target knowledge resource or QA model/domain, we focus on probing definitions and ISA knowledge using open-source dictionaries and MCQA models trained in the science domain.",
"We find, consistent with recent probing studies BIBREF26, that transformer-based models have a remarkable ability to answer questions that involve complex forms of relational knowledge, both with and without explicit exposure to our new target tasks. In the latter case, a newer RoBERTa model trained only on benchmark science tasks is able to outperform several task-specific LSTM-based models trained directly on our probing data. When re-trained on small samples (e.g., 100 examples) of probing data using variations of the lossless inoculation strategy from BIBREF22, RoBERTa is able to master many aspects of our probes with virtually no performance loss on its original QA task.",
"These positive results suggest that transformer-based models, especially models additionally fine-tuned on small samples of synthetic data, can be used in place of task-specific models used for querying relational knowledge, as has already been done for targeted tasks such as word sense disambiguation BIBREF54. Since models seem to already contain considerable amounts of relational knowledge, our simple inoculation strategy, which tries to nudge models to bring out this knowledge explicitly, could serve as a cheaper alternative to recent attempts to build architectures that explicitly incorporate structured knowledge BIBREF55; we see many areas where our inoculation strategy could be improved for such purposes, including having more complex loss functions that manage old and new information, as well as using techniques that take into account network plasticity BIBREF56.",
"The main appeal of using automatically generate datasets is the ability to systematically manipulate and control the complexity of target questions, which allows for more controlled experimentation and new forms of evaluation. Despite the positive results described above, results that look directly at the effect of different types of distractors and the complexity of reasoning show that our best models, even after additional fine-tuning, struggle with certain categories of hard distractors and multi-hop inferences. For some probes, our cluster-based analysis also reveals that errors are widespread across concept clusters, suggesting that models are not always consistent and robust. These results, taken together with our findings about the vulnerability of synthetic datasets to systematic biases, suggest that there is much room for improvement and that the positive results should be taken with a grain of salt. Developing better ways to evaluate semantic clusters and model robustness would be a step in this direction.",
"We emphasize that using synthetic versus naturalistic QA data comes with important trade-offs. While we are able to generate large amounts of systematically controlled data at virtually no cost or need for manual annotation, it is much harder to validate the quality of such data at such a scale and such varying levels of complexity. Conversely, with benchmark QA datasets, it is much harder to perform the type of careful manipulations and cluster-based analyses we report here. While we assume that the expert knowledge we employ, in virtue of being hand-curated by human experts, is generally correct, we know that such resources are fallible and error-prone. Initial crowd-sourcing experiments that look at validating samples of our data show high agreement across probes and that human scores correlate with the model trends across the probe categories. More details of these studies are left for future work."
]
],
"section_name": [
"Introduction",
"Related Work",
"Dataset Probes and Construction",
"Dataset Probes and Construction ::: WordNetQA",
"Dataset Probes and Construction ::: WordNetQA ::: Example Generation @!START@$\\textsc {gen}(\\tau )$@!END@.",
"Dataset Probes and Construction ::: WordNetQA ::: Distractor Generation: @!START@$\\textsc {distr}(\\tau ^{\\prime })$@!END@.",
"Dataset Probes and Construction ::: WordNetQA ::: Perturbations and Semantic Clusters",
"Dataset Probes and Construction ::: DictionaryQA",
"Dataset Probes and Construction ::: DictionaryQA ::: Example and Distractor Generation.",
"Probing Methodology and Modeling",
"Probing Methodology and Modeling ::: Task Definition and Modeling",
"Probing Methodology and Modeling ::: Task Definition and Modeling ::: Question+Answer Encoder.",
"Probing Methodology and Modeling ::: Task Definition and Modeling ::: Baselines and Sanity Checks.",
"Probing Methodology and Modeling ::: Inoculation and Pre-training",
"Probing Methodology and Modeling ::: Evaluating Model Competence",
"Results and Findings",
"Results and Findings ::: Are our Probes Sufficiently Challenging?",
"Results and Findings ::: How well do pre-trained MCQA models do?",
"Results and Findings ::: Can Models Be Effectively Inoculated?",
"Results and Findings ::: Are Models Consistent across Clusters?",
"Discussion and Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"cc9f15c1050efaa00fd6339cce6f9a59f9a23aa9",
"9dd75e424c583f2f746794c43d20b9a8b8f44a2a"
],
"answer": [
{
"evidence": [
"Dataset Probes and Construction",
"Our probing methodology starts by constructing challenge datasets (Figure FIGREF1, yellow box) from a target set of knowledge resources. Each of our probing datasets consists of multiple-choice questions that include a question $\\textbf {q}$ and a set of answer choices or candidates $\\lbrace a_{1},...a_{N}\\rbrace $. This section describes in detail the 5 different datasets we build, which are drawn from two sources of expert knowledge, namely WordNet BIBREF35 and the GNU Collaborative International Dictionary of English (GCIDE). We describe each resource in turn, and explain how the resulting dataset probes, which we call WordNetQA and DictionaryQA, are constructed.",
"For convenience, we will describe each source of expert knowledge as a directed, edge-labeled graph $G$. The nodes of this graph are $\\mathcal {V} = \\mathcal {C} \\cup \\mathcal {W} \\cup \\mathcal {S} \\cup \\mathcal {D}$, where $\\mathcal {C}$ is a set of atomic concepts, $\\mathcal {W}$ a set of words, $\\mathcal {S}$ a set of sentences, and $\\mathcal {D}$ a set of definitions (see Table TABREF4 for details for WordNet and GCIDE). Each edge of $G$ is directed from an atomic concept in $\\mathcal {C}$ to another node in $V$, and is labeled with a relation, such as hypernym or isa$^\\uparrow $, from a set of relations $\\mathcal {R}$ (see Table TABREF4).",
"When defining our probe question templates, it will be useful to view $G$ as a set of (relation, source, target) triples $\\mathcal {T} \\subseteq \\mathcal {R} \\times \\mathcal {C} \\times \\mathcal {V}$. Due to their origin in an expert knowledge source, such triples preserve semantic consistency. For instance, when the relation in a triple is def, the corresponding edge maps a concept in $\\mathcal {C}$ to a definition in $\\mathcal {D}$.",
"To construct probe datasets, we rely on two heuristic functions, defined below for each individual probe: $\\textsc {gen}_{\\mathcal {Q}}(\\tau )$, which generates gold question-answer pairs $(\\textbf {q},\\textbf {a})$ from a set of triples $\\tau \\subseteq \\mathcal {T}$ and question templates $\\mathcal {Q}$, and $\\textsc {distr}(\\tau ^{\\prime })$, which generates distractor answers choices $\\lbrace a^{\\prime }_{1},...a^{\\prime }_{N-1} \\rbrace $ based on another set of triples $\\tau ^{\\prime }$ (where usually $\\tau \\subset \\tau ^{\\prime }$). For brevity, we will use $\\textsc {gen}(\\tau )$ to denote $\\textsc {gen}_{\\mathcal {Q}}(\\tau )$, leaving question templates $\\mathcal {Q}$ implicit.",
"Dataset Probes and Construction ::: WordNetQA",
"WordNet is an English lexical database consisting of around 117k concepts, which are organized into groups of synsets that each contain a gloss (i.e., a definition of the target concept), a set of representative English words (called lemmas), and, in around 33k synsets, example sentences. In addition, many synsets have ISA links to other synsets that express complex taxonomic relations. Figure FIGREF6 shows an example and Table TABREF4 summarizes how we formulate WordNet as a set of triples $\\mathcal {T}$ of various types. These triples together represent a directed, edge-labeled graph $G$. Our main motivation for using WordNet, as opposed to a resource such as ConceptNet BIBREF36, is the availability of glosses ($\\mathcal {D}$) and example sentences ($\\mathcal {S}$), which allows us to construct natural language questions that contextualize the types of concepts we want to probe.",
"Dataset Probes and Construction ::: WordNetQA ::: Example Generation @!START@$\\textsc {gen}(\\tau )$@!END@.",
"We build 4 individual datasets based on semantic relations native to WordNet (see BIBREF37): hypernymy (i.e., generalization or ISA reasoning up a taxonomy, ISA$^\\uparrow $), hyponymy (ISA$^{\\downarrow }$), synonymy, and definitions. To generate a set of questions in each case, we employ a number of rule templates $\\mathcal {Q}$ that operate over tuples. A subset of such templates is shown in Table TABREF8. The templates were designed to mimic naturalistic questions we observed in our science benchmarks.",
"For example, suppose we wish to create a question $\\textbf {q}$ about the definition of a target concept $c \\in \\mathcal {C}$. We first select a question template from $\\mathcal {Q}$ that first introduces the concept $c$ and its lemma $l \\in \\mathcal {W}$ in context using the example sentence $s \\in \\mathcal {S}$, and then asks to identify the corresponding WordNet gloss $d \\in \\mathcal {D}$, which serves as the gold answer $\\textbf {a}$. The same is done for ISA reasoning; each question about a hypernym/hyponym relation between two concepts $c \\rightarrow ^{\\uparrow /\\downarrow } c^{\\prime } \\in \\mathcal {T}_{i}$ (e.g., $\\texttt {dog} \\rightarrow ^{\\uparrow /\\downarrow } \\texttt {animal/terrier}$) first introduces a context for $c$ and then asks for an answer that identifies $c^{\\prime }$ (which is also provided with a gloss so as to contain all available context).",
"In the latter case, the rules $(\\texttt {isa}^{r},c,c^{\\prime }) \\in \\mathcal {T}_i$ in Table TABREF8 cover only direct ISA links from $c$ in direction $r \\in \\lbrace \\uparrow ,\\downarrow \\rbrace $. In practice, for each $c$ and direction $r$, we construct tests that cover the set HOPS$(c,r)$ of all direct as well as derived ISA relations of $c$:",
"This allows us to evaluate the extent to which models are able to handle complex forms of reasoning that require several inferential steps or hops.",
"Dataset Probes and Construction ::: WordNetQA ::: Distractor Generation: @!START@$\\textsc {distr}(\\tau ^{\\prime })$@!END@.",
"An example of how distractors are generated is shown in Figure FIGREF6, which relies on similar principles as above. For each concept $c$, we choose 4 distractor answers that are close in the WordNet semantic space. For example, when constructing hypernymy tests for $c$ from the set hops$(c,\\uparrow )$, we build distractors by drawing from $\\textsc {hops}(c,\\downarrow )$ (and vice versa), as well as from the $\\ell $-deep sister family of $c$, defined as follows. The 1-deep sister family is simply $c$'s siblings or sisters, i.e., the other children $\\tilde{c} \\ne c$ of the parent node $c^{\\prime }$ of $c$. For $\\ell > 1$, the $\\ell $-deep sister family also includes all descendants of each $\\tilde{c}$ up to $\\ell -1$ levels deep, denoted $\\textsc {hops}_{\\ell -1}(\\tilde{c},\\downarrow )$. Formally:",
"For definitions and synonyms we build distractors from all of these sets (with a similar restriction on the depth of sister distractors as noted above). In doing this, we can systematically investigate model performance on a wide range of distractor sets.",
"Dataset Probes and Construction ::: WordNetQA ::: Perturbations and Semantic Clusters",
"Based on how we generate data, for each concept $c$ (i.e., atomic WordNet synset) and probe type (i.e., definitions, hypernymy, etc.), we have a wide variety of questions related to $c$ that manipulate 1) the complexity of reasoning that is involved (e.g., the number of inferential hops) and; 2) the types of distractors (or distractor perturbations) that are employed. We call such sets semantic clusters. As we describe in the next section, semantic clusters allow us to devise new types of evaluation that reveal whether models have comprehensive and consistent knowledge of target concepts (e.g., evaluating whether a model can correctly answer several questions associated with a concept, as opposed to a few disjoint instances).",
"Details of the individual datasets are shown in Table TABREF12. From these sets, we follow BIBREF22 in allocating a maximum of 3k examples for training and reserve the rest for development and testing. Since we are interested in probing, having large held-out sets allows us to do detailed analysis and cluster-based evaluation.",
"Dataset Probes and Construction ::: DictionaryQA",
"The DictionaryQA dataset is created from the GCIDE dictionary, which is a comprehensive open-source English dictionary built largely from the Webster's Revised Unabridged Dictionary BIBREF38. Each entry consists of a word, its part-of-speech, its definition, and an optional example sentence (see Table TABREF14). Overall, 33k entries (out of a total of 155k) contain example sentences/usages. As with the WordNet probes, we focus on this subset so as to contextualize each word being probed. In contrast to WordNet, GCIDE does not have ISA relations or explicit synsets, so we take each unique entry to be a distinct sense. We then use the dictionary entries to create a probe that centers around word-sense disambiguation, as described below.",
"Dataset Probes and Construction ::: DictionaryQA ::: Example and Distractor Generation.",
"To generate gold questions and answers, we use the same generation templates for definitions exemplified in Figure TABREF8 for WordNetQA. To generate distractors, we simply take alternative definitions for the target words that represent a different word sense (e.g., the alternative definitions of gift shown in Table TABREF14), as well as randomly chosen definitions if needed to create a 5-way multiple choice question. As above, we reserve a maximum of 3k examples for training. Since we have only 9k examples in total in this dataset (see WordSense in Table TABREF12), we also reserve 3k each for development and testing.",
"We note that initial attempts to build this dataset through standard random splitting gave rise to certain systematic biases that were exploited by the choice-only baseline models described in the next section, and hence inflated overall model scores. After several efforts at filtering we found that, among other factors, using definitions from entries without example sentences as distractors (e.g., the first two entries in Table TABREF14) had a surprising correlation with such biases. This suggests that possible biases involving differences between dictionary entries with and without examples can taint the resulting automatically generated MCQA dataset (for more discussion on the pitfalls involved with automatic dataset construction, see Section SECREF5)."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Dataset Probes and Construction\nOur probing methodology starts by constructing challenge datasets (Figure FIGREF1, yellow box) from a target set of knowledge resources. Each of our probing datasets consists of multiple-choice questions that include a question $\\textbf {q}$ and a set of answer choices or candidates $\\lbrace a_{1},...a_{N}\\rbrace $. This section describes in detail the 5 different datasets we build, which are drawn from two sources of expert knowledge, namely WordNet BIBREF35 and the GNU Collaborative International Dictionary of English (GCIDE). We describe each resource in turn, and explain how the resulting dataset probes, which we call WordNetQA and DictionaryQA, are constructed.\n\nFor convenience, we will describe each source of expert knowledge as a directed, edge-labeled graph $G$. The nodes of this graph are $\\mathcal {V} = \\mathcal {C} \\cup \\mathcal {W} \\cup \\mathcal {S} \\cup \\mathcal {D}$, where $\\mathcal {C}$ is a set of atomic concepts, $\\mathcal {W}$ a set of words, $\\mathcal {S}$ a set of sentences, and $\\mathcal {D}$ a set of definitions (see Table TABREF4 for details for WordNet and GCIDE). Each edge of $G$ is directed from an atomic concept in $\\mathcal {C}$ to another node in $V$, and is labeled with a relation, such as hypernym or isa$^\\uparrow $, from a set of relations $\\mathcal {R}$ (see Table TABREF4).\n\nWhen defining our probe question templates, it will be useful to view $G$ as a set of (relation, source, target) triples $\\mathcal {T} \\subseteq \\mathcal {R} \\times \\mathcal {C} \\times \\mathcal {V}$. Due to their origin in an expert knowledge source, such triples preserve semantic consistency. For instance, when the relation in a triple is def, the corresponding edge maps a concept in $\\mathcal {C}$ to a definition in $\\mathcal {D}$.\n\nTo construct probe datasets, we rely on two heuristic functions, defined below for each individual probe: $\\textsc {gen}_{\\mathcal {Q}}(\\tau )$, which generates gold question-answer pairs $(\\textbf {q},\\textbf {a})$ from a set of triples $\\tau \\subseteq \\mathcal {T}$ and question templates $\\mathcal {Q}$, and $\\textsc {distr}(\\tau ^{\\prime })$, which generates distractor answers choices $\\lbrace a^{\\prime }_{1},...a^{\\prime }_{N-1} \\rbrace $ based on another set of triples $\\tau ^{\\prime }$ (where usually $\\tau \\subset \\tau ^{\\prime }$). For brevity, we will use $\\textsc {gen}(\\tau )$ to denote $\\textsc {gen}_{\\mathcal {Q}}(\\tau )$, leaving question templates $\\mathcal {Q}$ implicit.\n\nDataset Probes and Construction ::: WordNetQA\nWordNet is an English lexical database consisting of around 117k concepts, which are organized into groups of synsets that each contain a gloss (i.e., a definition of the target concept), a set of representative English words (called lemmas), and, in around 33k synsets, example sentences. In addition, many synsets have ISA links to other synsets that express complex taxonomic relations. Figure FIGREF6 shows an example and Table TABREF4 summarizes how we formulate WordNet as a set of triples $\\mathcal {T}$ of various types. These triples together represent a directed, edge-labeled graph $G$. 
Our main motivation for using WordNet, as opposed to a resource such as ConceptNet BIBREF36, is the availability of glosses ($\\mathcal {D}$) and example sentences ($\\mathcal {S}$), which allows us to construct natural language questions that contextualize the types of concepts we want to probe.\n\nDataset Probes and Construction ::: WordNetQA ::: Example Generation @!START@$\\textsc {gen}(\\tau )$@!END@.\nWe build 4 individual datasets based on semantic relations native to WordNet (see BIBREF37): hypernymy (i.e., generalization or ISA reasoning up a taxonomy, ISA$^\\uparrow $), hyponymy (ISA$^{\\downarrow }$), synonymy, and definitions. To generate a set of questions in each case, we employ a number of rule templates $\\mathcal {Q}$ that operate over tuples. A subset of such templates is shown in Table TABREF8. The templates were designed to mimic naturalistic questions we observed in our science benchmarks.\n\nFor example, suppose we wish to create a question $\\textbf {q}$ about the definition of a target concept $c \\in \\mathcal {C}$. We first select a question template from $\\mathcal {Q}$ that first introduces the concept $c$ and its lemma $l \\in \\mathcal {W}$ in context using the example sentence $s \\in \\mathcal {S}$, and then asks to identify the corresponding WordNet gloss $d \\in \\mathcal {D}$, which serves as the gold answer $\\textbf {a}$. The same is done for ISA reasoning; each question about a hypernym/hyponym relation between two concepts $c \\rightarrow ^{\\uparrow /\\downarrow } c^{\\prime } \\in \\mathcal {T}_{i}$ (e.g., $\\texttt {dog} \\rightarrow ^{\\uparrow /\\downarrow } \\texttt {animal/terrier}$) first introduces a context for $c$ and then asks for an answer that identifies $c^{\\prime }$ (which is also provided with a gloss so as to contain all available context).\n\nIn the latter case, the rules $(\\texttt {isa}^{r},c,c^{\\prime }) \\in \\mathcal {T}_i$ in Table TABREF8 cover only direct ISA links from $c$ in direction $r \\in \\lbrace \\uparrow ,\\downarrow \\rbrace $. In practice, for each $c$ and direction $r$, we construct tests that cover the set HOPS$(c,r)$ of all direct as well as derived ISA relations of $c$:\n\nThis allows us to evaluate the extent to which models are able to handle complex forms of reasoning that require several inferential steps or hops.\n\nDataset Probes and Construction ::: WordNetQA ::: Distractor Generation: @!START@$\\textsc {distr}(\\tau ^{\\prime })$@!END@.\nAn example of how distractors are generated is shown in Figure FIGREF6, which relies on similar principles as above. For each concept $c$, we choose 4 distractor answers that are close in the WordNet semantic space. For example, when constructing hypernymy tests for $c$ from the set hops$(c,\\uparrow )$, we build distractors by drawing from $\\textsc {hops}(c,\\downarrow )$ (and vice versa), as well as from the $\\ell $-deep sister family of $c$, defined as follows. The 1-deep sister family is simply $c$'s siblings or sisters, i.e., the other children $\\tilde{c} \\ne c$ of the parent node $c^{\\prime }$ of $c$. For $\\ell > 1$, the $\\ell $-deep sister family also includes all descendants of each $\\tilde{c}$ up to $\\ell -1$ levels deep, denoted $\\textsc {hops}_{\\ell -1}(\\tilde{c},\\downarrow )$. Formally:\n\nFor definitions and synonyms we build distractors from all of these sets (with a similar restriction on the depth of sister distractors as noted above). 
In doing this, we can systematically investigate model performance on a wide range of distractor sets.\n\nDataset Probes and Construction ::: WordNetQA ::: Perturbations and Semantic Clusters\nBased on how we generate data, for each concept $c$ (i.e., atomic WordNet synset) and probe type (i.e., definitions, hypernymy, etc.), we have a wide variety of questions related to $c$ that manipulate 1) the complexity of reasoning that is involved (e.g., the number of inferential hops) and; 2) the types of distractors (or distractor perturbations) that are employed. We call such sets semantic clusters. As we describe in the next section, semantic clusters allow us to devise new types of evaluation that reveal whether models have comprehensive and consistent knowledge of target concepts (e.g., evaluating whether a model can correctly answer several questions associated with a concept, as opposed to a few disjoint instances).\n\nDetails of the individual datasets are shown in Table TABREF12. From these sets, we follow BIBREF22 in allocating a maximum of 3k examples for training and reserve the rest for development and testing. Since we are interested in probing, having large held-out sets allows us to do detailed analysis and cluster-based evaluation.\n\nDataset Probes and Construction ::: DictionaryQA\nThe DictionaryQA dataset is created from the GCIDE dictionary, which is a comprehensive open-source English dictionary built largely from the Webster's Revised Unabridged Dictionary BIBREF38. Each entry consists of a word, its part-of-speech, its definition, and an optional example sentence (see Table TABREF14). Overall, 33k entries (out of a total of 155k) contain example sentences/usages. As with the WordNet probes, we focus on this subset so as to contextualize each word being probed. In contrast to WordNet, GCIDE does not have ISA relations or explicit synsets, so we take each unique entry to be a distinct sense. We then use the dictionary entries to create a probe that centers around word-sense disambiguation, as described below.\n\nDataset Probes and Construction ::: DictionaryQA ::: Example and Distractor Generation.\nTo generate gold questions and answers, we use the same generation templates for definitions exemplified in Figure TABREF8 for WordNetQA. To generate distractors, we simply take alternative definitions for the target words that represent a different word sense (e.g., the alternative definitions of gift shown in Table TABREF14), as well as randomly chosen definitions if needed to create a 5-way multiple choice question. As above, we reserve a maximum of 3k examples for training. Since we have only 9k examples in total in this dataset (see WordSense in Table TABREF12), we also reserve 3k each for development and testing.\n\nWe note that initial attempts to build this dataset through standard random splitting gave rise to certain systematic biases that were exploited by the choice-only baseline models described in the next section, and hence inflated overall model scores. After several efforts at filtering we found that, among other factors, using definitions from entries without example sentences as distractors (e.g., the first two entries in Table TABREF14) had a surprising correlation with such biases. This suggests that possible biases involving differences between dictionary entries with and without examples can taint the resulting automatically generated MCQA dataset (for more discussion on the pitfalls involved with automatic dataset construction, see Section SECREF5)."
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"We emphasize that using synthetic versus naturalistic QA data comes with important trade-offs. While we are able to generate large amounts of systematically controlled data at virtually no cost or need for manual annotation, it is much harder to validate the quality of such data at such a scale and such varying levels of complexity. Conversely, with benchmark QA datasets, it is much harder to perform the type of careful manipulations and cluster-based analyses we report here. While we assume that the expert knowledge we employ, in virtue of being hand-curated by human experts, is generally correct, we know that such resources are fallible and error-prone. Initial crowd-sourcing experiments that look at validating samples of our data show high agreement across probes and that human scores correlate with the model trends across the probe categories. More details of these studies are left for future work."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We emphasize that using synthetic versus naturalistic QA data comes with important trade-offs. While we are able to generate large amounts of systematically controlled data at virtually no cost or need for manual annotation, it is much harder to validate the quality of such data at such a scale and such varying levels of complexity.",
"While we assume that the expert knowledge we employ, in virtue of being hand-curated by human experts, is generally correct, we know that such resources are fallible and error-prone. Initial crowd-sourcing experiments that look at validating samples of our data show high agreement across probes and that human scores correlate with the model trends across the probe categories. More details of these studies are left for future work."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"9cf96ca8b584b5de948019dc75e305c9e7707b92",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"677abb25656df57ddd31fc63532d69a5b564465b",
"d5b111a5b380455dd399963fe23fb3966a3dc382"
],
"answer": [
{
"evidence": [
"Automatically answering questions, especially in the open-domain setting (i.e., where minimal or no contextual knowledge is explicitly provided), requires bringing to bear considerable amount of background knowledge and reasoning abilities. For example, knowing the answers to the two questions in Figure FIGREF1 requires identifying a specific ISA relation (i.e., that cooking is a type of learned behavior) as well as recalling the definition of a concept (i.e., that global warming is defined as a worldwide increase in temperature). In the multiple-choice setting, which is the variety of question-answering (QA) that we focus on in this paper, there is also pragmatic reasoning involved in selecting optimal answer choices (e.g., while greenhouse effect might in some other context be a reasonable answer to the second question in Figure FIGREF1, global warming is a preferable candidate)."
],
"extractive_spans": [],
"free_form_answer": "MULTIPLE CHOICE QUESTION ANSWERING",
"highlighted_evidence": [
"In the multiple-choice setting, which is the variety of question-answering (QA) that we focus on in this paper, there is also pragmatic reasoning involved in selecting optimal answer choices (e.g., while greenhouse effect might in some other context be a reasonable answer to the second question in Figure FIGREF1, global warming is a preferable candidate)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Our probing methodology starts by constructing challenge datasets (Figure FIGREF1, yellow box) from a target set of knowledge resources. Each of our probing datasets consists of multiple-choice questions that include a question $\\textbf {q}$ and a set of answer choices or candidates $\\lbrace a_{1},...a_{N}\\rbrace $. This section describes in detail the 5 different datasets we build, which are drawn from two sources of expert knowledge, namely WordNet BIBREF35 and the GNU Collaborative International Dictionary of English (GCIDE). We describe each resource in turn, and explain how the resulting dataset probes, which we call WordNetQA and DictionaryQA, are constructed."
],
"extractive_spans": [
"multiple-choice"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our probing methodology starts by constructing challenge datasets (Figure FIGREF1, yellow box) from a target set of knowledge resources. Each of our probing datasets consists of multiple-choice questions that include a question $\\textbf {q}$ and a set of answer choices or candidates $\\lbrace a_{1},...a_{N}\\rbrace $."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"9cf96ca8b584b5de948019dc75e305c9e7707b92",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"5d13c52c037b7c2a3f9358aec08acb0b4d0a35d8",
"b2412e3bd76309ae1225b0a90c78d714419cf50a"
],
"answer": [
{
"evidence": [
"Our comprehensive assessment reveals several interesting nuances to the overall positive trend. For example, the performance of even the best QA models degrades substantially on our hyponym probes (by 8-15%) when going from 1-hop links to 2-hops. Further, the accuracy of even our best models on the WordNetQA probe drops by 14-44% under our cluster-based analysis, which assesses whether a model knows several facts about each individual concept, rather than just being good at answering isolated questions. State-of-the-art QA models thus have much room to improve even in some fundamental building blocks, namely definitions and taxonomic hierarchies, of more complex forms of reasoning."
],
"extractive_spans": [
"1-hop links to 2-hops"
],
"free_form_answer": "",
"highlighted_evidence": [
"For example, the performance of even the best QA models degrades substantially on our hyponym probes (by 8-15%) when going from 1-hop links to 2-hops. Further, the accuracy of even our best models on the WordNetQA probe drops by 14-44% under our cluster-based analysis, which assesses whether a model knows several facts about each individual concept, rather than just being good at answering isolated questions. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Our comprehensive assessment reveals several interesting nuances to the overall positive trend. For example, the performance of even the best QA models degrades substantially on our hyponym probes (by 8-15%) when going from 1-hop links to 2-hops. Further, the accuracy of even our best models on the WordNetQA probe drops by 14-44% under our cluster-based analysis, which assesses whether a model knows several facts about each individual concept, rather than just being good at answering isolated questions. State-of-the-art QA models thus have much room to improve even in some fundamental building blocks, namely definitions and taxonomic hierarchies, of more complex forms of reasoning."
],
"extractive_spans": [],
"free_form_answer": "one additional hop",
"highlighted_evidence": [
"For example, the performance of even the best QA models degrades substantially on our hyponym probes (by 8-15%) when going from 1-hop links to 2-hops. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"9cf96ca8b584b5de948019dc75e305c9e7707b92"
]
},
{
"annotation_id": [
"362cde4dd93b09e11c0c00c08825262aac6524ea",
"609c11ccbffb93f454f73be2a1d70306fcc618f1"
],
"answer": [
{
"evidence": [
"Probing Methodology and Modeling ::: Task Definition and Modeling ::: Baselines and Sanity Checks.",
"When creating synthetic datasets, it is important to ensure that systematic biases, or annotation artifacts BIBREF41, are not introduced into the resulting probes and that the target datasets are sufficiently challenging (or good, in the sense of BIBREF42). To test for this, we use several of the MCQA baseline models first introduced in BIBREF0, which take inspiration from the LSTM-based models used in BIBREF43 for NLI and various partial-input baselines based on these models."
],
"extractive_spans": [
" we use several of the MCQA baseline models first introduced in BIBREF0"
],
"free_form_answer": "",
"highlighted_evidence": [
"Probing Methodology and Modeling ::: Task Definition and Modeling ::: Baselines and Sanity Checks.\nWhen creating synthetic datasets, it is important to ensure that systematic biases, or annotation artifacts BIBREF41, are not introduced into the resulting probes and that the target datasets are sufficiently challenging (or good, in the sense of BIBREF42). To test for this, we use several of the MCQA baseline models first introduced in BIBREF0, which take inspiration from the LSTM-based models used in BIBREF43 for NLI and various partial-input baselines based on these models."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"When creating synthetic datasets, it is important to ensure that systematic biases, or annotation artifacts BIBREF41, are not introduced into the resulting probes and that the target datasets are sufficiently challenging (or good, in the sense of BIBREF42). To test for this, we use several of the MCQA baseline models first introduced in BIBREF0, which take inspiration from the LSTM-based models used in BIBREF43 for NLI and various partial-input baselines based on these models.",
"Following the notation from BIBREF0, for any given sequence $s$ of tokens in $\\lbrace q^{(j)}, a_{1}^{(j)},...,a_{N}^{(j)}\\rbrace $ in $D$, an encoding of $s$ is given as $h_{s}^{(j)} = \\textbf {BiLSTM}(\\textsc {embed}(s)) \\in \\mathbb {R}^{|s| \\times 2h}$ (where $h$ is the dimension of the hidden state in each directional network, and embed$(\\cdot )$ is an embedding function that assigns token-level embeddings to each token in $s$). A contextual representation for each $s$ is then built by applying an element-wise max operation over $h_{s}$ as follows:",
"With these contextual representations, different baseline models can be constructed. For example, a Choice-Only model, which is a variant of the well-known hypothesis-only baseline used in NLI BIBREF46, scores each choice $c_{i}$ in the following way:",
"for $\\textbf {W}^{T} \\in \\mathbb {R}^{2h}$ independently of the question and assigns a probability to each answer $p_{i}^{(j)} \\propto e^{\\alpha _{i}^{(j)}}$.",
"A slight variant of this model, the Choice-to-choice model, tries to single out a given answer choice relative to other choices by scoring all choice pairs $\\alpha _{i,i^{\\prime }}^{(j)} = \\textsc {Att}(r^{(j)}_{c_{i}},r^{(j)}_{c_{i^{\\prime }}}) \\in \\mathbb {R}$ using a learned attention mechanism Att and finding the choice with the minimal similarity to other options (for full details, see their original paper). In using these partial-input baselines, which we train directly on each target probe, we can check whether systematic biases related to answer choices were introduced into the data creation process.",
"A Question-to-choice model, in contrast, uses the contextual representations for each question and individual choice and an attention model Att model to get a score $\\alpha ^{(j)}_{q,i} = \\textsc {Att}(r^{(j)}_{q},r^{(j)}_{c_{i}}) \\in \\mathbb {R}$ as above. Here we also experiment with using ESIM BIBREF47 to generate the contextual representations $r$, as well as a simpler VecSimilarity model that measures the average vector similarity between question and answer tokens: $\\alpha ^{(j)}_{q,i} = \\textsc {Sim}(\\textsc {embed}(q^{(j)}),\\textsc {embed}(c^{(j)}_{i}))$. In contrast to the models above, these sets of baselines are used to check for artifacts between questions and answers that are not captured in the partial-input baselines (see discussion in BIBREF49) and ensure that the overall MCQA tasks are sufficiently difficult for our transformer models."
],
"extractive_spans": [
"Choice-Only model, which is a variant of the well-known hypothesis-only baseline",
"Choice-to-choice model, tries to single out a given answer choice relative to other choices",
"Question-to-choice model, in contrast, uses the contextual representations for each question and individual choice and an attention model Att model to get a score"
],
"free_form_answer": "",
"highlighted_evidence": [
"When creating synthetic datasets, it is important to ensure that systematic biases, or annotation artifacts BIBREF41, are not introduced into the resulting probes and that the target datasets are sufficiently challenging (or good, in the sense of BIBREF42). To test for this, we use several of the MCQA baseline models first introduced in BIBREF0, which take inspiration from the LSTM-based models used in BIBREF43 for NLI and various partial-input baselines based on these models.\n\nFollowing the notation from BIBREF0, for any given sequence $s$ of tokens in $\\lbrace q^{(j)}, a_{1}^{(j)},...,a_{N}^{(j)}\\rbrace $ in $D$, an encoding of $s$ is given as $h_{s}^{(j)} = \\textbf {BiLSTM}(\\textsc {embed}(s)) \\in \\mathbb {R}^{|s| \\times 2h}$ (where $h$ is the dimension of the hidden state in each directional network, and embed$(\\cdot )$ is an embedding function that assigns token-level embeddings to each token in $s$). A contextual representation for each $s$ is then built by applying an element-wise max operation over $h_{s}$ as follows:\n\nWith these contextual representations, different baseline models can be constructed. For example, a Choice-Only model, which is a variant of the well-known hypothesis-only baseline used in NLI BIBREF46, scores each choice $c_{i}$ in the following way:\n\nfor $\\textbf {W}^{T} \\in \\mathbb {R}^{2h}$ independently of the question and assigns a probability to each answer $p_{i}^{(j)} \\propto e^{\\alpha _{i}^{(j)}}$.\n\nA slight variant of this model, the Choice-to-choice model, tries to single out a given answer choice relative to other choices by scoring all choice pairs $\\alpha _{i,i^{\\prime }}^{(j)} = \\textsc {Att}(r^{(j)}_{c_{i}},r^{(j)}_{c_{i^{\\prime }}}) \\in \\mathbb {R}$ using a learned attention mechanism Att and finding the choice with the minimal similarity to other options (for full details, see their original paper). In using these partial-input baselines, which we train directly on each target probe, we can check whether systematic biases related to answer choices were introduced into the data creation process.\n\nA Question-to-choice model, in contrast, uses the contextual representations for each question and individual choice and an attention model Att model to get a score $\\alpha ^{(j)}_{q,i} = \\textsc {Att}(r^{(j)}_{q},r^{(j)}_{c_{i}}) \\in \\mathbb {R}$ as above. Here we also experiment with using ESIM BIBREF47 to generate the contextual representations $r$, as well as a simpler VecSimilarity model that measures the average vector similarity between question and answer tokens: $\\alpha ^{(j)}_{q,i} = \\textsc {Sim}(\\textsc {embed}(q^{(j)}),\\textsc {embed}(c^{(j)}_{i}))$. In contrast to the models above, these sets of baselines are used to check for artifacts between questions and answers that are not captured in the partial-input baselines (see discussion in BIBREF49) and ensure that the overall MCQA tasks are sufficiently difficult for our transformer models."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"9cf96ca8b584b5de948019dc75e305c9e7707b92",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"b251506d26526696784acefed2e6aa17e7720be2",
"c9fcd4b9fb9a3782aa934f707d2b235103fec9a8"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"While our methodology is amenable to any knowledge source and set of models/benchmark tasks, we focus on probing state-of-the-art transformer models BIBREF7, BIBREF9 in the domain of science MCQA. For sources of expert knowledge, we use WordNet, a comprehensive lexical ontology, and other publicly available dictionary resources. We devise probes that measure model competence in definition and taxonomic knowledge in different settings (including hypernymy, hyponymy, and synonymy detection, and word sense disambiguation). This choice is motivated by fact that the science domain is considered particularly challenging for QA BIBREF10, BIBREF11, BIBREF12, and existing science benchmarks are known to involve widespread use of such knowledge (see BIBREF1, BIBREF13 for analysis), which is also arguably fundamental to more complex forms of reasoning."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"For sources of expert knowledge, we use WordNet, a comprehensive lexical ontology, and other publicly available dictionary resources."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"9cf96ca8b584b5de948019dc75e305c9e7707b92"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"Are the automatically constructed datasets subject to quality control?",
"Do they focus on Reading Comprehension or multiple choice question answering?",
"After how many hops does accuracy decrease?",
"How do they control for annotation artificats?",
"Is WordNet useful for taxonomic reasoning for this task?"
],
"question_id": [
"e97186c51d4af490dba6faaf833d269c8256426c",
"5bb3c27606c59d73fd6944ba7382096de4fa58d8",
"8de9f14c7c4f37ab103bc8a639d6d80ade1bc27b",
"85590bb26fed01a802241bc537d85ba5ef1c6dc2",
"75ff6e425ce304a35f18c0230c0d13d3913a31a9"
],
"question_writer": [
"2a18a3656984d04249f100633e4c1003417a2255",
"2a18a3656984d04249f100633e4c1003417a2255",
"2a18a3656984d04249f100633e4c1003417a2255",
"2a18a3656984d04249f100633e4c1003417a2255",
"2a18a3656984d04249f100633e4c1003417a2255"
],
"search_query": [
"expert",
"expert",
"expert",
"expert",
"expert"
],
"topic_background": [
"research",
"research",
"research",
"research",
"research"
]
} | {
"caption": [
"Figure 1: An illustration of our experimental setup and probing methodology.",
"Table 1: A description of the different resources used to construct the probes in terms of abstract triples.",
"Figure 2: A portion of the WordNet ISA graph (top) and an example distractor function DISTR(τ) (bottom) used to generate distractor choices {a′1, a′2, a′3} for a question q based on information in the graph.",
"Table 2: Details of the GEN(τ) function used to construct gold question-answer pairs (q, a) from a triple graph G.",
"Table 3: Details of our dataset probes, which includes (for WordNetQA above) the number of unique (q, a) pairs, as well as the total number of all questions including perturbations w/ Perturb. (varied distractor choices).",
"Table 4: Example dictionary entries for the word gift.",
"Table 5: The MCQA training datasets used. #Question denotes the number of training samples in our version of each dataset, N the number of choices.",
"Table 6: Instance-level accuracy (%) results on all baselines and main models.",
"Figure 3: Combined model accuracies on the different WordNetQA datasets (divided by red lines) broken down (where possible) into number of hops k (rows) and types of distractor sets and hops k′ (rows) across the different stages of inoculation (# ex.). The dashed red lines show some trends related to multi-hop inference.",
"Figure 4: Inoculation plots showing accuracy of challenge tasks (red solid lines) and original tasks (red dashed lines) using the best aggregate model Ma,k∗ at each k number of challenge examples (x axis). We also plot the effect of using add-some inoculation shown in the blue (1x matching) and black (x2 matching) lines.",
"Table 7: Example questions and answers/inferences (involving ISA reasoning) that illustrate semantic clusters, as well as model predictions (shown as # correct questions/total # questions with perturbations).",
"Table 8: Cluster-level accuracies (%) on the WordNetQA dev. sets for inoculated models and best Choice-only model. ∆ show the absolute difference in percentage points with instance-level accuracies."
],
"file": [
"1-Figure1-1.png",
"3-Table1-1.png",
"4-Figure2-1.png",
"5-Table2-1.png",
"5-Table3-1.png",
"6-Table4-1.png",
"7-Table5-1.png",
"9-Table6-1.png",
"10-Figure3-1.png",
"10-Figure4-1.png",
"11-Table7-1.png",
"11-Table8-1.png"
]
} | [
"Do they focus on Reading Comprehension or multiple choice question answering?",
"After how many hops does accuracy decrease?"
] | [
[
"1912.13337-Dataset Probes and Construction-0",
"1912.13337-Introduction-0"
],
[
"1912.13337-Introduction-8"
]
] | [
"MULTIPLE CHOICE QUESTION ANSWERING",
"one additional hop"
] | 28 |
1809.01541 | Copenhagen at CoNLL--SIGMORPHON 2018: Multilingual Inflection in Context with Explicit Morphosyntactic Decoding | This paper documents the Team Copenhagen system which placed first in the CoNLL--SIGMORPHON 2018 shared task on universal morphological reinflection, Task 2 with an overall accuracy of 49.87. Task 2 focuses on morphological inflection in context: generating an inflected word form, given the lemma of the word and the context it occurs in. Previous SIGMORPHON shared tasks have focused on context-agnostic inflection---the "inflection in context" task was introduced this year. We approach this with an encoder-decoder architecture over character sequences with three core innovations, all contributing to an improvement in performance: (1) a wide context window; (2) a multi-task learning approach with the auxiliary task of MSD prediction; (3) training models in a multilingual fashion. | {
"paragraphs": [
[
"This paper describes our approach and results for Task 2 of the CoNLL–SIGMORPHON 2018 shared task on universal morphological reinflection BIBREF0 . The task is to generate an inflected word form given its lemma and the context in which it occurs.",
"Morphological (re)inflection from context is of particular relevance to the field of computational linguistics: it is compelling to estimate how well a machine-learned system can capture the morphosyntactic properties of a word given its context, and map those properties to the correct surface form for a given lemma.",
"There are two tracks of Task 2 of CoNLL–SIGMORPHON 2018: in Track 1 the context is given in terms of word forms, lemmas and morphosyntactic descriptions (MSD); in Track 2 only word forms are available. See Table TABREF1 for an example. Task 2 is additionally split in three settings based on data size: high, medium and low, with high-resource datasets consisting of up to 70K instances per language, and low-resource datasets consisting of only about 1K instances.",
"The baseline provided by the shared task organisers is a seq2seq model with attention (similar to the winning system for reinflection in CoNLL–SIGMORPHON 2016, BIBREF1 ), which receives information about context through an embedding of the two words immediately adjacent to the target form. We use this baseline implementation as a starting point and achieve the best overall accuracy of 49.87 on Task 2 by introducing three augmentations to the provided baseline system: (1) We use an LSTM to encode the entire available context; (2) We employ a multi-task learning approach with the auxiliary objective of MSD prediction; and (3) We train the auxiliary component in a multilingual fashion, over sets of two to three languages.",
"In analysing the performance of our system, we found that encoding the full context improves performance considerably for all languages: 11.15 percentage points on average, although it also highly increases the variance in results. Multi-task learning, paired with multilingual training and subsequent monolingual finetuning, scored highest for five out of seven languages, improving accuracy by another 9.86% on average."
],
[
"Our system is a modification of the provided CoNLL–SIGMORPHON 2018 baseline system, so we begin this section with a reiteration of the baseline system architecture, followed by a description of the three augmentations we introduce."
],
[
"The CoNLL–SIGMORPHON 2018 baseline is described as follows:",
"",
"",
"The system is an encoder-decoder on character sequences. It takes a lemma as input and generates a word form. The process is conditioned on the context of the lemma [...] The baseline treats the lemma, word form and MSD of the previous and following word as context in track 1. In track 2, the baseline only considers the word forms of the previous and next word. [...] The baseline system concatenates embeddings for context word forms, lemmas and MSDs into a context vector. The baseline then computes character embeddings for each character in the input lemma. Each of these is concatenated with a copy of the context vector. The resulting sequence of vectors is encoded using an LSTM encoder. Subsequently, an LSTM decoder generates the characters in the output word form using encoder states and an attention mechanism.",
"To that we add a few details regarding model size and training schedule:",
"the number of LSTM layers is one;",
"embedding size, LSTM layer size and attention layer size is 100;",
"models are trained for 20 epochs;",
"on every epoch, training data is subsampled at a rate of 0.3;",
"LSTM dropout is applied at a rate 0.3;",
"context word forms are randomly dropped at a rate of 0.1;",
"the Adam optimiser is used, with a default learning rate of 0.001; and",
"trained models are evaluated on the development data (the data for the shared task comes already split in train and dev sets)."
],
[
"Here we compare and contrast our system to the baseline system. A diagram of our system is shown in Figure FIGREF4 .",
"The idea behind this modification is to provide the encoder with access to all morpho-syntactic cues present in the sentence. In contrast to the baseline, which only encodes the immediately adjacent context of a target word, we encode the entire context. All context word forms, lemmas, and MSD tags (in Track 1) are embedded in their respective high-dimensional spaces as before, and their embeddings are concatenated. However, we now reduce the entire past context to a fixed-size vector by encoding it with a forward LSTM, and we similarly represent the future context by encoding it with a backwards LSTM.",
"We introduce an auxiliary objective that is meant to increase the morpho-syntactic awareness of the encoder and to regularise the learning process—the task is to predict the MSD tag of the target form. MSD tag predictions are conditioned on the context encoding, as described in UID15 . Tags are generated with an LSTM one component at a time, e.g. the tag PRO;NOM;SG;1 is predicted as a sequence of four components, INLINEFORM0 PRO, NOM, SG, 1 INLINEFORM1 .",
"For every training instance, we backpropagate the sum of the main loss and the auxiliary loss without any weighting.",
"As MSD tags are only available in Track 1, this augmentation only applies to this track.",
"The parameters of the entire MSD (auxiliary-task) decoder are shared across languages.",
"Since a grouping of the languages based on language family would have left several languages in single-member groups (e.g. Russian is the sole representative of the Slavic family), we experiment with random groupings of two to three languages. Multilingual training is performed by randomly alternating between languages for every new minibatch. We do not pass any information to the auxiliary decoder as to the source language of the signal it is receiving, as we assume abstract morpho-syntactic features are shared across languages.",
"After 20 epochs of multilingual training, we perform 5 epochs of monolingual finetuning for each language. For this phase, we reduce the learning rate to a tenth of the original learning rate, i.e. 0.0001, to ensure that the models are indeed being finetuned rather than retrained.",
"We keep all hyperparameters the same as in the baseline. Training data is split 90:10 for training and validation. We train our models for 50 epochs, adding early stopping with a tolerance of five epochs of no improvement in the validation loss. We do not subsample from the training data.",
"We train models for 50 different random combinations of two to three languages in Track 1, and 50 monolingual models for each language in Track 2. Instead of picking the single model that performs best on the development set and thus risking to select a model that highly overfits that data, we use an ensemble of the five best models, and make the final prediction for a given target form with a majority vote over the five predictions."
],
[
"Test results are listed in Table TABREF17 . Our system outperforms the baseline for all settings and languages in Track 1 and for almost all in Track 2—only in the high resource setting is our system not definitively superior to the baseline.",
"Interestingly, our results in the low resource setting are often higher for Track 2 than for Track 1, even though contextual information is less explicit in the Track 2 data and the multilingual multi-tasking approach does not apply to this track. We interpret this finding as an indicator that a simpler model with fewer parameters works better in a setting of limited training data. Nevertheless, we focus on the low resource setting in the analysis below due to time limitations. As our Track 1 results are still substantially higher than the baseline results, we consider this analysis valid and insightful."
],
[
"We analyse the incremental effect of the different features in our system, focusing on the low-resource setting in Track 1 and using development data.",
"Encoding the entire context with an LSTM highly increases the variance of the observed results. So we trained fifty models for each language and each architecture. Figure FIGREF23 visualises the means and standard deviations over the trained models. In addition, we visualise the average accuracy for the five best models for each language and architecture, as these are the models we use in the final ensemble prediction. Below we refer to these numbers only.",
"The results indicate that encoding the full context with an LSTM highly enhances the performance of the model, by 11.15% on average. This observation explains the high results we obtain also for Track 2.",
"Adding the auxiliary objective of MSD prediction has a variable effect: for four languages (de, en, es, and sv) the effect is positive, while for the rest it is negative. We consider this to be an issue of insufficient data for the training of the auxiliary component in the low resource setting we are working with.",
"We indeed see results improving drastically with the introduction of multilingual training, with multilingual results being 7.96% higher than monolingual ones on average.",
"We studied the five best models for each language as emerging from the multilingual training (listed in Table TABREF27 ) and found no strong linguistic patterns. The en–sv pairing seems to yield good models for these languages, which could be explained in terms of their common language family and similar morphology. The other natural pairings, however, fr–es, and de–sv, are not so frequent among the best models for these pairs of languages.",
"Finally, monolingual finetuning improves accuracy across the board, as one would expect, by 2.72% on average.",
"The final observation to be made based on this breakdown of results is that the multi-tasking approach paired with multilingual training and subsequent monolingual finetuning outperforms the other architectures for five out of seven languages: de, en, fr, ru and sv. For the other two languages in the dataset, es and fi, the difference between this approach and the approach that emerged as best for them is less than 1%. The overall improvement of the multilingual multi-tasking approach over the baseline is 18.30%."
],
[
"Here we study the errors produced by our system on the English test set to better understand the remaining shortcomings of the approach. A small portion of the wrong predictions point to an incorrect interpretation of the morpho-syntactic conditioning of the context, e.g. the system predicted plan instead of plans in the context Our _ include raising private capital. The majority of wrong predictions, however, are nonsensical, like bomb for job, fify for fixing, and gnderrate for understand. This observation suggests that generally the system did not learn to copy the characters of lemma into inflected form, which is all it needs to do in a large number of cases. This issue could be alleviated with simple data augmentation techniques that encourage autoencoding BIBREF2 ."
],
[
"Figure FIGREF32 summarises the average MSD-prediction accuracy for the multi-tasking experiments discussed above. Accuracy here is generally higher than on the main task, with the multilingual finetuned setup for Spanish and the monolingual setup for French scoring best: 66.59% and 65.35%, respectively. This observation illustrates the added difficulty of generating the correct surface form even when the morphosyntactic description has been identified correctly.",
"We observe some correlation between these numbers and accuracy on the main task: for de, en, ru and sv, the brown, pink and blue bars here pattern in the same way as the corresponding INLINEFORM0 's in Figure FIGREF23 . One notable exception to this pattern is fr where inflection gains a lot from multilingual training, while MSD prediction suffers greatly. Notice that the magnitude of change is not always the same, however, even when the general direction matches: for ru, for example, multilingual training benefits inflection much more than in benefits MSD prediction, even though the MSD decoder is the only component that is actually shared between languages. This observation illustrates the two-fold effect of multi-task training: an auxiliary task can either inform the main task through the parameters the two tasks share, or it can help the main task learning through its regularising effect."
],
[
"Our system is inspired by previous work on multi-task learning and multi-lingual learning, mainly building on two intuitions: (1) jointly learning related tasks tends to be beneficial BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 ; and (2) jointly learning related languages in an MTL-inspired framework tends to be beneficial BIBREF8 , BIBREF9 , BIBREF10 . In the context of computational morphology, multi-lingual approaches have previously been employed for morphological reinflection BIBREF2 and for paradigm completion BIBREF11 . In both of these cases, however, the available datasets covered more languages, 40 and 21, respectively, which allowed for linguistically-motivated language groupings and for parameter sharing directly on the level of characters. BIBREF10 explore parameter sharing between related languages for dependency parsing, and find that sharing is more beneficial in the case of closely related languages."
],
[
"In this paper we described our system for the CoNLL–SIGMORPHON 2018 shared task on Universal Morphological Reinflection, Task 2, which achieved the best performance out of all systems submitted, an overall accuracy of 49.87. We showed in an ablation study that this is due to three core innovations, which extend a character-based encoder-decoder model: (1) a wide context window, encoding the entire available context; (2) multi-task learning with the auxiliary task of MSD prediction, which acts as a regulariser; (3) a multilingual approach, exploiting information across languages. In future work we aim to gain better understanding of the increase in variance of the results introduced by each of our modifications and the reasons for the varying effect of multi-task learning for different languages."
],
[
"We gratefully acknowledge the support of the NVIDIA Corporation with the donation of the Titan Xp GPU used for this research."
]
],
"section_name": [
"Introduction",
"System Description",
"Baseline",
"Our system",
"Results and Discussion",
"Ablation Study",
"Error analysis",
"MSD prediction",
"Related Work",
"Conclusions",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"4271841a60c3aab36d587346dc2a34438bab4ea6",
"b1d01dbd654994515e5a2fbdb914ef01022e960e"
],
"answer": [
{
"evidence": [
"The parameters of the entire MSD (auxiliary-task) decoder are shared across languages.",
"Since a grouping of the languages based on language family would have left several languages in single-member groups (e.g. Russian is the sole representative of the Slavic family), we experiment with random groupings of two to three languages. Multilingual training is performed by randomly alternating between languages for every new minibatch. We do not pass any information to the auxiliary decoder as to the source language of the signal it is receiving, as we assume abstract morpho-syntactic features are shared across languages."
],
"extractive_spans": [
"Multilingual training is performed by randomly alternating between languages for every new minibatch"
],
"free_form_answer": "",
"highlighted_evidence": [
"The parameters of the entire MSD (auxiliary-task) decoder are shared across languages.\n\nSince a grouping of the languages based on language family would have left several languages in single-member groups (e.g. Russian is the sole representative of the Slavic family), we experiment with random groupings of two to three languages. Multilingual training is performed by randomly alternating between languages for every new minibatch."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Since a grouping of the languages based on language family would have left several languages in single-member groups (e.g. Russian is the sole representative of the Slavic family), we experiment with random groupings of two to three languages. Multilingual training is performed by randomly alternating between languages for every new minibatch. We do not pass any information to the auxiliary decoder as to the source language of the signal it is receiving, as we assume abstract morpho-syntactic features are shared across languages."
],
"extractive_spans": [
"by randomly alternating between languages for every new minibatch"
],
"free_form_answer": "",
"highlighted_evidence": [
"Multilingual training is performed by randomly alternating between languages for every new minibatch. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"95209dc066db26db093bbc8ce91029c6e92dabf6",
"ad3d47db35fed8972e0044c94555eb756ceed1c7"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Official shared task test set results."
],
"extractive_spans": [],
"free_form_answer": "German, English, Spanish, Finnish, French, Russian, Swedish.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Official shared task test set results."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"c8565161d43a6246f0b812c033debae3392e6094",
"ef4505ce7cc4c7335b4b0e6eb1caf6e30066a333"
],
"answer": [
{
"evidence": [
"The system is an encoder-decoder on character sequences. It takes a lemma as input and generates a word form. The process is conditioned on the context of the lemma [...] The baseline treats the lemma, word form and MSD of the previous and following word as context in track 1. In track 2, the baseline only considers the word forms of the previous and next word. [...] The baseline system concatenates embeddings for context word forms, lemmas and MSDs into a context vector. The baseline then computes character embeddings for each character in the input lemma. Each of these is concatenated with a copy of the context vector. The resulting sequence of vectors is encoded using an LSTM encoder. Subsequently, an LSTM decoder generates the characters in the output word form using encoder states and an attention mechanism."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Subsequently, an LSTM decoder generates the characters in the output word form using encoder states and an attention mechanism."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"The baseline provided by the shared task organisers is a seq2seq model with attention (similar to the winning system for reinflection in CoNLL–SIGMORPHON 2016, BIBREF1 ), which receives information about context through an embedding of the two words immediately adjacent to the target form. We use this baseline implementation as a starting point and achieve the best overall accuracy of 49.87 on Task 2 by introducing three augmentations to the provided baseline system: (1) We use an LSTM to encode the entire available context; (2) We employ a multi-task learning approach with the auxiliary objective of MSD prediction; and (3) We train the auxiliary component in a multilingual fashion, over sets of two to three languages."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The baseline provided by the shared task organisers is a seq2seq model with attention (similar to the winning system for reinflection in CoNLL–SIGMORPHON 2016, BIBREF1 ), which receives information about context through an embedding of the two words immediately adjacent to the target form. We use this baseline implementation as a starting point and achieve the best overall accuracy of 49.87 on Task 2 by introducing three augmentations to the provided baseline system: (1) We use an LSTM to encode the entire available context; (2) We employ a multi-task learning approach with the auxiliary objective of MSD prediction; and (3) We train the auxiliary component in a multilingual fashion, over sets of two to three languages."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"367d92cf9ea6703b14dc1901c955ca66c061dab6",
"d186e529f7acf188b78ea1f147cea5e4b0c511c0"
],
"answer": [
{
"evidence": [
"The system is an encoder-decoder on character sequences. It takes a lemma as input and generates a word form. The process is conditioned on the context of the lemma [...] The baseline treats the lemma, word form and MSD of the previous and following word as context in track 1. In track 2, the baseline only considers the word forms of the previous and next word. [...] The baseline system concatenates embeddings for context word forms, lemmas and MSDs into a context vector. The baseline then computes character embeddings for each character in the input lemma. Each of these is concatenated with a copy of the context vector. The resulting sequence of vectors is encoded using an LSTM encoder. Subsequently, an LSTM decoder generates the characters in the output word form using encoder states and an attention mechanism."
],
"extractive_spans": [
"LSTM"
],
"free_form_answer": "",
"highlighted_evidence": [
"Subsequently, an LSTM decoder generates the characters in the output word form using encoder states and an attention mechanism."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The system is an encoder-decoder on character sequences. It takes a lemma as input and generates a word form. The process is conditioned on the context of the lemma [...] The baseline treats the lemma, word form and MSD of the previous and following word as context in track 1. In track 2, the baseline only considers the word forms of the previous and next word. [...] The baseline system concatenates embeddings for context word forms, lemmas and MSDs into a context vector. The baseline then computes character embeddings for each character in the input lemma. Each of these is concatenated with a copy of the context vector. The resulting sequence of vectors is encoded using an LSTM encoder. Subsequently, an LSTM decoder generates the characters in the output word form using encoder states and an attention mechanism."
],
"extractive_spans": [
"LSTM"
],
"free_form_answer": "",
"highlighted_evidence": [
"Subsequently, an LSTM decoder generates the characters in the output word form using encoder states and an attention mechanism."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"4565b970ac022b800c018983c66079dd9febec3f",
"a1bdf3c4409798599bea060fabde1ec005666e53"
],
"answer": [
{
"evidence": [
"The system is an encoder-decoder on character sequences. It takes a lemma as input and generates a word form. The process is conditioned on the context of the lemma [...] The baseline treats the lemma, word form and MSD of the previous and following word as context in track 1. In track 2, the baseline only considers the word forms of the previous and next word. [...] The baseline system concatenates embeddings for context word forms, lemmas and MSDs into a context vector. The baseline then computes character embeddings for each character in the input lemma. Each of these is concatenated with a copy of the context vector. The resulting sequence of vectors is encoded using an LSTM encoder. Subsequently, an LSTM decoder generates the characters in the output word form using encoder states and an attention mechanism."
],
"extractive_spans": [
"LSTM"
],
"free_form_answer": "",
"highlighted_evidence": [
"The resulting sequence of vectors is encoded using an LSTM encoder. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The system is an encoder-decoder on character sequences. It takes a lemma as input and generates a word form. The process is conditioned on the context of the lemma [...] The baseline treats the lemma, word form and MSD of the previous and following word as context in track 1. In track 2, the baseline only considers the word forms of the previous and next word. [...] The baseline system concatenates embeddings for context word forms, lemmas and MSDs into a context vector. The baseline then computes character embeddings for each character in the input lemma. Each of these is concatenated with a copy of the context vector. The resulting sequence of vectors is encoded using an LSTM encoder. Subsequently, an LSTM decoder generates the characters in the output word form using encoder states and an attention mechanism."
],
"extractive_spans": [
"LSTM"
],
"free_form_answer": "",
"highlighted_evidence": [
"The resulting sequence of vectors is encoded using an LSTM encoder."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"64b73eba239410f2e0d7bfaf15bb1967ba48d382",
"b7fbb3498be6793e47192a8a5c101526970140df"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Example input sentence. Context MSD tags and lemmas, marked in gray, are only available in Track 1. The cyan square marks the main objective of predicting the word form made. The magenta square marks the auxiliary objective of predicting the MSD tag V;PST;V.PTCP;PASS."
],
"extractive_spans": [],
"free_form_answer": "The task of predicting MSD tags: V, PST, V.PCTP, PASS.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Example input sentence. Context MSD tags and lemmas, marked in gray, are only available in Track 1. The cyan square marks the main objective of predicting the word form made. The magenta square marks the auxiliary objective of predicting the MSD tag V;PST;V.PTCP;PASS."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"There are two tracks of Task 2 of CoNLL–SIGMORPHON 2018: in Track 1 the context is given in terms of word forms, lemmas and morphosyntactic descriptions (MSD); in Track 2 only word forms are available. See Table TABREF1 for an example. Task 2 is additionally split in three settings based on data size: high, medium and low, with high-resource datasets consisting of up to 70K instances per language, and low-resource datasets consisting of only about 1K instances."
],
"extractive_spans": [
"morphosyntactic descriptions (MSD)"
],
"free_form_answer": "",
"highlighted_evidence": [
"There are two tracks of Task 2 of CoNLL–SIGMORPHON 2018: in Track 1 the context is given in terms of word forms, lemmas and morphosyntactic descriptions (MSD); in Track 2 only word forms are available."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"4bfc70ebf8b1cf0ca123be1357e29e29b8bbd9fe",
"b44bc06229b798fb9fea3461fbadd11ca650d482"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"How do they perform multilingual training?",
"What languages are evaluated?",
"Does the model have attention?",
"What architecture does the decoder have?",
"What architecture does the encoder have?",
"What is MSD prediction?",
"What type of inflections are considered?"
],
"question_id": [
"5cb610d3d5d7d447b4cd5736d6a7d8262140af58",
"c32adef59efcb9d1a5b10e1d7c999a825c9e6d9a",
"b9d168da5321a7d7b812c52bb102a05210fe45bd",
"0c234db3b380c27c4c70579a5d6948e1e3b24ff1",
"fa527becb8e2551f4fd2ae840dbd4a68971349e0",
"32a3c248b928d4066ce00bbb0053534ee62596e7",
"c9b8d3858c112859eabee54248b874331c48f71b"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
"",
"",
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Example input sentence. Context MSD tags and lemmas, marked in gray, are only available in Track 1. The cyan square marks the main objective of predicting the word form made. The magenta square marks the auxiliary objective of predicting the MSD tag V;PST;V.PTCP;PASS.",
"Figure 1: Schematic representation of our approach. The focus here is on the prediction of the final character, e, of the word form made. The attention matrix indicates that this character should be based on the final state of the encoder, which contains information about the final character of the input form, and the past and future context. The input and output of the auxiliary decoder are marked in magenta.",
"Table 2: Official shared task test set results.",
"Table 3: Five best multilingual models for each language.",
"Figure 2: Mean (•) and standard deviation (error bars) over 100 models trained for each language and architecture, and average (×) over the 5 best models. LSTM Enc refers to a model that encodes the full context with an LSTM; Multi-task builds on LSTM Enc with an auxiliary objective of MSD prediction; Multilingual refers to a model with an auxiliary component trained in a multilingual fashion; Finetuned refers to a multilingual model topped with monolingual finetuning.",
"Figure 3: Accuracy on the auxiliary task of MSD prediction with different models. See the caption of Figure 2 for more details."
],
"file": [
"2-Table1-1.png",
"2-Figure1-1.png",
"3-Table2-1.png",
"4-Table3-1.png",
"5-Figure2-1.png",
"5-Figure3-1.png"
]
} | [
"What languages are evaluated?",
"What is MSD prediction?"
] | [
[
"1809.01541-3-Table2-1.png"
],
[
"1809.01541-Introduction-2",
"1809.01541-2-Table1-1.png"
]
] | [
"German, English, Spanish, Finnish, French, Russian, Swedish.",
"The task of predicting MSD tags: V, PST, V.PCTP, PASS."
] | 29 |
1809.09194 | Stochastic Answer Networks for SQuAD 2.0 | This paper presents an extension of the Stochastic Answer Network (SAN), one of the state-of-the-art machine reading comprehension models, that is able to judge whether a question is unanswerable. The extended SAN contains two components: a span detector and a binary classifier for judging whether the question is unanswerable, and both components are jointly optimized. Experiments show that SAN achieves results competitive with the state-of-the-art on the Stanford Question Answering Dataset (SQuAD) 2.0. To facilitate research in this field, we release our code: https://github.com/kevinduh/san_mrc. | {
"paragraphs": [
[
"Teaching machine to read and comprehend a given passage/paragraph and answer its corresponding questions is a challenging task. It is also one of the long-term goals of natural language understanding, and has important applications in e.g., building intelligent agents for conversation and customer service support. In a real world setting, it is necessary to judge whether the given questions are answerable given the available knowledge, and then generate correct answers for the ones which are able to infer an answer in the passage or an empty answer (as an unanswerable question) otherwise.",
"In comparison with many existing MRC systems BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , which extract answers by finding a sub-string in the passages/paragraphs, we propose a model that not only extracts answers but also predicts whether such an answer should exist. Using a multi-task learning approach (c.f. BIBREF5 ), we extend the Stochastic Answer Network (SAN) BIBREF1 for MRC answer span detector to include a classifier that whether the question is unanswerable. The unanswerable classifier is a pair-wise classification model BIBREF6 which predicts a label indicating whether the given pair of a passage and a question is unanswerable. The two models share the same lower layer to save the number of parameters, and separate the top layers for different tasks (the span detector and binary classifier).",
"Our model is pretty simple and intuitive, yet efficient. Without relying on the large pre-trained language models (ELMo) BIBREF7 , the proposed model achieves competitive results to the state-of-the-art on Stanford Question Answering Dataset (SQuAD) 2.0.",
"The contribution of this work is summarized as follows. First, we propose a simple yet efficient model for MRC that handles unanswerable questions and is optimized jointly. Second, our model achieves competitive results on SQuAD v2.0."
],
[
"The Machine Reading Comprehension is a task which takes a question INLINEFORM0 and a passage/paragraph INLINEFORM1 as inputs, and aims to find an answer span INLINEFORM2 in INLINEFORM3 . We assume that if the question is answerable, the answer INLINEFORM4 exists in INLINEFORM5 as a contiguous text string; otherwise, INLINEFORM6 is an empty string indicating an unanswerable question. Note that to handle the unanswerable questions, we manually append a dumpy text string NULL at the end of each corresponding passage/paragraph. Formally, the answer is formulated as INLINEFORM7 . In case of unanswerable questions, INLINEFORM8 points to the last token of the passage.",
"Our model is a variation of SAN BIBREF1 , as shown in Figure FIGREF2 . The main difference is the additional binary classifier added in the model justifying whether the question is unanswerable. Roughly, the model includes two different layers: the shared layer and task specific layer. The shared layer is almost identical to the lower layers of SAN, which has a lexicon encoding layer, a contextual layer and a memory generation layer. On top of it, there are different answer modules for different tasks. We employ the SAN answer module for the span detector and a one-layer feed forward neural network for the binary classification task. It can also be viewed as a multi-task learning BIBREF8 , BIBREF5 , BIBREF9 . We will briefly describe the model from ground up as follows. Detailed descriptions can be found in BIBREF1 .",
"Lexicon Encoding Layer. We map the symbolic/surface feature of INLINEFORM0 and INLINEFORM1 into neural space via word embeddings , 16-dim part-of-speech (POS) tagging embeddings, 8-dim named-entity embeddings and 4-dim hard-rule features. Note that we use small embedding size of POS and NER to reduce model size and they mainly serve the role of coarse-grained word clusters. Additionally, we use question enhanced passages word embeddings which can viewwed as soft matching between questions and passages. At last, we use two separate two-layer position-wise Feed-Forward Networks (FFN) BIBREF11 , BIBREF1 to map both question and passage encodings into the same dimension. As results, we obtain the final lexicon embeddings for the tokens for INLINEFORM2 as a matrix INLINEFORM3 , and tokens in INLINEFORM4 as INLINEFORM5 .",
"Contextual Encoding Layer. A shared two-layers BiLSTM is used on the top to encode the contextual information of both passages and questions. To avoid overfitting, we concatenate a pre-trained 600-dimensional CoVe vectors BIBREF12 trained on German-English machine translation dataset, with the aforementioned lexicon embeddings as the final input of the contextual encoding layer, and also with the output of the first contextual encoding layer as the input of its second encoding layer. Thus, we obtain the final representation of the contextual encoding layer by a concatenation of the outputs of two BiLSTM: INLINEFORM0 for questions and INLINEFORM1 for passages.",
"Memory Generation Layer. In this layer, we generate a working memory by fusing information from both passages INLINEFORM0 and questions INLINEFORM1 . The attention function BIBREF11 is used to compute the similarity score between passages and questions as: INLINEFORM2 ",
"Note that INLINEFORM0 and INLINEFORM1 is transformed from INLINEFORM2 and INLINEFORM3 by one layer neural network INLINEFORM4 , respectively. A question-aware passage representation is computed as INLINEFORM5 . After that, we use the method of BIBREF13 to apply self attention to the passage: INLINEFORM6 ",
"where INLINEFORM0 means that we only drop diagonal elements on the similarity matrix (i.e., attention with itself). At last, INLINEFORM1 and INLINEFORM2 are concatenated and are passed through a BiLSTM to form the final memory: INLINEFORM3 .",
"Span detector. We adopt a multi-turn answer module for the span detector BIBREF1 . Formally, at time step INLINEFORM0 in the range of INLINEFORM1 , the state is defined by INLINEFORM2 . The initial state INLINEFORM3 is the summary of the INLINEFORM4 : INLINEFORM5 , where INLINEFORM6 . Here, INLINEFORM7 is computed from the previous state INLINEFORM8 and memory INLINEFORM9 : INLINEFORM10 and INLINEFORM11 . Finally, a bilinear function is used to find the begin and end point of answer spans at each reasoning step INLINEFORM12 : DISPLAYFORM0 DISPLAYFORM1 ",
"The final prediction is the average of each time step: INLINEFORM0 . We randomly apply dropout on the step level in each time step during training, as done in BIBREF1 .",
"Unanswerable classifier. We adopt a one-layer neural network as our unanswerable binary classifier: DISPLAYFORM0 ",
", where INLINEFORM0 is the summary of the memory: INLINEFORM1 , where INLINEFORM2 . INLINEFORM3 denotes the probability of the question which is unanswerable.",
"Objective The objective function of the joint model has two parts: DISPLAYFORM0 ",
"Following BIBREF0 , the span loss function is defined: DISPLAYFORM0 ",
"The objective function of the binary classifier is defined: DISPLAYFORM0 ",
"where INLINEFORM0 is a binary variable: INLINEFORM1 indicates the question is unanswerable and INLINEFORM2 denotes the question is answerable."
],
[
"We evaluate our system on SQuAD 2.0 dataset BIBREF14 , a new MRC dataset which is a combination of Stanford Question Answering Dataset (SQuAD) 1.0 BIBREF15 and additional unanswerable question-answer pairs. The answerable pairs are around 100K; while the unanswerable questions are around 53K. This dataset contains about 23K passages and they come from approximately 500 Wikipedia articles. All the questions and answers are obtained by crowd-sourcing. Two evaluation metrics are used: Exact Match (EM) and Macro-averaged F1 score (F1) BIBREF14 ."
],
[
"We utilize spaCy tool to tokenize the both passages and questions, and generate lemma, part-of-speech and named entity tags. The word embeddings are initialized with pre-trained 300-dimensional GloVe BIBREF10 . A 2-layer BiLSTM is used encoding the contextual information of both questions and passages. Regarding the hidden size of our model, we search greedily among INLINEFORM0 . During training, Adamax BIBREF16 is used as our optimizer. The min-batch size is set to 32. The learning rate is initialized to 0.002 and it is halved after every 10 epochs. The dropout rate is set to 0.1. To prevent overfitting, we also randomly set 0.5% words in both passages and questions as unknown words during the training. Here, we use a special token unk to indicate a word which doesn't appear in GloVe. INLINEFORM1 in Eq EQREF9 is set to 1."
],
[
"We would like to investigate effectiveness the proposed joint model. To do so, the same shared layer/architecture is employed in the following variants of the proposed model:",
"The results in terms of EM and F1 is summarized in Table TABREF20 . We observe that Joint SAN outperforms the SAN baseline with a large margin, e.g., 67.89 vs 69.27 (+1.38) and 70.68 vs 72.20 (+1.52) in terms of EM and F1 scores respectively, so it demonstrates the effectiveness of the joint optimization. By incorporating the output information of classifier into Joint SAN, it obtains a slight improvement, e.g., 72.2 vs 72.66 (+0.46) in terms of F1 score. By analyzing the results, we found that in most cases when our model extract an NULL string answer, the classifier also predicts it as an unanswerable question with a high probability.",
"Table TABREF21 reports comparison results in literature published . Our model achieves state-of-the-art on development dataset in setting without pre-trained large language model (ELMo). Comparing with the much complicated model R.M.-Reader + Verifier, which includes several components, our model still outperforms by 0.7 in terms of F1 score. Furthermore, we observe that ELMo gives a great boosting on the performance, e.g., 2.8 points in terms of F1 for DocQA. This encourages us to incorporate ELMo into our model in future.",
"Analysis. To better understand our model, we analyze the accuracy of the classifier in our joint model. We obtain 75.3 classification accuracy on the development with the threshold 0.5. By increasing value of INLINEFORM0 in Eq EQREF9 , the classification accuracy reached to 76.8 ( INLINEFORM1 ), however the final results of our model only have a small improvement (+0.2 in terms of F1 score). It shows that it is important to make balance between these two components: the span detector and unanswerable classifier."
],
[
"To sum up, we proposed a simple yet efficient model based on SAN. It showed that the joint learning algorithm boosted the performance on SQuAD 2.0. We also would like to incorporate ELMo into our model in future."
],
[
"We thank Yichong Xu, Shuohang Wang and Sheng Zhang for valuable discussions and comments. We also thank Robin Jia for the help on SQuAD evaluations. "
]
],
"section_name": [
"Background",
"Model",
"Setup",
"Implementation details",
"Results",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"67a4b963968f98cd753433e9dabb2b341c8303b3",
"be8469790f98aa595f33a8417b39a8b9568e431f"
],
"answer": [
{
"evidence": [
"Memory Generation Layer. In this layer, we generate a working memory by fusing information from both passages INLINEFORM0 and questions INLINEFORM1 . The attention function BIBREF11 is used to compute the similarity score between passages and questions as: INLINEFORM2"
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The attention function BIBREF11 is used to compute the similarity score between passages and questions as: INLINEFORM2"
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"Memory Generation Layer. In this layer, we generate a working memory by fusing information from both passages INLINEFORM0 and questions INLINEFORM1 . The attention function BIBREF11 is used to compute the similarity score between passages and questions as: INLINEFORM2"
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Memory Generation Layer. In this layer, we generate a working memory by fusing information from both passages INLINEFORM0 and questions INLINEFORM1 . The attention function BIBREF11 is used to compute the similarity score between passages and questions as: INLINEFORM2"
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"36b40dc9a0efe05cf88605c54b6a15173cd7c70d",
"b246c3d8e40af4978cdd3da44a38badf1c2be87a"
],
"answer": [
{
"evidence": [
"Table TABREF21 reports comparison results in literature published . Our model achieves state-of-the-art on development dataset in setting without pre-trained large language model (ELMo). Comparing with the much complicated model R.M.-Reader + Verifier, which includes several components, our model still outperforms by 0.7 in terms of F1 score. Furthermore, we observe that ELMo gives a great boosting on the performance, e.g., 2.8 points in terms of F1 for DocQA. This encourages us to incorporate ELMo into our model in future.",
"The results in terms of EM and F1 is summarized in Table TABREF20 . We observe that Joint SAN outperforms the SAN baseline with a large margin, e.g., 67.89 vs 69.27 (+1.38) and 70.68 vs 72.20 (+1.52) in terms of EM and F1 scores respectively, so it demonstrates the effectiveness of the joint optimization. By incorporating the output information of classifier into Joint SAN, it obtains a slight improvement, e.g., 72.2 vs 72.66 (+0.46) in terms of F1 score. By analyzing the results, we found that in most cases when our model extract an NULL string answer, the classifier also predicts it as an unanswerable question with a high probability.",
"FLOAT SELECTED: Table 2: Comparison with published results in literature. 1: results are extracted from (Rajpurkar et al., 2018); 2: results are extracted from (Hu et al., 2018). ∗: it is unclear which model is used. #: we only evaluate the Joint SAN in the submission."
],
"extractive_spans": [],
"free_form_answer": "SAN Baseline, BNA, DocQA, R.M-Reader, R.M-Reader+Verifier and DocQA+ELMo",
"highlighted_evidence": [
"Table TABREF21 reports comparison results in literature published .",
"The results in terms of EM and F1 is summarized in Table TABREF20 . We observe that Joint SAN outperforms the SAN baseline with a large margin, e.g., 67.89 vs 69.27 (+1.38) and 70.68 vs 72.20 (+1.52) in terms of EM and F1 scores respectively, so it demonstrates the effectiveness of the joint optimization.",
"FLOAT SELECTED: Table 2: Comparison with published results in literature. 1: results are extracted from (Rajpurkar et al., 2018); 2: results are extracted from (Hu et al., 2018). ∗: it is unclear which model is used. #: we only evaluate the Joint SAN in the submission."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 2: Comparison with published results in literature. 1: results are extracted from (Rajpurkar et al., 2018); 2: results are extracted from (Hu et al., 2018). ∗: it is unclear which model is used. #: we only evaluate the Joint SAN in the submission.",
"Table TABREF21 reports comparison results in literature published . Our model achieves state-of-the-art on development dataset in setting without pre-trained large language model (ELMo). Comparing with the much complicated model R.M.-Reader + Verifier, which includes several components, our model still outperforms by 0.7 in terms of F1 score. Furthermore, we observe that ELMo gives a great boosting on the performance, e.g., 2.8 points in terms of F1 for DocQA. This encourages us to incorporate ELMo into our model in future."
],
"extractive_spans": [],
"free_form_answer": "BNA, DocQA, R.M-Reader, R.M-Reader + Verifier, DocQA + ELMo, R.M-Reader+Verifier+ELMo",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Comparison with published results in literature. 1: results are extracted from (Rajpurkar et al., 2018); 2: results are extracted from (Hu et al., 2018). ∗: it is unclear which model is used. #: we only evaluate the Joint SAN in the submission.",
"Table TABREF21 reports comparison results in literature published ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"4ab204417fa2beefb863f9603ff46d5def552e0b",
"833eda90ea11477732da1c6e6fe3e51bfa200e20"
],
"answer": [
{
"evidence": [
"Span detector. We adopt a multi-turn answer module for the span detector BIBREF1 . Formally, at time step INLINEFORM0 in the range of INLINEFORM1 , the state is defined by INLINEFORM2 . The initial state INLINEFORM3 is the summary of the INLINEFORM4 : INLINEFORM5 , where INLINEFORM6 . Here, INLINEFORM7 is computed from the previous state INLINEFORM8 and memory INLINEFORM9 : INLINEFORM10 and INLINEFORM11 . Finally, a bilinear function is used to find the begin and end point of answer spans at each reasoning step INLINEFORM12 : DISPLAYFORM0 DISPLAYFORM1",
"The final prediction is the average of each time step: INLINEFORM0 . We randomly apply dropout on the step level in each time step during training, as done in BIBREF1 ."
],
"extractive_spans": [
"adopt a multi-turn answer module for the span detector BIBREF1"
],
"free_form_answer": "",
"highlighted_evidence": [
"Span detector. We adopt a multi-turn answer module for the span detector BIBREF1 . Formally, at time step INLINEFORM0 in the range of INLINEFORM1 , the state is defined by INLINEFORM2 . The initial state INLINEFORM3 is the summary of the INLINEFORM4 : INLINEFORM5 , where INLINEFORM6 . Here, INLINEFORM7 is computed from the previous state INLINEFORM8 and memory INLINEFORM9 : INLINEFORM10 and INLINEFORM11 . Finally, a bilinear function is used to find the begin and end point of answer spans at each reasoning step INLINEFORM12 : DISPLAYFORM0 DISPLAYFORM1\n\nThe final prediction is the average of each time step: INLINEFORM0 . We randomly apply dropout on the step level in each time step during training, as done in BIBREF1 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Do they use attention?",
"What other models do they compare to?",
"What is the architecture of the span detector?"
],
"question_id": [
"45e9533586199bde19313cd43b3d0ecadcaf7a33",
"d3dbb5c22ef204d85707d2d24284cc77fa816b6c",
"a5e49cdb91d9fd0ca625cc1ede236d3d4672403c"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Examples from SQuAD v2.0. The first question is answerable which indicates its answer highlighted in blue can be found in the paragraph; while the second question is unanswerable and its plausible answer is highlighted in red.",
"Figure 2: Architecture of the proposed model for Reading Comprehension: It includes two components: a span detector (the upper left SAN answer module) and an unanswerable classifier (the upper right module). It contains two sets of layers: the shared layers including a lexicon encoding layer, contextual encoding layer and memory generation layer; and the task specific layers including the SAN answer module for span detection, and a binary classifier determining whether the question is unanswerable. The model is learned jointly.",
"Table 1: Performance on the SQuAD 2.0 development dataset.",
"Table 2: Comparison with published results in literature. 1: results are extracted from (Rajpurkar et al., 2018); 2: results are extracted from (Hu et al., 2018). ∗: it is unclear which model is used. #: we only evaluate the Joint SAN in the submission."
],
"file": [
"1-Figure1-1.png",
"2-Figure2-1.png",
"4-Table1-1.png",
"4-Table2-1.png"
]
} | [
"What other models do they compare to?"
] | [
[
"1809.09194-4-Table2-1.png",
"1809.09194-Results-2",
"1809.09194-Results-1"
]
] | [
"BNA, DocQA, R.M-Reader, R.M-Reader + Verifier, DocQA + ELMo, R.M-Reader+Verifier+ELMo"
] | 30 |
1604.05372 | Clustering Comparable Corpora of Russian and Ukrainian Academic Texts: Word Embeddings and Semantic Fingerprints | We present our experience in applying distributional semantics (neural word embeddings) to the problem of representing and clustering documents in a bilingual comparable corpus. Our data is a collection of Russian and Ukrainian academic texts, whose topics are their academic fields. In order to build language-independent semantic representations of these documents, we train neural distributional models on monolingual corpora and learn the optimal linear transformation of vectors from one language to another. The resulting vectors are then used to produce `semantic fingerprints' of documents, serving as input to a clustering algorithm. The presented method is compared to several baselines including `orthographic translation' with Levenshtein edit distance and outperforms them by a large margin. We also show that language-independent `semantic fingerprints' are superior to multi-lingual clustering algorithms proposed in previous work, while requiring fewer linguistic resources. | {
"paragraphs": [
[
"This research addresses the problem of representing the semantics of text documents in multi-lingual comparable corpora. We present a new approach to this problem, based on neural embeddings, and test it on the task of clustering texts into meaningful classes depending on their topics. The setting is unsupervised, meaning that one either does not have enough annotated data to train a supervised classifier or does not want to be limited with a pre-defined set of classes. There is a lot of sufficiently good approaches to this problem in the case of mono-lingual text collections, but the presence of multiple languages introduces complications.",
"When a text collection contains documents in several languages, it becomes impractical to simply represent the documents as vectors of words occurring in them (\"bag-of-words\"), as the words surface forms are different, even in closely-related languages. Thus, one has to invent means to cross the inter-lingual gap and bring all documents to some sort of shared representation, without losing information about their topics or categories.",
"Of course, one obvious way to solve this problem is to translate all documents into one language, and then apply any clustering algorithm. However, this requires either buying human/machine translation services (which can be expensive if you deal with large text collection) or training own statistical machine translation model (which as a rule requires big parallel corpus). This is the reason to search for other solutions.",
"In this paper, a novel way of reducing the problem of cross-lingual document representation to a monolingual setting is proposed. Essentially, we train Continuous Bag-of-Words models BIBREF0 on large comparable monolingual corpora for two languages our dataset consists of. This provides us with vector representations of words, allowing to measure their semantic similarity. Then, a linear transformation matrix from vectors of language A to vectors of language B is learned, using a small bilingual dictionary as training data. This matrix is then employed to `project' word and document representations from semantic space of language A to semantic space of language B. It allows not only quite accurate `translation' of words, but also of document `semantic fingerprints' (dense representations of document semantics, calculated as an average of the trained distributional vectors for all the words in document).",
"This approach is evaluated in a setting, where the input is a collection of documents in several languages and some number of topics to which these documents belong (we also have large monolingual corpora to train distributional models on). For each document, we are given its language, but not its topic. The task is to cluster this collection so that documents belonging to one topic were clustered together, independent of their language. Note that we are interested in clustering the collection as a whole, not each language separately (which is trivial).",
"Our evaluation data consists of comparable corpora of Russian and Ukrainian academic texts. On this material, we show that the `translated semantic fingerprints' method represents documents in different languages precisely enough to allow almost exact clustering according to document topics, with only 5% of incorrect assignments. It significantly outperforms both naive bag-of-words baseline and the not-so-naive method of `orthographic translation' based on Damerau-Levenshtein distance, even enriched with dictionary mappings. At the same time, it does not require large parallel corpora or a ready-made statistical machine translation model.",
"The rest of the paper is structured as follows. In Section \"Related Work\" we describe the foundations of our approach and the related work. Section \"Academic texts as Comparable Corpora\" introduces the employed corpora and the story behind them. Section \"Learning to Translate: Ukrainian-to-Russian transformations\" is dedicated to learning the transformation matrix, and Section \"Experiment Design and Evaluation\" describes our experimental setting and evaluation results. We discuss the findings in Section \"Discussion\" and conclude in Section \"Conclusion and Future Work\" , also suggesting directions for future work."
],
[
"Clustering multi-lingual documents has received much attention in natural language processing. Among approaches not using some form of machine translation, one can mention BIBREF1 , who essentially employ a bilingual dictionary to bring some words in the documents to a language-independent form and then to perform clustering. In the section \"Experiment Design and Evaluation\" we show that our approach based on neural embeddings significantly outperforms their reported results.",
" BIBREF2 proposed training joint multi-lingual neural embedding models. Theoretically, this can be used to achieve our aim of language-independent semantic representations for documents. Unfortunately, it demands a large word-aligned parallel corpus. This is not the case with the more recent Trans-gram approach introduced in BIBREF3 , also able to learn multi-lingual models. However, it still needs sentence-aligned corpora to train on (in the size of millions of paired sentences). Large parallel corpora (whether word- or sentence-aligned) are often a scarce resource, especially in the case of under-represented languages.",
"The approach described in this paper takes as an input only comparable monolingual corpora and bilingual dictionaries in the size of several thousand word pairs. Such resources are much easier to find and evaluate. We employ the idea of learning a linear transformation matrix to map or project word embeddings from the semantic space of one language to that of another. This idea was first proposed in BIBREF4 , who applied it to lexical translation between English, Spanish, Czech and Vietnamese. We extend it from continuous representations of single words or collocations to `semantic fingerprints' of documents as a whole."
],
[
"The Russian and Ukrainian languages are mainly spoken in Russian Federation and the Ukraine and belong to the East-Slavic group of the Indo-European language family. They share many common morphosyntactic features: both are SVO languages with free word order and rich morphology, both use the Cyrillic alphabet and share many common cognates.",
"Both Russia and the Ukraine have common academic tradition that makes it easier to collect corpora, which are comparable in terms of both genre and strictly defined academic fields. We work with such a corpus of Russian and Ukrainian academic texts, initially collected for the purposes of cross-lingual plagiarism detection. This data is available online through a number of library services, but unfortunately cannot be republished due to copyright limitations.",
"The Ukrainian subcorpus contains about 60 thousand extended summaries (Russian and Ukrainian russian`автореферат', `avtoreferat') of theses submitted between 1998 and 2011. The Russian subcorpus is smaller in the number of documents (about 16 thousand, approximately the same time period), but the documents are full texts of theses, thus the total volume of the Russian subcorpus is notably larger: 830 million tokens versus 250 million tokens in the Ukrainian one. Generally, the texts belong to one genre that can be defined as post-Soviet expository academic prose, submitted for academic degree award process.",
"The documents were converted to plain text files from MS Word format in the case of the Ukrainian subcorpus and mainly from OCRed PDF files in the case of the Russian subcorpus. Because of this, the Russian documents often suffer from OCR artifacts, such as words split with line breaks, incorrectly recognized characters and so on. However, it does not influence the resulting model much, as we show below.",
"Both Ukrainian and Russian documents come with meta data allowing to separate them into academic fields, with economics, medicine and law being most frequent topics for the Ukrainian data and economics, history and pedagogy dominating the Russian data.",
"For evaluation, 3 topics were used, distant enough from each other and abundantly presented in both subcorpora: economics, law and history. We randomly selected 100 texts in each language for each topic. As an average length of Russian texts is significantly higher (them being full theses), we cropped them, leaving only the first 5 thousand words, to mimic the size of the Ukrainian summaries. These 600 documents in 3 classes are used as a test set (see Section \"Experiment Design and Evaluation\" for the description of the conducted experiments).",
"The corpora (including test set) were PoS-tagged. Each word was replaced with its lemma followed by a PoS-tag (`russianдиссертация_S', `russianдиссертацiя_N'). Functional parts of speech (conjunctions, pronouns, prepositions, etc.) and numerals were removed from the texts."
],
[
"As already stated, our main proposal is using neural embedding models to `project' documents in one language into the semantic space of another language. For this, we first trained a Continuous Bag-of-Words (CBOW) and a Continuous SkipGram model BIBREF0 for each of our monolingual subcorpora. The models were trained with identical hyperparameters: vector size of 300 components, symmetric window of 2 words, negative sampling with 10 samples, 5 iterations over the corpus, no down-sampling. The only language-dependent difference was that for the Ukrainian model we ignored words with the corpus frequency less than 10 and for the Russian model this threshold was set to 15 (as the Russian corpus is 3 times larger). All in all, the final Ukrainian model recognizes 429 215 words and the Russian one 271 720 words. Training was performed using CBOW and SkipGram implementation in Gensim library BIBREF7 .",
"After the models were trained, we followed the path outlined in BIBREF4 to learn a linear transformation matrix from Ukrainian to Russian. First, we extracted all noun pairs from Russian-Ukrainian bilingual dictionary BIBREF8 , with the constraint that their frequency in our corpora was above the already mentioned thresholds 15 and 10 for Russian and Ukrainian words correspondingly. That made it a list of about 5 thousand pairs of nouns being translations of each other.",
"For all these words, their vectors were found in the models corresponding to the words' languages. It provided us with a matrix of 5 thousand of 300-dimensional Ukrainian vectors and the matrix of corresponding 5 thousand of 300-dimensional Russian vectors. This data served as a training set to learn an optimal transformation matrix. The latter is actually a 300x301 matrix of coefficients, such that when the initial Ukrainian matrix is multiplied by this transformation matrix, the result is maximally close to the corresponding Russian matrix. This transformation matrix has 301 (not 300) columns, because we add one component equal to 1 to each vector, as a bias term.",
"Producing the transformation matrix is a linear regression problem: the input is 301 components of Ukrainian vectors (including the bias term) and the output is 300 components of Russian vectors. As we need 300 values as an output, there are actually 300 linear regression problems and that's why the resulting matrix size is 300x301 (301 weights for each of 300 components).",
"There are two main ways to solve a linear regression problem: one can either learn the optimal weights in an iterative way using some variant of gradient descent, or one can solve it numerically without iteration, using normal equation. For English and Spanish, BIBREF4 used stochastic gradient descent. However, normal equation is actually less error-prone and is guaranteed to find the global optimum. Its only disadvantage is that it becomes very computationally expensive when the number of features is large (thousands and more). However, in our case the number of features is only 301, so computational complexity is not an issue.",
"Thus, we use normal equation to find the optimal transformation matrix. The algebraic solution to each of 300 normal equations (one for each vector component $i$ ) is shown in the Equation 3 : ",
"$$\\beta _i = (\\textbf {X}^\\intercal * \\textbf {X})^{-1} * \\textbf {X}^\\intercal * y_i$$ (Eq. 3) ",
"where $\\textbf {X}$ is the matrix of 5 thousand Ukrainian word vectors (input), $y_i$ is the vector of the $i$ th components of 5 thousand corresponding Russian words (correct predictions), and $\\beta _i$ is our aim: the vector of 301 optimal coefficients which transform the Ukrainian vectors into the $i$ th component of the Russian vectors.",
"After solving such normal equations for all the 300 components $i$ , we have the 300x301 linear transformation matrix which fits the data best.",
"This matrix basically maps the Ukrainian vectors into the Russian ones. It is based on the assumption that the relations between semantic concepts in different languages are in fact very similar (students are close to teachers, while pirates are close to corsairs, and so on). In continuous distributional models which strive to represent these semantic spaces, mutual `geometrical' relations between vectors representing particular words are also similar across models (if they are trained on comparable corpora), but the exact vectors for words denoting one and the same notion are different. This is because the models themselves are stochastic and the particular values of vectors (unlike their positions in relation to each other) depend a lot on technical factors, including the random seed used to initialize vectors prior to training. In order to migrate from a model A to another model B, one has to `rotate and scale' A vectors in a uniform linear way. To learn the optimal transformation matrix means to find out the exact directions of rotating and scaling, which minimize prediction errors.",
"Linguistically speaking, once we learned the transformation matrix, we can predict what a Russian vector would most probably be, given a Ukrainian one. This essentially means we are able to `translate' Ukrainian words into Russian, by calculating the word in the Russian model with the vector closest to the predicted one.",
"We had to choose between CBOW or Continuous SkipGram models to use when learning the transformation matrix. Also, there was a question of whether to employ regularized or standard normal equations. Regularization is an attempt to avoid over-fitting by trying to somehow decrease the values of learned weights. The regularized normal equation is shown in 4 : ",
"$$\\beta _i = (\\textbf {X}^\\intercal * \\textbf {X} + \\lambda * L)^{-1} * \\textbf {X}^\\intercal * y_i$$ (Eq. 4) ",
"Comparing to 3 , it adds the term $\\lambda * L$ , where $L$ is the identity matrix of the size equal to the number of features, with 0 at the top left cell, and $\\lambda $ is a real number used to tune the influence of regularization term (if $\\lambda = 0$ , there is no regularization).",
"To test all the possible combinations of parameters, we divided the bilingual dictionary into 4500 noun pairs used as a training set and 500 noun pairs used as a test set. We then learned transformation matrices on the training set using both training algorithms (CBOW and SkipGram) and several values of regularization $\\lambda $ from 0 to 5, with a step of 0.5. The resulting matrices were applied to the Ukrainian vectors from the test set and the corresponding Russian `translations' were calculated. The ratio of correct `translations' (matches) was used as an evaluation measure. It came out that regularization only worsened the results for both algorithms, so in the Table 1 we report the results without regularization.",
"For reference, we also report the accuracy of `quazi-translation' via Damerau-Levenshtein edit distance BIBREF9 , as a sort of a baseline. As already stated, the two languages share many cognates, and a lot of Ukrainian words can be orthographically transformed into their Russian translations (and vice versa) by one or two character replacements. Thus, we extracted 50,000 most frequent nouns from our Russian corpora; then for each Ukrainian noun in the bilingual dictionary we found the closest Russian noun (or 5 closest nouns for @5 metric) by edit distance and calculated how often it turned out to be the correct translation. As the Table 1 shows, notwithstanding the orthographic similarity of the two languages, CBOW consistently outperforms this approach even on the test set. On the training set, its superiority is even more obvious."
]
],
"section_name": [
"Introduction",
"Related Work",
"Academic texts as Comparable Corpora",
"Learning to Translate: Ukrainian-to-Russian transformations"
]
} | {
"answers": [
{
"annotation_id": [
"37a09e422d01622bb6f2e7b5596f88d6efb9ef06",
"8a940d24bfe575b8657c6919bfa0762351e0ace3"
],
"answer": [
{
"evidence": [
"To test all the possible combinations of parameters, we divided the bilingual dictionary into 4500 noun pairs used as a training set and 500 noun pairs used as a test set. We then learned transformation matrices on the training set using both training algorithms (CBOW and SkipGram) and several values of regularization $\\lambda $ from 0 to 5, with a step of 0.5. The resulting matrices were applied to the Ukrainian vectors from the test set and the corresponding Russian `translations' were calculated. The ratio of correct `translations' (matches) was used as an evaluation measure. It came out that regularization only worsened the results for both algorithms, so in the Table 1 we report the results without regularization."
],
"extractive_spans": [],
"free_form_answer": "Accuracy",
"highlighted_evidence": [
"The ratio of correct `translations' (matches) was used as an evaluation measure."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To test all the possible combinations of parameters, we divided the bilingual dictionary into 4500 noun pairs used as a training set and 500 noun pairs used as a test set. We then learned transformation matrices on the training set using both training algorithms (CBOW and SkipGram) and several values of regularization $\\lambda $ from 0 to 5, with a step of 0.5. The resulting matrices were applied to the Ukrainian vectors from the test set and the corresponding Russian `translations' were calculated. The ratio of correct `translations' (matches) was used as an evaluation measure. It came out that regularization only worsened the results for both algorithms, so in the Table 1 we report the results without regularization."
],
"extractive_spans": [
"ratio of correct `translations'"
],
"free_form_answer": "",
"highlighted_evidence": [
"The ratio of correct `translations' (matches) was used as an evaluation measure. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"f320efb1fbb744616e420aaf8da0f9622b75b2ed"
]
}
],
"nlp_background": [
"five"
],
"paper_read": [
"no"
],
"question": [
"What evaluation metric do they use?"
],
"question_id": [
"aefa333b2cf0a4000cd40566149816f5b36135e7"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
""
],
"topic_background": [
"unfamiliar"
]
} | {
"caption": [
"Table 1: Translation accuracy",
"Table 2: Clustering correspondence to document topics",
"Figure 1: Naive baseline clustering",
"Figure 2: Matrix translation clustering",
"Figure 3: Semantic fingerprints clustering"
],
"file": [
"4-Table1-1.png",
"5-Table2-1.png",
"6-Figure1-1.png",
"6-Figure2-1.png",
"7-Figure3-1.png"
]
} | [
"What evaluation metric do they use?"
] | [
[
"1604.05372-Learning to Translate: Ukrainian-to-Russian transformations-14"
]
] | [
"Accuracy"
] | 31 |
2002.08795 | How To Avoid Being Eaten By a Grue: Exploration Strategies for Text-Adventure Agents | Text-based games -- in which an agent interacts with the world through textual natural language -- present us with the problem of combinatorially-sized action-spaces. Most current reinforcement learning algorithms are not capable of effectively handling such a large number of possible actions per turn. Poor sample efficiency, consequently, results in agents that are unable to pass bottleneck states, where they are unable to proceed because they do not see the right action sequence to pass the bottleneck enough times to be sufficiently reinforced. Building on prior work using knowledge graphs in reinforcement learning, we introduce two new game state exploration strategies. We compare our exploration strategies against strong baselines on the classic text-adventure game, Zork1, where prior agent have been unable to get past a bottleneck where the agent is eaten by a Grue. | {
"paragraphs": [
[
"Many reinforcement learning algorithms are designed for relatively small discrete or continuous action spaces and so have trouble scaling. Text-adventure games—or interaction fictions—are simulations in which both an agents' state and action spaces are in textual natural language. An example of a one turn agent interaction in the popular text-game Zork1 can be seen in Fig. FIGREF1. Text-adventure games provide us with multiple challenges in the form of partial observability, commonsense reasoning, and a combinatorially-sized state-action space. Text-adventure games are structured as long puzzles or quests, interspersed with bottlenecks. The quests can usually be completed through multiple branching paths. However, games can also feature one or more bottlenecks. Bottlenecks are areas that an agent must pass through in order to progress to the next section of the game regardless of what path the agent has taken to complete that section of the quest BIBREF0. In this work, we focus on more effectively exploring this space and surpassing these bottlenecks—building on prior work that focuses on tackling the other problems.",
"Formally, we use the definition of text-adventure games as seen in BIBREF1 and BIBREF2. These games are partially observable Markov decision processes (POMDPs), represented as a 7-tuple of $\\langle S,T,A,\\Omega , O,R, \\gamma \\rangle $ representing the set of environment states, mostly deterministic conditional transition probabilities between states, the vocabulary or words used to compose text commands, observations returned by the game, observation conditional probabilities, reward function, and the discount factor respectively. For our purposes, understanding the exact state and action spaces we use in this work is critical and so we define each of these in relative depth.",
"Action-Space. To solve Zork1, the cannonical text-adventure games, requires the generation of actions consisting of up to five-words from a relatively modest vocabulary of 697 words recognized by the game’s parser. This results in $\\mathcal {O}(697^5)={1.64e14}$ possible actions at every step. To facilitate text-adventure game playing, BIBREF2 introduce Jericho, a framework for interacting with text-games. They propose a template-based action space in which the agent first selects a template, consisting of an action verb and preposition, and then filling that in with relevant entities $($e.g. $[get]$ $ [from] $ $)$. Zork1 has 237 templates, each with up to two blanks, yielding a template-action space of size $\\mathcal {O}(237 \\times 697^2)={1.15e8}$. This space is still far larger than most used by previous approaches applying reinforcement learning to text-based games.",
"State-Representation. Prior work has shown that knowledge graphs are effective in terms of dealing with the challenges of partial observability $($BIBREF3 BIBREF3; BIBREF4$)$. A knowledge graph is a set of 3-tuples of the form $\\langle subject, relation, object \\rangle $. These triples are extracted from the observations using Stanford's Open Information Extraction (OpenIE) BIBREF5. Human-made text-adventure games often contain relatively complex semi-structured information that OpenIE is not designed to parse and so they add additional rules to ensure that the correct information is parsed. The graph itself is more or less a map of the world, with information about objects' affordances and attributes linked to the rooms that they are place in a map. The graph also makes a distinction with respect to items that are in the agent's possession or in their immediate surrounding environment. An example of what the knowledge graph looks like and specific implementation details can be found in Appendix SECREF14.",
"BIBREF6 introduce the KG-A2C, which uses a knowledge graph based state-representation to aid in the section of actions in a combinatorially-sized action-space—specifically they use the knowledge graph to constrain the kinds of entities that can be filled in the blanks in the template action-space. They test their approach on Zork1, showing the combination of the knowledge graph and template action selection resulted in improvements over existing methods. They note that their approach reaches a score of 40 which corresponds to a bottleneck in Zork1 where the player is eaten by a “grue” (resulting in negative reward) if the player has not first lit a lamp. The lamp must be lit many steps after first being encountered, in a different section of the game; this action is necessary to continue exploring but doesn’t immediately produce any positive reward. That is, there is a long term dependency between actions that is not immediately rewarded, as seen in Figure FIGREF1. Others using artificially constrained action spaces also report an inability to pass through this bottleneck BIBREF7, BIBREF8. They pose a significant challenge for these methods because the agent does not see the correct action sequence to pass the bottleneck enough times. This is in part due to the fact that for that sequence to be reinforced, the agent needs to reach the next possible reward beyond the bottleneck.",
"More efficient exploration strategies are required to pass bottlenecks. Our contributions are two-fold. We first introduce a method that detects bottlenecks in text-games using the overall reward gained and the knowledge graph state. This method freezes the policy used to reach the bottleneck and restarts the training from there on out, additionally conducting a backtracking search to ensure that a sub-optimal policy has not been frozen. The second contribution explore how to leverage knowledge graphs to improve existing exploration algorithms for dealing with combinatorial action-spaces such as Go-Explore BIBREF9. We additionally present a comparative ablation study analyzing the performance of these methods on the popular text-game Zork1."
],
[
"In this section, we describe methods to explore combinatorially sized action spaces such as text-games—focusing especially on methods that can deal with their inherent bottleneck structure. We first describe our method that explicitly attempts to detect bottlenecks and then describe how an exploration algorithm such as Go Explore BIBREF9 can leverage knowledge graphs.",
"KG-A2C-chained An example of a bottleneck can be seen in Figure FIGREF1. We extend the KG-A2C algorithm as follows. First, we detect bottlenecks as states where the agent is unable to progress any further. We set a patience parameter and if the agent has not seen a higher score in patience steps, the agent assumes it has been limited by a bottleneck. Second, when a bottleneck is found, we freeze the policy that gets the agent to the state with the highest score. The agent then begins training a new policy from that particular state.",
"Simply freezing the policy that led to the bottleneck, however, can potentially result in a policy one that is globally sub-optimal. We therefore employ a backtracking strategy that restarts exploration from each of the $n$ previous steps—searching for a more optimal policy that reaches that bottleneck. At each step, we keep track of a buffer of $n$ states and admissible actions that led up to that locally optimal state. We force the agent to explore from this state to attempt to drive it out of the local optima. If it is further unable to find itself out of this local optima, we refresh the training process again, but starting at the state immediately before the agent reaches the local optima. If this continues to fail, we continue to iterate through this buffer of seen states states up to that local optima until we either find a more optimal state or we run out of states to refresh from, in which we terminate the training algorithm.",
"KG-A2C-Explore Go-Explore BIBREF9 is an algorithm that is designed to keep track of sub-optimal and under-explored states in order to allow the agent to explore upon more optimal states that may be a result of sparse rewards. The Go-Explore algorithm consists of two phases, the first to continuously explore until a set of promising states and corresponding trajectories are found on the basis of total score, and the second to robustify this found policy against potential stochasticity in the game. Promising states are defined as those states when explored from will likely result in higher reward trajectories. Since the text games we are dealing with are mostly deterministic, with the exception of Zork in later stages, we only focus on using Phase 1 of the Go-Explore algorithm to find an optimal policy. BIBREF10 look at applying Go-Explore to text-games on a set of simpler games generated using the game generation framework TextWorld BIBREF1. Instead of training a policy network in parallel to generate actions used for exploration, they use a small set of “admissible actions”—actions guaranteed to change the world state at any given step during Phase 1—to explore and find high reward trajectories. This space of actions is relatively small (of the order of $10^2$ per step) and so finding high reward trajectories in larger action-spaces such as in Zork would be infeasible",
"Go-Explore maintains an archive of cells—defined as a set of states that map to a single representation—to keep track of promising states. BIBREF9 simply encodes each cell by keeping track of the agent's position and BIBREF10 use the textual observations encoded by recurrent neural network as a cell representation. We improve on this implementation by training the KG-A2C network in parallel, using the snapshot of the knowledge graph in conjunction with the game state to further encode the current state and use this as a cell representation. At each step, Go-Explore chooses a cell to explore at random (weighted by score to prefer more advanced cells). The KG-A2C will run for a number of steps, starting with the knowledge graph state and the last seen state of the game from the cell. This will generate a trajectory for the agent while further training the KG-A2C at each iteration, creating a new representation for the knowledge graph as well as a new game state for the cell. After expanding a cell, Go-Explore will continue to sample cells by weight to continue expanding its known states. At the same time, KG-A2C will benefit from the heuristics of selecting preferred cells and be trained on promising states more often."
],
[
"We compare our two exploration strategies to the following baselines and ablations:",
"KG-A2C This is the exact same method presented in BIBREF6 with no modifications.",
"A2C Represents the same approach as KG-A2C but with all the knowledge graph components removed. The state representation is text only encoded using recurrent networks.",
"A2C-chained Is a variation on KG-A2C-chained where we use our policy chaining approach with the A2C method to train the agent instead of KG-A2C.",
"A2C-Explore Uses A2C in addition to the exploration strategy seen in KG-A2C-Explore. The cell representations here are defined in terms of the recurrent network based encoding of the textual observation.",
"Figure FIGREF10 shows that agents utilizing knowledge-graphs in addition to either enhanced exploration method far outperform the baseline A2C and KG-A2C. KG-A2C-chained and KG-A2C-Explore both pass the bottleneck of a score of 40, whereas A2C-Explore gets to the bottleneck but cannot surpass it.",
"There are a couple of key insights that can be drawn from these results The first is that the knowledge graph appears to be critical; it is theorized to help with partial observability. However the knowledge graph representation isn't sufficient in that the knowledge graph representation without enhanced exploration methods cannot surpass the bottleneck. A2C-chained—which explores without a knowledge graph—fails to even outperform the baseline A2C. We hypothesize that this is due to the knowledge graph aiding implicitly in the sample efficiency of bottleneck detection and subsequent exploration. That is, exploring after backtracking from a potentially detected bottleneck is much more efficient in the knowledge graph based agent.",
"The Go-Explore based exploration algorithm sees less of a difference between agents. A2C-Explore converges more quickly, but to a lower reward trajectory that fails to pass the bottleneck, whereas KG-A2C-Explore takes longer to reach a similar reward but consistently makes it through the bottleneck. The knowledge graph cell representation appears to thus be a better indication of what a promising state is as opposed to just the textual observation.",
"Comparing the advanced exploration methods when using the knowledge graph, we see that both agents successfully pass the bottleneck corresponding to entering the cellar and lighting the lamp and reach comparable scores within a margin of error. KG-A2C-chained is significantly more sample efficient and converges faster. We can infer that chaining policies by explicitly detecting bottlenecks lets us pass it more quickly than attempting to find promising cell representations with Go-Explore. This form of chained exploration with backtracking is particularly suited to sequential decision making problems that can be represented as acyclic directed graphs as in Figure FIGREF1."
],
[
"Zork1 is one of the first text-adventure games and heavily influences games released later in terms of narrative style and game structure. It is a dungeon crawler where the player must explore a vast world and collect a series of treasures. It was identified by BIBREF2 as a moonshot game and has been the subject of much work in leaning agents BIBREF12, BIBREF7, BIBREF11, BIBREF8. Rewards are given to the player when they collect treasures as well as when important intermediate milestones needed to further explore the world are passed. Figure FIGREF15 and Figure FIGREF1 show us a map of the world of Zork1 and the corresponding quest structure.",
"The bottleneck seen at a score of around 40 is when the player first enters the cellar on the right side of the map. The cellar is dark and you need to immediately light the lamp to see anything. Attempting to explore the cellar in the dark results in you being instantly killed by a monster known as a “grue”."
],
[
"We make no changes from the graph update rules used by BIBREF6. Candidate interactive objects are identified by performing part-of-speech tagging on the current observation, identifying singular and proper nouns as well as adjectives, and are then filtered by checking if they can be examined using the command $examine$ $OBJ$. Only the interactive objects not found in the inventory are linked to the node corresponding to the current room and the inventory items are linked to the “you” node. The only other rule applied uses the navigational actions performed by the agent to infer the relative positions of rooms, e.g. $\\langle kitchen,down,cellar \\rangle $ when the agent performs $go$ $down$ when in the kitchen to move to the cellar."
],
[
"Hyperparameters used for our agents are given below. Patience and buffer size are used for the policy chaining method as described in Section SECREF2. Cell step size is a parameter used for Go-Explore and describes how many steps are taken when exploring in a given cell state. Base hyperparameters for KG-A2C are taken from BIBREF6 and the same parameters are used for A2C.",
""
]
],
"section_name": [
"Introduction and Background",
"Exploration Methods",
"Evaluation",
"Appendix ::: Zork1",
"Appendix ::: Knowledge Graph Rules",
"Appendix ::: Hyperparameters"
]
} | {
"answers": [
{
"annotation_id": [
"3c5b848e4c3adce404df5db8ad33e8c9d7beaa55",
"955ce5af24c0a73ea5493aa343c318e8d91c2762"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Figure 2: Ablation results on Zork1, averaged across 5 independent runs."
],
"extractive_spans": [],
"free_form_answer": "Reward of 11.8 for the A2C-chained model, 41.8 for the KG-A2C-chained model, 40 for A2C-Explore and 44 for KG-A2C-Explore.",
"highlighted_evidence": [
"FLOAT SELECTED: Figure 2: Ablation results on Zork1, averaged across 5 independent runs."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Figure FIGREF10 shows that agents utilizing knowledge-graphs in addition to either enhanced exploration method far outperform the baseline A2C and KG-A2C. KG-A2C-chained and KG-A2C-Explore both pass the bottleneck of a score of 40, whereas A2C-Explore gets to the bottleneck but cannot surpass it."
],
"extractive_spans": [
"KG-A2C-chained and KG-A2C-Explore both pass the bottleneck of a score of 40"
],
"free_form_answer": "",
"highlighted_evidence": [
"KG-A2C-chained and KG-A2C-Explore both pass the bottleneck of a score of 40, whereas A2C-Explore gets to the bottleneck but cannot surpass it."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"37beb3774c7b17f9ec0375e39f180769f097002c",
"5a8e8e096a72b4f1b48782c5f7df9015fb1a39d2"
],
"answer": [
{
"evidence": [
"BIBREF6 introduce the KG-A2C, which uses a knowledge graph based state-representation to aid in the section of actions in a combinatorially-sized action-space—specifically they use the knowledge graph to constrain the kinds of entities that can be filled in the blanks in the template action-space. They test their approach on Zork1, showing the combination of the knowledge graph and template action selection resulted in improvements over existing methods. They note that their approach reaches a score of 40 which corresponds to a bottleneck in Zork1 where the player is eaten by a “grue” (resulting in negative reward) if the player has not first lit a lamp. The lamp must be lit many steps after first being encountered, in a different section of the game; this action is necessary to continue exploring but doesn’t immediately produce any positive reward. That is, there is a long term dependency between actions that is not immediately rewarded, as seen in Figure FIGREF1. Others using artificially constrained action spaces also report an inability to pass through this bottleneck BIBREF7, BIBREF8. They pose a significant challenge for these methods because the agent does not see the correct action sequence to pass the bottleneck enough times. This is in part due to the fact that for that sequence to be reinforced, the agent needs to reach the next possible reward beyond the bottleneck."
],
"extractive_spans": [
"a score of 40"
],
"free_form_answer": "",
"highlighted_evidence": [
"BIBREF6 introduce the KG-A2C, which uses a knowledge graph based state-representation to aid in the section of actions in a combinatorially-sized action-space—specifically they use the knowledge graph to constrain the kinds of entities that can be filled in the blanks in the template action-space. They test their approach on Zork1, showing the combination of the knowledge graph and template action selection resulted in improvements over existing methods. They note that their approach reaches a score of 40 which corresponds to a bottleneck in Zork1 where the player is eaten by a “grue” (resulting in negative reward) if the player has not first lit a lamp."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We compare our two exploration strategies to the following baselines and ablations:",
"KG-A2C This is the exact same method presented in BIBREF6 with no modifications.",
"A2C Represents the same approach as KG-A2C but with all the knowledge graph components removed. The state representation is text only encoded using recurrent networks.",
"A2C-chained Is a variation on KG-A2C-chained where we use our policy chaining approach with the A2C method to train the agent instead of KG-A2C.",
"A2C-Explore Uses A2C in addition to the exploration strategy seen in KG-A2C-Explore. The cell representations here are defined in terms of the recurrent network based encoding of the textual observation."
],
"extractive_spans": [
"KG-A2C",
"A2C",
"A2C-chained",
"A2C-Explore"
],
"free_form_answer": "",
"highlighted_evidence": [
"We compare our two exploration strategies to the following baselines and ablations:\n\nKG-A2C This is the exact same method presented in BIBREF6 with no modifications.\n\nA2C Represents the same approach as KG-A2C but with all the knowledge graph components removed. The state representation is text only encoded using recurrent networks.\n\nA2C-chained Is a variation on KG-A2C-chained where we use our policy chaining approach with the A2C method to train the agent instead of KG-A2C.\n\nA2C-Explore Uses A2C in addition to the exploration strategy seen in KG-A2C-Explore. The cell representations here are defined in terms of the recurrent network based encoding of the textual observation."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"6c0e94de8a0b7b1f1571a6c33387b933790c81c4",
"cdb638a1e0595ba77448002cc97ac839bd21ed9c"
],
"answer": [
{
"evidence": [
"More efficient exploration strategies are required to pass bottlenecks. Our contributions are two-fold. We first introduce a method that detects bottlenecks in text-games using the overall reward gained and the knowledge graph state. This method freezes the policy used to reach the bottleneck and restarts the training from there on out, additionally conducting a backtracking search to ensure that a sub-optimal policy has not been frozen. The second contribution explore how to leverage knowledge graphs to improve existing exploration algorithms for dealing with combinatorial action-spaces such as Go-Explore BIBREF9. We additionally present a comparative ablation study analyzing the performance of these methods on the popular text-game Zork1."
],
"extractive_spans": [
"a method that detects bottlenecks in text-games using the overall reward gained and the knowledge graph state",
"to leverage knowledge graphs to improve existing exploration algorithms for dealing with combinatorial action-space"
],
"free_form_answer": "",
"highlighted_evidence": [
"We first introduce a method that detects bottlenecks in text-games using the overall reward gained and the knowledge graph state. This method freezes the policy used to reach the bottleneck and restarts the training from there on out, additionally conducting a backtracking search to ensure that a sub-optimal policy has not been frozen. The second contribution explore how to leverage knowledge graphs to improve existing exploration algorithms for dealing with combinatorial action-spaces such as Go-Explore BIBREF9. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"KG-A2C-Explore Go-Explore BIBREF9 is an algorithm that is designed to keep track of sub-optimal and under-explored states in order to allow the agent to explore upon more optimal states that may be a result of sparse rewards. The Go-Explore algorithm consists of two phases, the first to continuously explore until a set of promising states and corresponding trajectories are found on the basis of total score, and the second to robustify this found policy against potential stochasticity in the game. Promising states are defined as those states when explored from will likely result in higher reward trajectories. Since the text games we are dealing with are mostly deterministic, with the exception of Zork in later stages, we only focus on using Phase 1 of the Go-Explore algorithm to find an optimal policy. BIBREF10 look at applying Go-Explore to text-games on a set of simpler games generated using the game generation framework TextWorld BIBREF1. Instead of training a policy network in parallel to generate actions used for exploration, they use a small set of “admissible actions”—actions guaranteed to change the world state at any given step during Phase 1—to explore and find high reward trajectories. This space of actions is relatively small (of the order of $10^2$ per step) and so finding high reward trajectories in larger action-spaces such as in Zork would be infeasible",
"Go-Explore maintains an archive of cells—defined as a set of states that map to a single representation—to keep track of promising states. BIBREF9 simply encodes each cell by keeping track of the agent's position and BIBREF10 use the textual observations encoded by recurrent neural network as a cell representation. We improve on this implementation by training the KG-A2C network in parallel, using the snapshot of the knowledge graph in conjunction with the game state to further encode the current state and use this as a cell representation. At each step, Go-Explore chooses a cell to explore at random (weighted by score to prefer more advanced cells). The KG-A2C will run for a number of steps, starting with the knowledge graph state and the last seen state of the game from the cell. This will generate a trajectory for the agent while further training the KG-A2C at each iteration, creating a new representation for the knowledge graph as well as a new game state for the cell. After expanding a cell, Go-Explore will continue to sample cells by weight to continue expanding its known states. At the same time, KG-A2C will benefit from the heuristics of selecting preferred cells and be trained on promising states more often."
],
"extractive_spans": [
"KG-A2C-chained",
"KG-A2C-Explore"
],
"free_form_answer": "",
"highlighted_evidence": [
"KG-A2C-Explore Go-Explore BIBREF9 is an algorithm that is designed to keep track of sub-optimal and under-explored states in order to allow the agent to explore upon more optimal states that may be a result of sparse rewards.",
"We improve on this implementation by training the KG-A2C network in parallel, using the snapshot of the knowledge graph in conjunction with the game state to further encode the current state and use this as a cell representation. At each step, Go-Explore chooses a cell to explore at random (weighted by score to prefer more advanced cells)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"What are the results from these proposed strategies?",
"What are the baselines?",
"What are the two new strategies?"
],
"question_id": [
"c5abe97625b9e1c8de8208e15d59c704a597b88c",
"eb2d5edcdfe18bd708348283f92a32294bb193a5",
"88ab7811662157680144ed3fdd00939e36552672"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: An overall example of an excerpt and quest structure of Zork1.",
"Figure 2: Ablation results on Zork1, averaged across 5 independent runs.",
"Figure 3: Map of Zork1 annotated with rewards. These rewards correspond to the quest structure seen in Figure 1b. Taken from Ammanabrolu & Hausknecht (2020)."
],
"file": [
"2-Figure1-1.png",
"4-Figure2-1.png",
"6-Figure3-1.png"
]
} | [
"What are the results from these proposed strategies?"
] | [
[
"2002.08795-Evaluation-5",
"2002.08795-4-Figure2-1.png"
]
] | [
"Reward of 11.8 for the A2C-chained model, 41.8 for the KG-A2C-chained model, 40 for A2C-Explore and 44 for KG-A2C-Explore."
] | 32 |
1802.06024 | Towards a Continuous Knowledge Learning Engine for Chatbots | Although chatbots have been very popular in recent years, they still have some serious weaknesses which limit the scope of their applications. One major weakness is that they cannot learn new knowledge during the conversation process, i.e., their knowledge is fixed beforehand and cannot be expanded or updated during conversation. In this paper, we propose to build a general knowledge learning engine for chatbots to enable them to continuously and interactively learn new knowledge during conversations. As time goes by, they become more and more knowledgeable and better and better at learning and conversation. We model the task as an open-world knowledge base completion problem and propose a novel technique called lifelong interactive learning and inference (LiLi) to solve it. LiLi works by imitating how humans acquire knowledge and perform inference during an interactive conversation. Our experimental results show LiLi is highly promising. | {
"paragraphs": [
[
"Chatbots such as dialog and question-answering systems have a long history in AI and natural language processing. Early such systems were mostly built using markup languages such as AIML, handcrafted conversation generation rules, and/or information retrieval techniques BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . Recent neural conversation models BIBREF4 , BIBREF5 , BIBREF6 are even able to perform open-ended conversations. However, since they do not use explicit knowledge bases and do not perform inference, they often suffer from generic and dull responses BIBREF5 , BIBREF7 . More recently, BIBREF8 and BIBREF9 proposed to use knowledge bases (KBs) to help generate responses for knowledge-grounded conversation. However, one major weakness of all existing chat systems is that they do not explicitly or implicitly learn new knowledge in the conversation process. This seriously limits the scope of their applications. In contrast, we humans constantly learn new knowledge in our conversations. Even if some existing systems can use very large knowledge bases either harvested from a large data source such as the Web or built manually, these KBs still miss a large number of facts (knowledge) BIBREF10 . It is thus important for a chatbot to continuously learn new knowledge in the conversation process to expand its KB and to improve its conversation ability.",
"In recent years, researchers have studied the problem of KB completion, i.e., inferring new facts (knowledge) automatically from existing facts in a KB. KB completion (KBC) is defined as a binary classification problem: Given a query triple, ( INLINEFORM0 , INLINEFORM1 , INLINEFORM2 ), we want to predict whether the source entity INLINEFORM3 and target entity INLINEFORM4 can be linked by the relation INLINEFORM5 . However, existing approaches BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 solve this problem under the closed-world assumption, i.e., INLINEFORM6 , INLINEFORM7 and INLINEFORM8 are all known to exist in the KB. This is a major weakness because it means that no new knowledge or facts may contain unknown entities or relations. Due to this limitation, KBC is clearly not sufficient for knowledge learning in conversations because in a conversation, the user can say anything, which may contain entities and relations that are not already in the KB.",
"In this paper, we remove this assumption of KBC, and allow all INLINEFORM0 , INLINEFORM1 and INLINEFORM2 to be unknown. We call the new problem open-world knowledge base completion (OKBC). OKBC generalizes KBC. Below, we show that solving OKBC naturally provides the ground for knowledge learning and inference in conversations. In essence, we formulate an abstract problem of knowledge learning and inference in conversations as a well-defined OKBC problem in the interactive setting.",
"From the perspective of knowledge learning in conversations, essentially we can extract two key types of information, true facts and queries, from the user utterances. Queries are facts whose truth values need to be determined. Note that we do not study fact or relation extraction in this paper as there is an extensive work on the topic. (1) For a true fact, we will incorporate it into the KB. Here we need to make sure that it is not already in the KB, which involves relation resolution and entity linking. After a fact is added to the KB, we may predict that some related facts involving some existing relations in the KB may also be true (not logical implications as they can be automatically inferred). For example, if the user says “Obama was born in USA,” the system may guess that (Obama, CitizenOf, USA) (meaning that Obama is a citizen of USA) could also be true based on the current KB. To verify this fact, it needs to solve a KBC problem by treating (Obama, CitizenOf, USA) as a query. This is a KBC problem because the fact (Obama, BornIn, USA) extracted from the original sentence has been added to the KB. Then Obama and USA are in the KB. If the KBC problem is solved, it learns a new fact (Obama, CitizenOf, USA) in addition to the extracted fact (Obama, BornIn, USA). (2) For a query fact, e.g., (Obama, BornIn, USA) extracted from the user question “Was Obama born in USA?” we need to solve the OKBC problem if any of “Obama, “BornIn”, or “USA\" is not already in the KB.",
"We can see that OKBC is the core of a knowledge learning engine for conversation. Thus, in this paper, we focus on solving it. We assume that other tasks such as fact/relation extraction and resolution and guessing of related facts of an extracted fact are solved by other sub-systems.",
"We solve the OKBC problem by mimicking how humans acquire knowledge and perform reasoning in an interactive conversation. Whenever we encounter an unknown concept or relation while answering a query, we perform inference using our existing knowledge. If our knowledge does not allow us to draw a conclusion, we typically ask questions to others to acquire related knowledge and use it in inference. The process typically involves an inference strategy (a sequence of actions), which interleaves a sequence of processing and interactive actions. A processing action can be the selection of related facts, deriving inference chain, etc., that advances the inference process. An interactive action can be deciding what to ask, formulating a suitable question, etc., that enable us to interact. The process helps grow the knowledge over time and the gained knowledge enables us to communicate better in the future. We call this lifelong interactive learning and inference (LiLi). Lifelong learning is reflected by the facts that the newly acquired facts are retained in the KB and used in inference for future queries, and that the accumulated knowledge in addition to the updated KB including past inference performances are leveraged to guide future interaction and learning. LiLi should have the following capabilities:",
"This setting is ideal for many NLP applications like dialog and question-answering systems that naturally provide the scope for human interaction and demand real-time inference.",
"LiLi starts with the closed-world KBC approach path-ranking (PR) BIBREF11 , BIBREF17 and extends KBC in a major way to open-world knowledge base completion (OKBC). For a relation INLINEFORM0 , PR works by enumerating paths (except single-link path INLINEFORM1 ) between entity-pairs linked by INLINEFORM2 in the KB and use them as features to train a binary classifier to predict whether a query INLINEFORM3 should be in the KB. Here, a path between two entities is a sequence of relations linking them. In our work, we adopt the latest PR method, C-PR BIBREF16 and extend it to make it work in the open-world setting. C-PR enumerates paths by performing bidirectional random walks over the KB graph while leveraging the context of the source-target entity-pair. We also adopt and extend the compositional vector space model BIBREF20 , BIBREF21 with continual learning capability for prediction.",
"Given an OKBC query ( INLINEFORM0 , INLINEFORM1 , INLINEFORM2 ) (e.g., “(Obama, CitizenOf, USA), which means whether Obama a citizen of USA), LiLi interacts with the user (if needed) by dynamically formulating questions (see the interaction example in Figure 1, which will be further explained in §3) and leverages the interactively acquired knowledge (supporting facts (SFs) in the figure) for continued inference. To do so, LiLi formulates a query-specific inference strategy and executes it. We design LiLi in a Reinforcement Learning (RL) setting that performs sub-tasks like formulating and executing strategy, training a prediction model for inference, and knowledge retention for future use. To the best of our knowledge, our work is the first to address the OKBC problem and to propose an interactive learning mechanism to solve it in a continuous or lifelong manner. We empirically verify the effectiveness of LiLi on two standard real-world KBs: Freebase and WordNet. Experimental results show that LiLi is highly effective in terms of its predictive performance and strategy formulation ability."
],
[
"To the best of our knowledge, we are not aware of any knowledge learning system that can learn new knowledge in the conversation process. This section thus discusses other related work.",
"Among existing KB completion approaches, BIBREF20 extended the vector space model for zero-shot KB inference. However, the model cannot handle unknown entities and can only work on fixed set of unknown relations with known embeddings. Recently, BIBREF22 proposed a method using external text corpus to perform inference on unknown entities. However, the method cannot handle unknown relations. Thus, these methods are not suitable for our open-world setting. None of the existing KB inference methods perform interactive knowledge learning like LiLi. NELL BIBREF23 continuously updates its KB using facts extracted from the Web. Our task is very different as we do not do Web fact extraction (which is also useful). We focus on user interactions in this paper. Our work is related to interactive language learning (ILL) BIBREF24 , BIBREF25 , but these are not about KB completion. The work in BIBREF26 allows a learner to ask questions in dialogue. However, this work used RL to learn about whether to ask the user or not. The “what to ask aspect\" was manually designed by modeling synthetic tasks. LiLi formulates query-specific inference strategies which embed interaction behaviors. Also, no existing dialogue systems BIBREF4 , BIBREF27 , BIBREF28 , BIBREF29 , BIBREF30 employ lifelong learning to train prediction models by using information/knowledge retained in the past.",
"Our work is related to general lifelong learning in BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 , BIBREF35 , BIBREF36 . However, they learn only one type of tasks, e.g., supervised, topic modeling or reinforcement learning (RL) tasks. None of them is suitable for our setting, which involves interleaving of RL, supervised and interactive learning. More details about lifelong learning can be found in the book BIBREF31 ."
],
[
"We design LiLi as a combination of two interconnected models: (1) a RL model that learns to formulate a query-specific inference strategy for performing the OKBC task, and (2) a lifelong prediction model to predict whether a triple should be in the KB, which is invoked by an action while executing the inference strategy and is learned for each relation as in C-PR. The framework improves its performance over time through user interaction and knowledge retention. Compared to the existing KB inference methods, LiLi overcomes the following three challenges for OKBC:",
"1. Mapping open-world to close-world. Being a closed-world method, C-PR cannot extract path features and learn a prediction model when any of INLINEFORM0 , INLINEFORM1 or INLINEFORM2 is unknown. LiLi solves this problem through interactive knowledge acquisition. If INLINEFORM3 is unknown, LiLi asks the user to provide a clue (an example of INLINEFORM4 ). And if INLINEFORM5 or INLINEFORM6 is unknown, LiLi asks the user to provide a link (relation) to connect the unknown entity with an existing entity (automatically selected) in the KB. We refer to such a query as a connecting link query (CLQ). The acquired knowledge reduces OKBC to KBC and makes the inference task feasible.",
"2. Spareseness of KB. A main issue of all PR methods like C-PR is the connectivity of the KB graph. If there is no path connecting INLINEFORM0 and INLINEFORM1 in the graph, path enumeration of C-PR gets stuck and inference becomes infeasible. In such cases, LiLi uses a template relation (“@-?-@\") as the missing link marker to connect entity-pairs and continues feature extraction. A path containing “@-?-@\" is called an incomplete path. Thus, the extracted feature set contains both complete (no missing link) and incomplete paths. Next, LiLi selects an incomplete path from the feature set and asks the user to provide a link for path completion. We refer to such a query as missing link query (MLQ).",
"3. Limitation in user knowledge. If the user is unable to respond to MLQs or CLQs, LiLi uses a guessing mechanism (discussed later) to fill the gap. This enables LiLi to continue its inference even if the user cannot answer a system question."
],
[
"As lifelong learning needs to retain knowledge learned from past tasks and use it to help future learning BIBREF31 , LiLi uses a Knowledge Store (KS) for knowledge retention. KS has four components: (i) Knowledge Graph ( INLINEFORM0 ): INLINEFORM1 (the KB) is initialized with base KB triples (see §4) and gets updated over time with the acquired knowledge. (ii) Relation-Entity Matrix ( INLINEFORM2 ): INLINEFORM3 is a sparse matrix, with rows as relations and columns as entity-pairs and is used by the prediction model. Given a triple ( INLINEFORM4 , INLINEFORM5 , INLINEFORM6 ) INLINEFORM7 , we set INLINEFORM8 [ INLINEFORM9 , ( INLINEFORM10 , INLINEFORM11 )] = 1 indicating INLINEFORM12 occurs for pair ( INLINEFORM13 , INLINEFORM14 ). (iii) Task Experience Store ( INLINEFORM15 ): INLINEFORM16 stores the predictive performance of LiLi on past learned tasks in terms of Matthews correlation coefficient (MCC) that measures the quality of binary classification. So, for two tasks INLINEFORM17 and INLINEFORM18 (each relation is a task), if INLINEFORM19 [ INLINEFORM20 ] INLINEFORM21 INLINEFORM22 [ INLINEFORM23 ] [where INLINEFORM24 [ INLINEFORM25 ]=MCC( INLINEFORM26 )], we say C-PR has learned INLINEFORM27 well compared to INLINEFORM28 . (iv) Incomplete Feature DB ( INLINEFORM29 ): INLINEFORM30 stores the frequency of an incomplete path INLINEFORM31 in the form of a tuple ( INLINEFORM32 , INLINEFORM33 , INLINEFORM34 ) and is used in formulating MLQs. INLINEFORM35 [( INLINEFORM36 , INLINEFORM37 , INLINEFORM38 )] = INLINEFORM39 implies LiLi has extracted incomplete path INLINEFORM40 INLINEFORM41 times involving entity-pair INLINEFORM42 [( INLINEFORM43 , INLINEFORM44 )] for query relation INLINEFORM45 .",
"The RL model learns even after training whenever it encounters an unseen state (in testing) and thus, gets updated over time. KS is updated continuously over time as a result of the execution of LiLi and takes part in future learning. The prediction model uses lifelong learning (LL), where we transfer knowledge (parameter values) from the model for a past most similar task to help learn for the current task. Similar tasks are identified by factorizing INLINEFORM0 and computing a task similarity matrix INLINEFORM1 . Besides LL, LiLi uses INLINEFORM2 to identify poorly learned past tasks and acquire more clues for them to improve its skillset over time.",
"LiLi also uses a stack, called Inference Stack ( INLINEFORM0 ) to hold query and its state information for RL. LiLi always processes stack top ( INLINEFORM1 [top]). The clues from the user get stored in INLINEFORM2 on top of the query during strategy execution and processed first. Thus, the prediction model for INLINEFORM3 is learned before performing inference on query, transforming OKBC to a KBC problem. Table 1 shows the parameters of LiLi used in the following sections."
],
[
"Given an OKBC query ( INLINEFORM0 , INLINEFORM1 , INLINEFORM2 ), we represent it as a data instance INLINEFORM3 . INLINEFORM4 consists of INLINEFORM5 (the query triple), INLINEFORM6 (interaction limit set for INLINEFORM7 ), INLINEFORM8 (experience list storing the transition history of MDP for INLINEFORM9 in RL) and INLINEFORM10 (mode of INLINEFORM11 ) denoting if INLINEFORM12 is ` INLINEFORM13 ' (training), ` INLINEFORM14 ' (validation), ` INLINEFORM15 ' (evaluation) or ` INLINEFORM16 ' (clue) instance and INLINEFORM17 (feature set). We denote INLINEFORM18 ( INLINEFORM19 ) as the set of all complete (incomplete) path features in INLINEFORM20 . Given a data instance INLINEFORM21 , LiLi starts its initialization as follows: it sets the state as INLINEFORM22 (based on INLINEFORM23 , explained later), pushes the query tuple ( INLINEFORM24 , INLINEFORM25 ) into INLINEFORM26 and feeds INLINEFORM27 [top] to the RL-model for strategy formulation from INLINEFORM28 .",
"Inference Strategy Formulation. We view solving the strategy formulation problem as learning to play an inference game, where the goal is to formulate a strategy that \"makes the inference task possible\". Considering PR methods, inference is possible, iff (1) INLINEFORM0 becomes known to its KB (by acquiring clues when INLINEFORM1 is unknown) and (2) path features are extracted between INLINEFORM2 and INLINEFORM3 (which inturn requires INLINEFORM4 and INLINEFORM5 to be known to KB). If these conditions are met at the end of an episode (when strategy formulation finishes for a given query) of the game, LiLi wins and thus, it trains the prediction model for INLINEFORM6 and uses it for inference.",
"LiLi's strategy formulation is modeled as a Markov Decision Process (MDP) with finite state ( INLINEFORM0 ) and action ( INLINEFORM1 ) spaces. A state INLINEFORM2 consists of 10 binary state variables (Table 2), each of which keeps track of results of an action INLINEFORM3 taken by LiLi and thus, records the progress in inference process made so far. INLINEFORM4 is the initial state with all state bits set as 0. If the data instance (query) is a clue [ INLINEFORM5 ], INLINEFORM6 [CLUE] is set as 1. INLINEFORM7 consists of 6 actions (Table 3). INLINEFORM8 , INLINEFORM9 , INLINEFORM10 are processing actions and INLINEFORM11 , INLINEFORM12 , INLINEFORM13 are interactive actions. Whenever INLINEFORM14 is executed, the MDP reaches the terminal state. Given an action INLINEFORM15 in state INLINEFORM16 , if INLINEFORM17 is invalid in INLINEFORM21 or the objective of INLINEFORM22 is unsatisfied (* marked the condition in INLINEFORM23 ), RL receives a negative reward (empirically set); else receives a positive reward.. We use Q-learning BIBREF38 with INLINEFORM24 -greedy strategy to learn the optimal policy for training the RL model. Note that, the inference strategy is independent of KB type and correctness of prediction. Thus, the RL-model is trained only once from scratch (reused thereafter for other KBs) and also, independently of the prediction model.",
"Sometimes the training dataset may not be enough to learn optimal policy for all INLINEFORM0 . Thus, encountering an unseen state during test can make RL-model clueless about the action. Given a state INLINEFORM1 , whenever an invalid INLINEFORM2 is chosen, LiLi remains in INLINEFORM3 . For INLINEFORM4 , LiLi remains in INLINEFORM5 untill INLINEFORM6 (see Table 1 for INLINEFORM7 ). So, if the state remains the same for ( INLINEFORM8 +1) times, it implies LiLi has encountered a fault (an unseen state). RL-model instantly switches to the training mode and randomly explores INLINEFORM9 to learn the optimal action (fault-tolerant learning). While exploring INLINEFORM10 , the model chooses INLINEFORM11 only when it has tried all other INLINEFORM12 to avoid abrupt end of episode.",
"Execution of Actions. At any given point in time, let ( INLINEFORM0 , INLINEFORM1 ) be the current INLINEFORM2 [top], INLINEFORM3 is the chosen action and the current version of KS components are INLINEFORM4 , INLINEFORM5 , INLINEFORM6 and INLINEFORM7 . Then, if INLINEFORM8 is invalid in INLINEFORM9 , LiLi only updates INLINEFORM10 [top] with ( INLINEFORM11 , INLINEFORM12 ) and returns INLINEFORM13 [top] to RL-model. In this process, LiLi adds experience ( INLINEFORM14 , INLINEFORM15 , INLINEFORM16 , INLINEFORM17 ) in INLINEFORM18 and then, replaces INLINEFORM19 [top] with ( INLINEFORM20 , INLINEFORM21 ). If INLINEFORM22 is valid in INLINEFORM23 , LiLi first sets the next state INLINEFORM24 and performs a sequence of operations INLINEFORM25 based on INLINEFORM26 (discussed below). Unless specified, in INLINEFORM27 , LiLi always monitors INLINEFORM28 and if INLINEFORM29 becomes 0, LiLi sets INLINEFORM30 . Also, whenever LiLi asks the user a query, INLINEFORM31 is decremented by 1. Once INLINEFORM32 ends, LiLi updates INLINEFORM33 [top] with ( INLINEFORM34 , INLINEFORM35 ) and returns INLINEFORM36 [top] to RL-model for choosing the next action.",
"In INLINEFORM0 , LiLi searches INLINEFORM1 , INLINEFORM2 , INLINEFORM3 in INLINEFORM4 and sets appropriate bits in INLINEFORM5 (see Table 2). If INLINEFORM6 was unknown before and is just added to INLINEFORM7 or is in the bottom INLINEFORM8 % (see Table 1 for INLINEFORM9 ) of INLINEFORM10 , LiLi randomly sets INLINEFORM14 with probability INLINEFORM15 . If INLINEFORM16 is a clue and INLINEFORM17 , LiLi updates KS with triple INLINEFORM18 , where ( INLINEFORM19 , INLINEFORM20 , INLINEFORM21 ) and ( INLINEFORM22 , INLINEFORM23 , INLINEFORM24 ) gets added to INLINEFORM25 and INLINEFORM26 , INLINEFORM27 are set as 1.",
"In INLINEFORM0 , LiLi asks the user to provide a clue (+ve instance) for INLINEFORM1 and corrupts INLINEFORM2 and INLINEFORM3 of the clue once at a time, to generate -ve instances by sampling nodes from INLINEFORM4 . These instances help in training prediction model for INLINEFORM5 while executing INLINEFORM6 .",
"In INLINEFORM0 , LiLi selects an incomplete path INLINEFORM1 from INLINEFORM2 to formulate MLQ, such that INLINEFORM3 is most frequently observed for INLINEFORM4 and INLINEFORM5 is high, given by INLINEFORM6 . Here, INLINEFORM7 denotes the contextual similarity BIBREF16 of entity-pair INLINEFORM8 . If INLINEFORM9 is high, INLINEFORM10 is more likely to possess a relation between them and so, is a good candidate for formulating MLQ. When the user does not respond to MLQ (or CLQ in INLINEFORM11 ), the guessing mechanism is used, which works as follows: Since contextual similarity of entity-pairs is highly correlated with their class labels BIBREF16 , LiLi divides the similarity range [-1, 1] into three segments, using a low ( INLINEFORM12 ) and high ( INLINEFORM13 ) similarity threshold and replaces the missing link with INLINEFORM14 in INLINEFORM15 to make it complete as follows: If INLINEFORM16 , INLINEFORM17 = “@-LooselyRelatedTo-@\"; else if INLINEFORM18 , INLINEFORM19 =“@-NotRelatedTo-@\"; Otherwise, INLINEFORM20 =“@-RelatedTo-@\".",
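A sketch of the guessing mechanism follows. Because the exact threshold conditions are hidden by the INLINEFORM placeholders, the mapping from similarity ranges to template relations below is one plausible reading, and the threshold values are arbitrary placeholders rather than the paper's tuned values.

```python
def guess_missing_link(context_similarity, low=0.2, high=0.6):
    """Replace an unanswered MLQ/CLQ with a template relation based on the
    contextual similarity of the entity pair. The range-to-label mapping and
    the thresholds (low, high) are assumptions for illustration."""
    if context_similarity <= low:
        return "@-NotRelatedTo-@"
    if context_similarity >= high:
        return "@-RelatedTo-@"
    return "@-LooselyRelatedTo-@"
```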
"In INLINEFORM0 , LiLi asks CLQs for connecting unknown entities INLINEFORM1 and/or INLINEFORM2 with INLINEFORM3 by selecting the most contextually relevant node (wrt INLINEFORM4 , INLINEFORM5 ) from INLINEFORM6 , given by link INLINEFORM7 . We adopt the contextual relevance idea in BIBREF16 which is computed using word embedding BIBREF39 ",
"In INLINEFORM0 , LiLi extracts path features INLINEFORM1 between ( INLINEFORM2 , INLINEFORM3 ) and updates INLINEFORM4 with incomplete features from INLINEFORM5 . LiLi always trains the prediction model with complete features INLINEFORM6 and once INLINEFORM7 or INLINEFORM8 , LiLi stops asking MLQs. Thus, in both INLINEFORM9 and INLINEFORM10 , LiLi always monitors INLINEFORM11 to check for the said requirements and sets INLINEFORM12 to control interactions.",
"In INLINEFORM0 , if LiLi wins the episode, it adds INLINEFORM1 in one of data buffers INLINEFORM2 based on its mode INLINEFORM3 . E.g., if INLINEFORM4 or INLINEFORM5 , INLINEFORM6 is used for training and added to INLINEFORM7 . Similarly validation buffer INLINEFORM8 and evaluation buffer INLINEFORM9 are populated. If INLINEFORM10 , LiLi invokes the prediction model for INLINEFORM11 .",
"Lifelong Relation Prediction. Given a relation INLINEFORM0 , LiLi uses INLINEFORM1 and INLINEFORM2 (see INLINEFORM3 ) to train a prediction model (say, INLINEFORM4 ) with parameters INLINEFORM5 . For a unknown INLINEFORM6 , the clue instances get stored in INLINEFORM7 and INLINEFORM8 . Thus, LiLi populates INLINEFORM9 by taking 10% (see §4) of the instances from INLINEFORM10 and starts the training. For INLINEFORM11 , LiLi uses a LSTM BIBREF40 to compose the vector representation of each feature INLINEFORM12 as INLINEFORM13 and vector representation of INLINEFORM14 as INLINEFORM15 . Next, LiLi computes the prediction value, INLINEFORM16 as sigmoid of the mean cosine similarity of all features and INLINEFORM17 , given by INLINEFORM18 ) and maximize the log-likelihood of INLINEFORM19 for training. Once INLINEFORM20 is trained, LiLi updates INLINEFORM21 [ INLINEFORM22 ] using INLINEFORM23 . We also train an inverse model for INLINEFORM24 , INLINEFORM25 by reversing the path features in INLINEFORM26 and INLINEFORM27 which help in lifelong learning (discussed below). Unlike BIBREF20 , BIBREF21 , while predicting the label for INLINEFORM28 , we compute a relation-specific prediction threshold INLINEFORM29 corresponding to INLINEFORM30 using INLINEFORM31 as: INLINEFORM32 and infer INLINEFORM33 as +ve if INLINEFORM34 and -ve otherwise. Here, INLINEFORM35 ( INLINEFORM36 ) is the mean prediction value for all +ve (-ve) examples in INLINEFORM37 .",
"Models trained on a few examples (e.g., clues acquired for unknown INLINEFORM0 ) with randomly initialized weights often perform poorly due to underfitting. Thus, we transfer knowledge (weights) from the past most similar (wrt INLINEFORM1 ) task in a lifelong learning manner BIBREF31 . LiLi uses INLINEFORM2 to find the past most similar task for INLINEFORM3 as follows: LiLi computes trancated SVD of INLINEFORM4 as INLINEFORM5 and then, the similarity matrix INLINEFORM6 . INLINEFORM7 provides the similarity between relations INLINEFORM8 and INLINEFORM9 in INLINEFORM10 . Thus, LiLi chooses a source relation INLINEFORM11 to transfer weights. Here, INLINEFORM12 is the set of all INLINEFORM13 and INLINEFORM14 for which LiLi has already learned a prediction model. Now, if INLINEFORM15 or INLINEFORM16 , LiLi randomly initializes the weights INLINEFORM17 for INLINEFORM18 and proceeds with the training. Otherwise, LiLi uses INLINEFORM19 as initial weights and fine-tunes INLINEFORM20 with a low learning rate.",
"A Running Example. Considering the example shown in Figure 1, LiLi works as follows: first, LiLi executes INLINEFORM0 and detects that the source entity “Obama\" and query relation “CitizenOf\" are unknown. Thus, LiLi executes INLINEFORM1 to acquire clue (SF1) for “CitizenOf\" and pushes the clue (+ve example) and two generated -ve examples into INLINEFORM2 . Once the clues are processed and a prediction model is trained for “CitizenOf\" by formulating separate strategies for them, LiLi becomes aware of “CitizenOf\". Now, as the clues have already been popped from INLINEFORM3 , the query becomes INLINEFORM4 and the strategy formulation process for the query resumes. Next, LiLi asks user to provide a connecting link for “Obama\" by performing INLINEFORM5 . Now, the query entities and relation being known, LiLi enumerates paths between “Obama\" and “USA\" by performing INLINEFORM6 . Let an extracted path be “ INLINEFORM7 \" with missing link between ( INLINEFORM8 , INLINEFORM9 ). LiLi asks the user to fill the link by performing INLINEFORM10 and then, extracts the complete feature “ INLINEFORM11 \". The feature set is then fed to the prediction model and inference is made as a result of INLINEFORM12 . Thus, the formulated inference strategy is: “ INLINEFORM13 \"."
],
[
"We now evaluate LiLi in terms of its predictive performance and strategy formulation abilities.",
"Data: We use two standard datasets (see Table 4): (1) Freebase FB15k, and (2) WordNet INLINEFORM0 . Using each dataset, we build a fairly large graph and use it as the original KB ( INLINEFORM1 ) for evaluation. We also augment INLINEFORM2 with inverse triples ( INLINEFORM3 , INLINEFORM4 , INLINEFORM5 ) for each ( INLINEFORM6 , INLINEFORM7 , INLINEFORM8 ) following existing KBC methods.",
"Parameter Settings. Unless specified, the empirically set parameters (see Table 1) of LiLi are: INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 , INLINEFORM4 , INLINEFORM5 , INLINEFORM6 , INLINEFORM7 , INLINEFORM8 , INLINEFORM9 , INLINEFORM10 . For training RL-model with INLINEFORM11 -greedy strategy, we use INLINEFORM12 , INLINEFORM13 , pre-training steps=50000. We used Keras deep learning library to implement and train the prediction model. We set batch-size as 128, max. training epoch as 150, dropout as 0.2, hidden units and embedding size as 300 and learning rate as 5e-3 which is reduced gradually on plateau with factor 0.5 and patience 5. Adam optimizer and early stopping were used in training. We also shuffle INLINEFORM14 in each epoch and adjust class weights inversely proportional to class frequencies in INLINEFORM15 .",
"Labeled Dataset Generation and Simulated User Creation. We create a simulated user for each KB to evaluate LiLi. We create the labeled datasets, the simulated user’s knowledge base ( INLINEFORM0 ), and the base KB ( INLINEFORM1 ) from INLINEFORM2 . INLINEFORM3 used as the initial KB graph ( INLINEFORM4 ) of LiLi.",
"We followed BIBREF16 for labeled dataset generation. For Freebase, we found 86 relations with INLINEFORM0 triples and randomly selected 50 from various domains. We randomly shuffle the list of 50 relations, select 25% of them as unknown relations and consider the rest (75%) as known relations. For each known relation INLINEFORM1 , we randomly shuffle the list of distinct triples for INLINEFORM2 , choose 1000 triples and split them into 60% training, 10% validation and 20% test. Rest 10% along with the leftover (not included in the list of 1000) triples are added to INLINEFORM3 . For each unknown relation INLINEFORM4 , we remove all triples of INLINEFORM5 from INLINEFORM6 and add them to INLINEFORM7 . In this process, we also randomly choose 20% triples as test instances for unknown INLINEFORM8 which are excluded from INLINEFORM9 . Note that, now INLINEFORM10 has at least 10% of chosen triples for each INLINEFORM11 (known and unknown) and so, user is always able to provide clues for both cases. For each labeled dataset, we randomly choose 10% of the entities present in dataset triples, remove triples involving those entities from INLINEFORM12 and add to INLINEFORM13 . At this point, INLINEFORM14 gets reduced to INLINEFORM15 and is used as INLINEFORM16 for LiLi. The dataset stats in Table 4 shows that the base KB (60% triples of INLINEFORM17 ) is highly sparse (compared to original KB) which makes the inference task much harder. WordNet dataset being small, we select all 18 relations for evaluation and create labeled dataset, INLINEFORM18 and INLINEFORM19 following Freebase. Although the user may provide clues 100% of the time, it often cannot respond to MLQs and CLQs (due to lack of required triples/facts). Thus, we further enrich INLINEFORM20 with external KB triples.",
"Given a relation INLINEFORM0 and an observed triple ( INLINEFORM1 , INLINEFORM2 , INLINEFORM3 ) in training or testing, the pair ( INLINEFORM4 , INLINEFORM5 ) is regarded as a +ve instance for INLINEFORM6 . Following BIBREF18 , for each +ve instance ( INLINEFORM7 , INLINEFORM8 ), we generate two negative ones, one by randomly corrupting the source INLINEFORM9 , and the other by corrupting the target INLINEFORM10 . Note that, the test triples are not in INLINEFORM11 or INLINEFORM12 and none of the -ve instances overlap with the +ve ones.",
"Baselines. As none of the existing KBC methods can solve the OKBC problem, we choose various versions of LiLi as baselines.",
"Single: Version of LiLi where we train a single prediction model INLINEFORM0 for all test relations.",
"Sep: We do not transfer (past learned) weights for initializing INLINEFORM0 , i.e., we disable LL.",
"F-th): Here, we use a fixed prediction threshold 0.5 instead of relation-specific threshold INLINEFORM0 .",
"BG: The missing or connecting links (when the user does not respond) are filled with “@-RelatedTo-@\" blindly, no guessing mechanism.",
"w/o PTS: LiLi does not ask for additional clues via past task selection for skillset improvement.",
"Evaluation Metrics. To evaluate the strategy formulation ability, we introduce a measure called Coverage( INLINEFORM0 ), defined as the fraction of total query data instances, for which LiLi has successfully formulated strategies that lead to winning. If LiLi wins on all episodes for a given dataset, INLINEFORM1 is 1.0. To evaluate the predictive performance, we use Avg. MCC and avg. +ve F1 score."
],
[
"Evaluation-I: Strategy Formulation Ability. Table 5 shows the list of inference strategies formulated by LiLi for various INLINEFORM0 and INLINEFORM1 , which control the strategy formulation of LiLi. When INLINEFORM2 , LiLi cannot interact with user and works like a closed-world method. Thus, INLINEFORM3 drops significantly (0.47). When INLINEFORM4 , i.e. with only one interaction per query, LiLi acquires knowledge well for instances where either of the entities or relation is unknown. However, as one unknown entity may appear in multiple test triples, once the entity becomes known, LiLi doesn’t need to ask for it again and can perform inference on future triples causing significant increase in INLINEFORM5 (0.97). When INLINEFORM6 , LiLi is able to perform inference on all instances and INLINEFORM7 becomes 1. For INLINEFORM8 , LiLi uses INLINEFORM9 only once (as only one MLQ satisfies INLINEFORM10 ) compared to INLINEFORM11 . In summary, LiLi’s RL-model can effectively formulate query-specific inference strategies (based on specified parameter values). Evaluation-II: Predictive Performance. Table 6 shows the comparative performance of LiLi with baselines. To judge the overall improvements, we performed paired t-test considering +ve F1 scores on each relation as paired data. Considering both KBs and all relation types, LiLi outperforms Sep with INLINEFORM12 . If we set INLINEFORM13 (training with very few clues), LiLi outperforms Sep with INLINEFORM14 on Freebase considering MCC. Thus, the lifelong learning mechanism is effective in transferring helpful knowledge. Single model performs better than Sep for unknown relations due to the sharing of knowledge (weights) across tasks. However, for known relations, performance drops because, as a new relation arrives to the system, old weights get corrupted and catastrophic forgetting occurs. For unknown relations, as the relations are evaluated just after training, there is no chance for catastrophic forgetting. The performance improvement ( INLINEFORM15 ) of LiLi over F-th on Freebase signifies that the relation-specific threshold INLINEFORM16 works better than fixed threshold 0.5 because, if all prediction values for test instances lie above (or below) 0.5, F-th predicts all instances as +ve (-ve) which degrades its performance. Due to the utilization of contextual similarity (highly correlated with class labels) of entity-pairs, LiLi’s guessing mechanism works better ( INLINEFORM17 ) than blind guessing (BG). The past task selection mechanism of LiLi also improves its performance over w/o PTS, as it acquires more clues during testing for poorly performed tasks (evaluated on validation set). For Freebase, due to a large number of past tasks [9 (25% of 38)], the performance difference is more significant ( INLINEFORM18 ). For WordNet, the number is relatively small [3 (25% of 14)] and hence, the difference is not significant.",
"Evaluation-III: User Interaction vs. Performance. Table 7 shows the results of LiLi by varying clue acquisition rate ( INLINEFORM0 ). We use Freebase for tuning INLINEFORM1 due to its higher number of unknown test relations compared to WordNet. LiLi’s performance improves significantly as it acquires more clues from the user. The results on INLINEFORM2 outperforms ( INLINEFORM3 ) that on INLINEFORM4 . Table 8 shows the results of LiLi on user responses to MLQ’s and CLQ’s. Answering MLQ’s and CLQ’s is very hard for simulated users (unlike crowd-sourcing) as often INLINEFORM5 lacks the required triple. Thus, we attempt to analyze how the performance is effected if the user does not respond at all. The results show a clear trend in overall performance improvement when the user responds. However, the improvement is not significant as the simulated user’s query satisfaction rate (1% MLQs and 10% CLQs) is very small. But, the analysis shows the effectiveness of LiLi’s guessing mechanism and continual learning ability that help in achieving avg. +ve F1 of 0.57 and 0.62 on FB and WN respectively with minimal participation of the user."
],
[
" In this paper, we are interested in building a generic engine for continuous knowledge learning in human-machine conversations. We first showed that the problem underlying the engine can be formulated as an open-world knowledge base completion (OKBC) problem. We then proposed an lifelong interactive learning and inference (LiLi) approach to solving the OKBC problem. OKBC is a generalization of KBC. LiLi solves the OKBC problem by first formulating a query-specific inference strategy using RL and then executing it to solve the problem by interacting with the user in a lifelong learning manner. Experimental results showed the effectiveness of LiLi in terms of both predictive quality and strategy formulation ability. We believe that a system with the LiLi approach can serve as a knowledge learning engine for conversations. Our future work will improve LiLi to make more accurate."
],
[
"This work was supported in part by National Science Foundation (NSF) under grant no. IIS-1407927 and IIS-1650900, and a gift from Huawei Technologies Co Ltd."
]
],
"section_name": [
"Introduction",
"Related Work",
"Interactive Knowledge Learning (LiLi)",
"Components of LiLi",
"Working of LiLi",
"Experiments",
"Results and Analysis",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"8c5faa91a3736810aa4747688b8a5eea5cf19c21",
"cdca429774324ce7ceaecebbef456c09f81c9b6f"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"8358504a434f7414031f550d6b2d7eeaf258096c",
"eeba7d1dafb368f928402450a7d34b5462e7e526"
],
"answer": [
{
"evidence": [
"Baselines. As none of the existing KBC methods can solve the OKBC problem, we choose various versions of LiLi as baselines.",
"Single: Version of LiLi where we train a single prediction model INLINEFORM0 for all test relations.",
"Sep: We do not transfer (past learned) weights for initializing INLINEFORM0 , i.e., we disable LL.",
"F-th): Here, we use a fixed prediction threshold 0.5 instead of relation-specific threshold INLINEFORM0 .",
"BG: The missing or connecting links (when the user does not respond) are filled with “@-RelatedTo-@\" blindly, no guessing mechanism.",
"w/o PTS: LiLi does not ask for additional clues via past task selection for skillset improvement.",
"Evaluation-I: Strategy Formulation Ability. Table 5 shows the list of inference strategies formulated by LiLi for various INLINEFORM0 and INLINEFORM1 , which control the strategy formulation of LiLi. When INLINEFORM2 , LiLi cannot interact with user and works like a closed-world method. Thus, INLINEFORM3 drops significantly (0.47). When INLINEFORM4 , i.e. with only one interaction per query, LiLi acquires knowledge well for instances where either of the entities or relation is unknown. However, as one unknown entity may appear in multiple test triples, once the entity becomes known, LiLi doesn’t need to ask for it again and can perform inference on future triples causing significant increase in INLINEFORM5 (0.97). When INLINEFORM6 , LiLi is able to perform inference on all instances and INLINEFORM7 becomes 1. For INLINEFORM8 , LiLi uses INLINEFORM9 only once (as only one MLQ satisfies INLINEFORM10 ) compared to INLINEFORM11 . In summary, LiLi’s RL-model can effectively formulate query-specific inference strategies (based on specified parameter values). Evaluation-II: Predictive Performance. Table 6 shows the comparative performance of LiLi with baselines. To judge the overall improvements, we performed paired t-test considering +ve F1 scores on each relation as paired data. Considering both KBs and all relation types, LiLi outperforms Sep with INLINEFORM12 . If we set INLINEFORM13 (training with very few clues), LiLi outperforms Sep with INLINEFORM14 on Freebase considering MCC. Thus, the lifelong learning mechanism is effective in transferring helpful knowledge. Single model performs better than Sep for unknown relations due to the sharing of knowledge (weights) across tasks. However, for known relations, performance drops because, as a new relation arrives to the system, old weights get corrupted and catastrophic forgetting occurs. For unknown relations, as the relations are evaluated just after training, there is no chance for catastrophic forgetting. The performance improvement ( INLINEFORM15 ) of LiLi over F-th on Freebase signifies that the relation-specific threshold INLINEFORM16 works better than fixed threshold 0.5 because, if all prediction values for test instances lie above (or below) 0.5, F-th predicts all instances as +ve (-ve) which degrades its performance. Due to the utilization of contextual similarity (highly correlated with class labels) of entity-pairs, LiLi’s guessing mechanism works better ( INLINEFORM17 ) than blind guessing (BG). The past task selection mechanism of LiLi also improves its performance over w/o PTS, as it acquires more clues during testing for poorly performed tasks (evaluated on validation set). For Freebase, due to a large number of past tasks [9 (25% of 38)], the performance difference is more significant ( INLINEFORM18 ). For WordNet, the number is relatively small [3 (25% of 14)] and hence, the difference is not significant.",
"FLOAT SELECTED: Table 6: Comparison of predictive performance of various versions of LiLi [kwn = known, unk = unknown, all = overall]."
],
"extractive_spans": [],
"free_form_answer": "In case of Freebase knowledge base, LiLi model had better F1 score than the single model by 0.20 , 0.01, 0.159 for kwn, unk, and all test Rel type. The values for WordNet are 0.25, 0.1, 0.2. \n",
"highlighted_evidence": [
"Baselines. As none of the existing KBC methods can solve the OKBC problem, we choose various versions of LiLi as baselines.\n\nSingle: Version of LiLi where we train a single prediction model INLINEFORM0 for all test relations.\n\nSep: We do not transfer (past learned) weights for initializing INLINEFORM0 , i.e., we disable LL.\n\nF-th): Here, we use a fixed prediction threshold 0.5 instead of relation-specific threshold INLINEFORM0 .\n\nBG: The missing or connecting links (when the user does not respond) are filled with “@-RelatedTo-@\" blindly, no guessing mechanism.\n\nw/o PTS: LiLi does not ask for additional clues via past task selection for skillset improvement.",
"Table 6 shows the comparative performance of LiLi with baselines. To judge the overall improvements, we performed paired t-test considering +ve F1 scores on each relation as paired data. Considering both KBs and all relation types, LiLi outperforms Sep with INLINEFORM12 . If we set INLINEFORM13 (training with very few clues), LiLi outperforms Sep with INLINEFORM14 on Freebase considering MCC. Thus, the lifelong learning mechanism is effective in transferring helpful knowledge. ",
"FLOAT SELECTED: Table 6: Comparison of predictive performance of various versions of LiLi [kwn = known, unk = unknown, all = overall]."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"3857990188d472168e51eba265a29b997d109b72",
"8bb504ad711785740c881d1987f2ee6d1e9ac7d5"
],
"answer": [
{
"evidence": [
"Baselines. As none of the existing KBC methods can solve the OKBC problem, we choose various versions of LiLi as baselines.",
"Single: Version of LiLi where we train a single prediction model INLINEFORM0 for all test relations.",
"Sep: We do not transfer (past learned) weights for initializing INLINEFORM0 , i.e., we disable LL.",
"F-th): Here, we use a fixed prediction threshold 0.5 instead of relation-specific threshold INLINEFORM0 .",
"BG: The missing or connecting links (when the user does not respond) are filled with “@-RelatedTo-@\" blindly, no guessing mechanism.",
"w/o PTS: LiLi does not ask for additional clues via past task selection for skillset improvement."
],
"extractive_spans": [
"versions of LiLi"
],
"free_form_answer": "",
"highlighted_evidence": [
"Baselines. As none of the existing KBC methods can solve the OKBC problem, we choose various versions of LiLi as baselines.\n\nSingle: Version of LiLi where we train a single prediction model INLINEFORM0 for all test relations.\n\nSep: We do not transfer (past learned) weights for initializing INLINEFORM0 , i.e., we disable LL.\n\nF-th): Here, we use a fixed prediction threshold 0.5 instead of relation-specific threshold INLINEFORM0 .\n\nBG: The missing or connecting links (when the user does not respond) are filled with “@-RelatedTo-@\" blindly, no guessing mechanism.\n\nw/o PTS: LiLi does not ask for additional clues via past task selection for skillset improvement."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Baselines. As none of the existing KBC methods can solve the OKBC problem, we choose various versions of LiLi as baselines.",
"Single: Version of LiLi where we train a single prediction model INLINEFORM0 for all test relations.",
"Sep: We do not transfer (past learned) weights for initializing INLINEFORM0 , i.e., we disable LL.",
"F-th): Here, we use a fixed prediction threshold 0.5 instead of relation-specific threshold INLINEFORM0 .",
"BG: The missing or connecting links (when the user does not respond) are filled with “@-RelatedTo-@\" blindly, no guessing mechanism.",
"w/o PTS: LiLi does not ask for additional clues via past task selection for skillset improvement."
],
"extractive_spans": [
"various versions of LiLi as baselines",
"Single",
"Sep",
"F-th",
"BG",
"w/o PTS"
],
"free_form_answer": "",
"highlighted_evidence": [
"Baselines. As none of the existing KBC methods can solve the OKBC problem, we choose various versions of LiLi as baselines.\n\nSingle: Version of LiLi where we train a single prediction model INLINEFORM0 for all test relations.\n\nSep: We do not transfer (past learned) weights for initializing INLINEFORM0 , i.e., we disable LL.\n\nF-th): Here, we use a fixed prediction threshold 0.5 instead of relation-specific threshold INLINEFORM0 .\n\nBG: The missing or connecting links (when the user does not respond) are filled with “@-RelatedTo-@\" blindly, no guessing mechanism.\n\nw/o PTS: LiLi does not ask for additional clues via past task selection for skillset improvement."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"7c018db97e98972e0d0f26697a7830456982dff5",
"a68d8bd0218ad29e5b886400df91deee1d28e9a4"
],
"answer": [
{
"evidence": [
"We solve the OKBC problem by mimicking how humans acquire knowledge and perform reasoning in an interactive conversation. Whenever we encounter an unknown concept or relation while answering a query, we perform inference using our existing knowledge. If our knowledge does not allow us to draw a conclusion, we typically ask questions to others to acquire related knowledge and use it in inference. The process typically involves an inference strategy (a sequence of actions), which interleaves a sequence of processing and interactive actions. A processing action can be the selection of related facts, deriving inference chain, etc., that advances the inference process. An interactive action can be deciding what to ask, formulating a suitable question, etc., that enable us to interact. The process helps grow the knowledge over time and the gained knowledge enables us to communicate better in the future. We call this lifelong interactive learning and inference (LiLi). Lifelong learning is reflected by the facts that the newly acquired facts are retained in the KB and used in inference for future queries, and that the accumulated knowledge in addition to the updated KB including past inference performances are leveraged to guide future interaction and learning. LiLi should have the following capabilities:"
],
"extractive_spans": [
"newly acquired facts are retained in the KB and used in inference for future queries, and that the accumulated knowledge in addition to the updated KB including past inference performances are leveraged to guide future interaction and learning"
],
"free_form_answer": "",
"highlighted_evidence": [
"We call this lifelong interactive learning and inference (LiLi). Lifelong learning is reflected by the facts that the newly acquired facts are retained in the KB and used in inference for future queries, and that the accumulated knowledge in addition to the updated KB including past inference performances are leveraged to guide future interaction and learning."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We solve the OKBC problem by mimicking how humans acquire knowledge and perform reasoning in an interactive conversation. Whenever we encounter an unknown concept or relation while answering a query, we perform inference using our existing knowledge. If our knowledge does not allow us to draw a conclusion, we typically ask questions to others to acquire related knowledge and use it in inference. The process typically involves an inference strategy (a sequence of actions), which interleaves a sequence of processing and interactive actions. A processing action can be the selection of related facts, deriving inference chain, etc., that advances the inference process. An interactive action can be deciding what to ask, formulating a suitable question, etc., that enable us to interact. The process helps grow the knowledge over time and the gained knowledge enables us to communicate better in the future. We call this lifelong interactive learning and inference (LiLi). Lifelong learning is reflected by the facts that the newly acquired facts are retained in the KB and used in inference for future queries, and that the accumulated knowledge in addition to the updated KB including past inference performances are leveraged to guide future interaction and learning. LiLi should have the following capabilities:"
],
"extractive_spans": [
"Whenever we encounter an unknown concept or relation while answering a query, we perform inference using our existing knowledge. If our knowledge does not allow us to draw a conclusion, we typically ask questions to others to acquire related knowledge and use it in inference. "
],
"free_form_answer": "",
"highlighted_evidence": [
"We solve the OKBC problem by mimicking how humans acquire knowledge and perform reasoning in an interactive conversation. Whenever we encounter an unknown concept or relation while answering a query, we perform inference using our existing knowledge. If our knowledge does not allow us to draw a conclusion, we typically ask questions to others to acquire related knowledge and use it in inference. The process typically involves an inference strategy (a sequence of actions), which interleaves a sequence of processing and interactive actions. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"48461765d09f6eb41b036421fb6b7d261f3cb49e",
"6618ac2f2bde471a9ccd963366fb4bed24159aa4"
],
"answer": [
{
"evidence": [
"Evaluation Metrics. To evaluate the strategy formulation ability, we introduce a measure called Coverage( INLINEFORM0 ), defined as the fraction of total query data instances, for which LiLi has successfully formulated strategies that lead to winning. If LiLi wins on all episodes for a given dataset, INLINEFORM1 is 1.0. To evaluate the predictive performance, we use Avg. MCC and avg. +ve F1 score."
],
"extractive_spans": [
"Coverage",
"Avg. MCC and avg. +ve F1 score"
],
"free_form_answer": "",
"highlighted_evidence": [
"Evaluation Metrics. To evaluate the strategy formulation ability, we introduce a measure called Coverage( INLINEFORM0 ), defined as the fraction of total query data instances, for which LiLi has successfully formulated strategies that lead to winning. If LiLi wins on all episodes for a given dataset, INLINEFORM1 is 1.0. To evaluate the predictive performance, we use Avg. MCC and avg. +ve F1 score."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Evaluation Metrics. To evaluate the strategy formulation ability, we introduce a measure called Coverage( INLINEFORM0 ), defined as the fraction of total query data instances, for which LiLi has successfully formulated strategies that lead to winning. If LiLi wins on all episodes for a given dataset, INLINEFORM1 is 1.0. To evaluate the predictive performance, we use Avg. MCC and avg. +ve F1 score."
],
"extractive_spans": [
"strategy formulation ability, we introduce a measure called Coverage( INLINEFORM0 )",
"To evaluate the predictive performance, we use Avg. MCC and avg. +ve F1 score"
],
"free_form_answer": "",
"highlighted_evidence": [
"To evaluate the strategy formulation ability, we introduce a measure called Coverage( INLINEFORM0 ), defined as the fraction of total query data instances, for which LiLi has successfully formulated strategies that lead to winning. If LiLi wins on all episodes for a given dataset, INLINEFORM1 is 1.0. To evaluate the predictive performance, we use Avg. MCC and avg. +ve F1 score."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"a3cb3b3427bc37ef26476cbfac39951c268270e1",
"e323ec718eef3f5a8a4b336b2c5948d0768cd193"
],
"answer": [
{
"evidence": [
"We solve the OKBC problem by mimicking how humans acquire knowledge and perform reasoning in an interactive conversation. Whenever we encounter an unknown concept or relation while answering a query, we perform inference using our existing knowledge. If our knowledge does not allow us to draw a conclusion, we typically ask questions to others to acquire related knowledge and use it in inference. The process typically involves an inference strategy (a sequence of actions), which interleaves a sequence of processing and interactive actions. A processing action can be the selection of related facts, deriving inference chain, etc., that advances the inference process. An interactive action can be deciding what to ask, formulating a suitable question, etc., that enable us to interact. The process helps grow the knowledge over time and the gained knowledge enables us to communicate better in the future. We call this lifelong interactive learning and inference (LiLi). Lifelong learning is reflected by the facts that the newly acquired facts are retained in the KB and used in inference for future queries, and that the accumulated knowledge in addition to the updated KB including past inference performances are leveraged to guide future interaction and learning. LiLi should have the following capabilities:"
],
"extractive_spans": [],
"free_form_answer": "Answer with content missing: (list)\nLiLi should have the following capabilities:\n1. to formulate an inference strategy for a given query that embeds processing and interactive actions.\n2. to learn interaction behaviors (deciding what to ask and when to ask the user).\n3. to leverage the acquired knowledge in the current and future inference process.\n4. to perform 1, 2 and 3 in a lifelong manner for continuous knowledge learning.",
"highlighted_evidence": [
"LiLi should have the following capabilities:"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"As lifelong learning needs to retain knowledge learned from past tasks and use it to help future learning BIBREF31 , LiLi uses a Knowledge Store (KS) for knowledge retention. KS has four components: (i) Knowledge Graph ( INLINEFORM0 ): INLINEFORM1 (the KB) is initialized with base KB triples (see §4) and gets updated over time with the acquired knowledge. (ii) Relation-Entity Matrix ( INLINEFORM2 ): INLINEFORM3 is a sparse matrix, with rows as relations and columns as entity-pairs and is used by the prediction model. Given a triple ( INLINEFORM4 , INLINEFORM5 , INLINEFORM6 ) INLINEFORM7 , we set INLINEFORM8 [ INLINEFORM9 , ( INLINEFORM10 , INLINEFORM11 )] = 1 indicating INLINEFORM12 occurs for pair ( INLINEFORM13 , INLINEFORM14 ). (iii) Task Experience Store ( INLINEFORM15 ): INLINEFORM16 stores the predictive performance of LiLi on past learned tasks in terms of Matthews correlation coefficient (MCC) that measures the quality of binary classification. So, for two tasks INLINEFORM17 and INLINEFORM18 (each relation is a task), if INLINEFORM19 [ INLINEFORM20 ] INLINEFORM21 INLINEFORM22 [ INLINEFORM23 ] [where INLINEFORM24 [ INLINEFORM25 ]=MCC( INLINEFORM26 )], we say C-PR has learned INLINEFORM27 well compared to INLINEFORM28 . (iv) Incomplete Feature DB ( INLINEFORM29 ): INLINEFORM30 stores the frequency of an incomplete path INLINEFORM31 in the form of a tuple ( INLINEFORM32 , INLINEFORM33 , INLINEFORM34 ) and is used in formulating MLQs. INLINEFORM35 [( INLINEFORM36 , INLINEFORM37 , INLINEFORM38 )] = INLINEFORM39 implies LiLi has extracted incomplete path INLINEFORM40 INLINEFORM41 times involving entity-pair INLINEFORM42 [( INLINEFORM43 , INLINEFORM44 )] for query relation INLINEFORM45 .",
"The RL model learns even after training whenever it encounters an unseen state (in testing) and thus, gets updated over time. KS is updated continuously over time as a result of the execution of LiLi and takes part in future learning. The prediction model uses lifelong learning (LL), where we transfer knowledge (parameter values) from the model for a past most similar task to help learn for the current task. Similar tasks are identified by factorizing INLINEFORM0 and computing a task similarity matrix INLINEFORM1 . Besides LL, LiLi uses INLINEFORM2 to identify poorly learned past tasks and acquire more clues for them to improve its skillset over time.",
"LiLi also uses a stack, called Inference Stack ( INLINEFORM0 ) to hold query and its state information for RL. LiLi always processes stack top ( INLINEFORM1 [top]). The clues from the user get stored in INLINEFORM2 on top of the query during strategy execution and processed first. Thus, the prediction model for INLINEFORM3 is learned before performing inference on query, transforming OKBC to a KBC problem. Table 1 shows the parameters of LiLi used in the following sections."
],
"extractive_spans": [
"Knowledge Store (KS) ",
"Knowledge Graph ( INLINEFORM0 )",
" Relation-Entity Matrix ( INLINEFORM2 )",
"Task Experience Store ( INLINEFORM15 )",
"Incomplete Feature DB ( INLINEFORM29 )"
],
"free_form_answer": "",
"highlighted_evidence": [
"As lifelong learning needs to retain knowledge learned from past tasks and use it to help future learning BIBREF31 , LiLi uses a Knowledge Store (KS) for knowledge retention. KS has four components: (i) Knowledge Graph ( INLINEFORM0 ): INLINEFORM1 (the KB) is initialized with base KB triples (see §4) and gets updated over time with the acquired knowledge. (ii) Relation-Entity Matrix ( INLINEFORM2 ): INLINEFORM3 is a sparse matrix, with rows as relations and columns as entity-pairs and is used by the prediction model. Given a triple ( INLINEFORM4 , INLINEFORM5 , INLINEFORM6 ) INLINEFORM7 , we set INLINEFORM8 [ INLINEFORM9 , ( INLINEFORM10 , INLINEFORM11 )] = 1 indicating INLINEFORM12 occurs for pair ( INLINEFORM13 , INLINEFORM14 ). (iii) Task Experience Store ( INLINEFORM15 ): INLINEFORM16 stores the predictive performance of LiLi on past learned tasks in terms of Matthews correlation coefficient (MCC) that measures the quality of binary classification. So, for two tasks INLINEFORM17 and INLINEFORM18 (each relation is a task), if INLINEFORM19 [ INLINEFORM20 ] INLINEFORM21 INLINEFORM22 [ INLINEFORM23 ] [where INLINEFORM24 [ INLINEFORM25 ]=MCC( INLINEFORM26 )], we say C-PR has learned INLINEFORM27 well compared to INLINEFORM28 . (iv) Incomplete Feature DB ( INLINEFORM29 ): INLINEFORM30 stores the frequency of an incomplete path INLINEFORM31 in the form of a tuple ( INLINEFORM32 , INLINEFORM33 , INLINEFORM34 ) and is used in formulating MLQs. INLINEFORM35 [( INLINEFORM36 , INLINEFORM37 , INLINEFORM38 )] = INLINEFORM39 implies LiLi has extracted incomplete path INLINEFORM40 INLINEFORM41 times involving entity-pair INLINEFORM42 [( INLINEFORM43 , INLINEFORM44 )] for query relation INLINEFORM45 .\n\nThe RL model learns even after training whenever it encounters an unseen state (in testing) and thus, gets updated over time. KS is updated continuously over time as a result of the execution of LiLi and takes part in future learning. The prediction model uses lifelong learning (LL), where we transfer knowledge (parameter values) from the model for a past most similar task to help learn for the current task. Similar tasks are identified by factorizing INLINEFORM0 and computing a task similarity matrix INLINEFORM1 . Besides LL, LiLi uses INLINEFORM2 to identify poorly learned past tasks and acquire more clues for them to improve its skillset over time.\n\nLiLi also uses a stack, called Inference Stack ( INLINEFORM0 ) to hold query and its state information for RL. LiLi always processes stack top ( INLINEFORM1 [top]). The clues from the user get stored in INLINEFORM2 on top of the query during strategy execution and processed first. Thus, the prediction model for INLINEFORM3 is learned before performing inference on query, transforming OKBC to a KBC problem. Table 1 shows the parameters of LiLi used in the following sections."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"",
"",
"",
"",
"",
""
],
"question": [
"Do they report results only on English data?",
"How much better than the baseline is LiLi?",
"What baseline is used in the experiments?",
"In what way does LiLi imitate how humans acquire knowledge and perform inference during an interactive conversation?",
"What metrics are used to establish that this makes chatbots more knowledgeable and better at learning and conversation? ",
"What are the components of the general knowledge learning engine?"
],
"question_id": [
"cb196725edc9cdb2c54b72364f3bbf7c76471490",
"286078813136943dfafb5155ee15d2429e7601d9",
"8f16dc7d7be0d284069841e456ebb2c69575b32b",
"a7d020120a45c39bee624f65443e09b895c10533",
"585626d18a20d304ae7df228c2128da542d248ff",
"bfc2dc913e7b78f3bd45e5449d71383d0aa4a890"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"",
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: An example of interactive inference and learning. Note that LiLi only works with triples. Each triple above is assumed to be extracted from the sentence after it.",
"Table 2: State bits and their meanings.",
"Table 3: Actions and their descriptions.",
"Table 4: Dataset statistics [kwn = known, unk = unknown]",
"Table 5: Inference strategies formulated by LiLi (ordered by frequency).",
"Table 6: Comparison of predictive performance of various versions of LiLi [kwn = known, unk = unknown, all = overall].",
"Table 8: Performance of LiLi on user’s responses."
],
"file": [
"3-Figure1-1.png",
"5-Table2-1.png",
"5-Table3-1.png",
"7-Table4-1.png",
"8-Table5-1.png",
"8-Table6-1.png",
"9-Table8-1.png"
]
} | [
"How much better than the baseline is LiLi?",
"What are the components of the general knowledge learning engine?"
] | [
[
"1802.06024-Experiments-11",
"1802.06024-Experiments-10",
"1802.06024-Experiments-8",
"1802.06024-Results and Analysis-0",
"1802.06024-Experiments-6",
"1802.06024-Experiments-7",
"1802.06024-8-Table6-1.png",
"1802.06024-Experiments-9"
],
[
"1802.06024-Components of LiLi-1",
"1802.06024-Components of LiLi-2",
"1802.06024-Components of LiLi-0",
"1802.06024-Introduction-5"
]
] | [
"In case of Freebase knowledge base, LiLi model had better F1 score than the single model by 0.20 , 0.01, 0.159 for kwn, unk, and all test Rel type. The values for WordNet are 0.25, 0.1, 0.2. \n",
"Answer with content missing: (list)\nLiLi should have the following capabilities:\n1. to formulate an inference strategy for a given query that embeds processing and interactive actions.\n2. to learn interaction behaviors (deciding what to ask and when to ask the user).\n3. to leverage the acquired knowledge in the current and future inference process.\n4. to perform 1, 2 and 3 in a lifelong manner for continuous knowledge learning."
] | 33 |
1809.00530 | Adaptive Semi-supervised Learning for Cross-domain Sentiment Classification | We consider the cross-domain sentiment classification problem, where a sentiment classifier is to be learned from a source domain and to be generalized to a target domain. Our approach explicitly minimizes the distance between the source and the target instances in an embedded feature space. With the difference between source and target minimized, we then exploit additional information from the target domain by consolidating the idea of semi-supervised learning, for which, we jointly employ two regularizations -- entropy minimization and self-ensemble bootstrapping -- to incorporate the unlabeled target data for classifier refinement. Our experimental results demonstrate that the proposed approach can better leverage unlabeled data from the target domain and achieve substantial improvements over baseline methods in various experimental settings. | {
"paragraphs": [
[
"In practice, it is often difficult and costly to annotate sufficient training data for diverse application domains on-the-fly. We may have sufficient labeled data in an existing domain (called the source domain), but very few or no labeled data in a new domain (called the target domain). This issue has motivated research on cross-domain sentiment classification, where knowledge in the source domain is transferred to the target domain in order to alleviate the required labeling effort.",
"One key challenge of domain adaptation is that data in the source and target domains are drawn from different distributions. Thus, adaptation performance will decline with an increase in distribution difference. Specifically, in sentiment analysis, reviews of different products have different vocabulary. For instance, restaurants reviews would contain opinion words such as “tender”, “tasty”, or “undercooked” and movie reviews would contain “thrilling”, “horrific”, or “hilarious”. The intersection between these two sets of opinion words could be small which makes domain adaptation difficult.",
"Several techniques have been proposed for addressing the problem of domain shifting. The aim is to bridge the source and target domains by learning domain-invariant feature representations so that a classifier trained on a source domain can be adapted to another target domain. In cross-domain sentiment classification, many works BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 utilize a key intuition that domain-specific features could be aligned with the help of domain-invariant features (pivot features). For instance, “hilarious” and “tasty” could be aligned as both of them are relevant to “good”.",
"Despite their promising results, these works share two major limitations. First, they highly depend on the heuristic selection of pivot features, which may be sensitive to different applications. Thus the learned new representations may not effectively reduce the domain difference. Furthermore, these works only utilize the unlabeled target data for representation learning while the sentiment classifier was solely trained on the source domain. There have not been many studies on exploiting unlabeled target data for refining the classifier, even though it may contain beneficial information. How to effectively leverage unlabeled target data still remains an important challenge for domain adaptation.",
"In this work, we argue that the information from unlabeled target data is beneficial for domain adaptation and we propose a novel Domain Adaptive Semi-supervised learning framework (DAS) to better exploit it. Our main intuition is to treat the problem as a semi-supervised learning task by considering target instances as unlabeled data, assuming the domain distance can be effectively reduced through domain-invariant representation learning. Specifically, the proposed approach jointly performs feature adaptation and semi-supervised learning in a multi-task learning setting. For feature adaptation, it explicitly minimizes the distance between the encoded representations of the two domains. On this basis, two semi-supervised regularizations – entropy minimization and self-ensemble bootstrapping – are jointly employed to exploit unlabeled target data for classifier refinement.",
"We evaluate our method rigorously under multiple experimental settings by taking label distribution and corpus size into consideration. The results show that our model is able to obtain significant improvements over strong baselines. We also demonstrate through a series of analysis that the proposed method benefits greatly from incorporating unlabeled target data via semi-supervised learning, which is consistent with our motivation. Our datasets and source code can be obtained from https://github.com/ruidan/DAS."
],
[
"Domain Adaptation: The majority of feature adaptation methods for sentiment analysis rely on a key intuition that even though certain opinion words are completely distinct for each domain, they can be aligned if they have high correlation with some domain-invariant opinion words (pivot words) such as “excellent” or “terrible”. Blitzer et al. ( BIBREF0 ) proposed a method based on structural correspondence learning (SCL), which uses pivot feature prediction to induce a projected feature space that works well for both the source and the target domains. The pivot words are selected in a way to cover common domain-invariant opinion words. Subsequent research aims to better align the domain-specific words BIBREF1 , BIBREF5 , BIBREF3 such that the domain discrepancy could be reduced. More recently, Yu and Jiang ( BIBREF4 ) borrow the idea of pivot feature prediction from SCL and extend it to a neural network-based solution with auxiliary tasks. In their experiment, substantial improvement over SCL has been observed due to the use of real-valued word embeddings. Unsupervised representation learning with deep neural networks (DNN) such as denoising autoencoders has also been explored for feature adaptation BIBREF6 , BIBREF7 , BIBREF8 . It has been shown that DNNs could learn transferable representations that disentangle the underlying factors of variation behind data samples.",
"Although the aforementioned methods aim to reduce the domain discrepancy, they do not explicitly minimize the distance between distributions, and some of them highly rely on the selection of pivot features. In our method, we formally construct an objective for this purpose. Similar ideas have been explored in many computer vision problems, where the representations of the underlying domains are encouraged to be similar through explicit objectives BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 such as maximum mean discrepancy (MMD) BIBREF14 . In NLP tasks, Li et al. ( BIBREF15 ) and Chen et al. ( BIBREF16 ) both proposed using adversarial training framework for reducing domain difference. In their model, a sub-network is added as a domain discriminator while deep features are learned to confuse the discriminator. The feature adaptation component in our model shares similar intuition with MMD and adversary training. We will show a detailed comparison with them in our experiments.",
"Semi-supervised Learning: We attempt to treat domain adaptation as a semi-supervised learning task by considering the target instances as unlabeled data. Some efforts have been initiated on transfer learning from unlabeled data BIBREF17 , BIBREF18 , BIBREF19 . In our model, we reduce the domain discrepancy by feature adaptation, and thereafter adopt semi-supervised learning techniques to learn from unlabeled data. Primarily motivated by BIBREF20 and BIBREF21 , we employed entropy minimization and self-ensemble bootstrapping as regularizations to incorporate unlabeled data. Our experimental results show that both methods are effective when jointly trained with the feature adaptation objective, which confirms to our motivation."
],
[
"We conduct most of our experiments under an unsupervised domain adaptation setting, where we have no labeled data from the target domain. Consider two sets INLINEFORM0 and INLINEFORM1 . INLINEFORM2 is from the source domain with INLINEFORM3 labeled examples, where INLINEFORM4 is a one-hot vector representation of sentiment label and INLINEFORM5 denotes the number of classes. INLINEFORM6 is from the target domain with INLINEFORM7 unlabeled examples. INLINEFORM8 denotes the total number of training documents including both labeled and unlabeled. We aim to learn a sentiment classifier from INLINEFORM13 and INLINEFORM14 such that the classifier would work well on the target domain. We also present some results under a setting where we assume that a small number of labeled target examples are available (see Figure FIGREF27 ).",
"For the proposed model, we denote INLINEFORM0 parameterized by INLINEFORM1 as a neural-based feature encoder that maps documents from both domains to a shared feature space, and INLINEFORM2 parameterized by INLINEFORM3 as a fully connected layer with softmax activation serving as the sentiment classifier. We aim to learn feature representations that are domain-invariant and at the same time discriminative on both domains, thus we simultaneously consider three factors in our objective: (1) minimize the classification error on the labeled source examples; (2) minimize the domain discrepancy; and (3) leverage unlabeled data via semi-supervised learning.",
"Suppose we already have the encoded features of documents INLINEFORM0 (see Section SECREF10 ), the objective function for purpose (1) is thus the cross entropy loss on the labeled source examples DISPLAYFORM0 ",
"where INLINEFORM0 denotes the predicted label distribution. In the following subsections, we will explain how to perform feature adaptation and domain adaptive semi-supervised learning in details for purpose (2) and (3) respectively."
],
[
"Unlike prior works BIBREF0 , BIBREF4 , our method does not attempt to align domain-specific words through pivot words. In our preliminary experiments, we found that word embeddings pre-trained on a large corpus are able to adequately capture this information. As we will later show in our experiments, even without adaptation, a naive neural network classifier with pre-trained word embeddings can already achieve reasonably good results.",
"We attempt to explicitly minimize the distance between the source and target feature representations ( INLINEFORM0 and INLINEFORM1 ). A few methods from literature can be applied such as Maximum Mean Discrepancy (MMD) BIBREF14 or adversary training BIBREF15 , BIBREF16 . The main idea of MMD is to estimate the distance between two distributions as the distance between sample means of the projected embeddings in Hilbert space. MMD is implicitly computed through a characteristic kernel, which is used to ensure that the sample mean is injective, leading to the MMD being zero if and only if the distributions are identical. In our implementation, we skip the mapping procedure induced by a characteristic kernel for simplifying the computation and learning. We simply estimate the distribution distance as the distance between the sample means in the current embedding space. Although this approximation cannot preserve all statistical features of the underlying distributions, we find it performs comparably to MMD on our problem. The following equations formally describe the feature adaptation loss INLINEFORM2 : DISPLAYFORM0 ",
" INLINEFORM0 normalization is applied on the mean representations INLINEFORM1 and INLINEFORM2 , rescaling the vectors such that all entries sum to 1. We adopt a symmetric version of KL divergence BIBREF12 as the distance function. Given two distribution vectors INLINEFORM3 , INLINEFORM4 ."
],
[
"We attempt to exploit the information in target data through semi-supervised learning objectives, which are jointly trained with INLINEFORM0 and INLINEFORM1 . Normally, to incorporate target data, we can minimize the cross entropy loss between the true label distributions INLINEFORM2 and the predicted label distributions INLINEFORM3 over target samples. The challenge here is that INLINEFORM4 is unknown, and thus we attempt to estimate it via semi-supervised learning. We use entropy minimization and bootstrapping for this purpose. We will later show in our experiments that both methods are effective, and jointly employing them overall yields the best results.",
"Entropy Minimization: In this method, INLINEFORM0 is estimated as the predicted label distribution INLINEFORM1 , which is a function of INLINEFORM2 and INLINEFORM3 . The loss can thus be written as DISPLAYFORM0 ",
"Assume the domain discrepancy can be effectively reduced through feature adaptation, by minimizing the entropy penalty, training of the classifier is influenced by the unlabeled target data and will generally maximize the margins between the target examples and the decision boundaries, increasing the prediction confidence on the target domain.",
"Self-ensemble Bootstrapping: Another way to estimate INLINEFORM0 corresponds to bootstrapping. The idea is to estimate the unknown labels as the predictions of the model learned from the previous round of training. Bootstrapping has been explored for domain adaptation in previous works BIBREF18 , BIBREF19 . However, in their methods, domain discrepancy was not explicitly minimized via feature adaptation. Applying bootstrapping or other semi-supervised learning techniques in this case may worsen the results as the classifier can perform quite bad on the target data.",
"[t] Pseudocode for training DAS INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 INLINEFORM4 = ensembling momentum, INLINEFORM5 INLINEFORM6 = weight ramp-up function INLINEFORM7 INLINEFORM8 INLINEFORM9 each minibatch INLINEFORM10 , INLINEFORM11 , INLINEFORM12 in",
" INLINEFORM0 , INLINEFORM1 , INLINEFORM2 compute loss INLINEFORM3 on INLINEFORM4 compute loss INLINEFORM5 on INLINEFORM6 compute loss INLINEFORM7 on INLINEFORM8 compute loss INLINEFORM9 on INLINEFORM10 INLINEFORM11 ",
"update network parameters INLINEFORM0 , for INLINEFORM1 INLINEFORM2 INLINEFORM3 ",
"Inspired by the ensembling method proposed in BIBREF21 , we estimate INLINEFORM0 by forming ensemble predictions of labels during training, using the outputs on different training epochs. The loss is formulated as follows: DISPLAYFORM0 ",
"where INLINEFORM0 denotes the estimated labels computed on the ensemble predictions from different epochs. The loss is applied on all documents. It serves for bootstrapping on the unlabeled target data, and it also serves as a regularization that encourages the network predictions to be consistent in different training epochs. INLINEFORM1 is jointly trained with INLINEFORM2 , INLINEFORM3 , and INLINEFORM4 . Algorithm SECREF6 illustrates the overall training process of the proposed domain adaptive semi-supervised learning (DAS) framework.",
"In Algorithm SECREF6 , INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 are weights to balance the effects of INLINEFORM3 , INLINEFORM4 , and INLINEFORM5 respectively. INLINEFORM6 and INLINEFORM7 are constant hyper-parameters. We set INLINEFORM8 as a Gaussian curve to ramp up the weight from 0 to INLINEFORM9 . This is to ensure the ramp-up of the bootstrapping loss component is slow enough in the beginning of the training. After each training epoch, we compute INLINEFORM10 which denotes the predictions made by the network in current epoch, and then the ensemble prediction INLINEFORM11 is updated as a weighted average of the outputs from previous epochs and the current epoch, with recent epochs having larger weight. For generating estimated labels INLINEFORM12 , INLINEFORM13 is converted to a one-hot vector where the entry with the maximum value is set to one and other entries are set to zeros. The self-ensemble bootstrapping is a generalized version of bootstrappings that only use the outputs from the previous round of training BIBREF18 , BIBREF19 . The ensemble prediction is likely to be closer to the correct, unknown labels of the target data."
],
[
"We have left the feature encoder INLINEFORM0 unspecified, for which, a few options can be considered. In our implementation, we adopt a one-layer CNN structure from previous works BIBREF22 , BIBREF4 , as it has been demonstrated to work well for sentiment classification tasks. Given a review document INLINEFORM1 consisting of INLINEFORM2 words, we begin by associating each word with a continuous word embedding BIBREF23 INLINEFORM3 from an embedding matrix INLINEFORM4 , where INLINEFORM5 is the vocabulary size and INLINEFORM6 is the embedding dimension. INLINEFORM7 is jointly updated with other network parameters during training. Given a window of dense word embeddings INLINEFORM8 , the convolution layer first concatenates these vectors to form a vector INLINEFORM9 of length INLINEFORM10 and then the output vector is computed by Equation ( EQREF11 ): DISPLAYFORM0 ",
" INLINEFORM0 , INLINEFORM1 is the parameter set of the encoder INLINEFORM2 and is shared across all windows of the sequence. INLINEFORM3 is an element-wise non-linear activation function. The convolution operation can capture local contextual dependencies of the input sequence and the extracted feature vectors are similar to INLINEFORM4 -grams. After the convolution operation is applied to the whole sequence, we obtain a list of hidden vectors INLINEFORM5 . A max-over-time pooling layer is applied to obtain the final vector representation INLINEFORM6 of the input document."
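As a rough sketch of such an encoder (PyTorch, not the paper's implementation; the window size of 3 and the 300-dimensional embeddings and hidden layer follow the settings quoted later in the experimental section, everything else is an assumption):

```python
import torch
import torch.nn as nn

class CNNEncoder(nn.Module):
    """One-layer CNN encoder: convolution over word embeddings + max-over-time pooling."""
    def __init__(self, vocab_size, emb_dim=300, hidden_dim=300, window=3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        # Conv1d over the time axis is equivalent to concatenating each window
        # of embeddings and applying a shared linear map followed by the nonlinearity.
        self.conv = nn.Conv1d(emb_dim, hidden_dim, kernel_size=window, padding=window // 2)

    def forward(self, token_ids):            # token_ids: (batch, seq_len) LongTensor
        x = self.embedding(token_ids)         # (batch, seq_len, emb_dim)
        x = x.transpose(1, 2)                 # (batch, emb_dim, seq_len)
        h = torch.relu(self.conv(x))          # (batch, hidden_dim, seq_len)
        return h.max(dim=2).values            # max-over-time pooling -> (batch, hidden_dim)
```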
],
[
"Existing benchmark datasets such as the Amazon benchmark BIBREF0 typically remove reviews with neutral labels in both domains. This is problematic as the label information of the target domain is not accessible in an unsupervised domain adaptation setting. Furthermore, removing neutral instances may bias the dataset favorably for max-margin-based algorithms like ours, since the resulting dataset has all uncertain labels removed, leaving only high confidence examples. Therefore, we construct new datasets by ourselves. The results on the original Amazon benchmark is qualitatively similar, and we present them in Appendix SECREF6 for completeness since most of previous works reported results on it.",
"Small-scale datasets: Our new dataset was derived from the large-scale Amazon datasets released by McAuley et al. ( BIBREF24 ). It contains four domains: Book (BK), Electronics (E), Beauty (BT), and Music (M). Each domain contains two datasets. Set 1 contains 6000 instances with exactly balanced class labels, and set 2 contains 6000 instances that are randomly sampled from the large dataset, preserving the original label distribution, which we believe better reflects the label distribution in real life. The examples in these two sets do not overlap. Detailed statistics of the generated datasets are given in Table TABREF9 .",
"In all our experiments on the small-scale datasets, we use set 1 of the source domain as the only source with sentiment label information during training, and we evaluate the trained model on set 1 of the target domain. Since we cannot control the label distribution of unlabeled data during training, we consider two different settings:",
"Setting (1): Only set 1 of the target domain is used as the unlabeled set. This tells us how the method performs in a condition when the target domain has a close-to-balanced label distribution. As we also evaluate on set 1 of the target domain, this is also considered as a transductive setting.",
"Setting (2): Set 2 from both the source and target domains are used as unlabeled sets. Since set 2 is directly sampled from millions of reviews, it better reflects real-life sentiment distribution.",
"Large-scale datasets: We further conduct experiments on four much larger datasets: IMDB (I), Yelp2014 (Y), Cell Phone (C), and Baby (B). IMDB and Yelp2014 were previously used in BIBREF25 , BIBREF26 . Cell phone and Baby are from the large-scale Amazon dataset BIBREF24 , BIBREF27 . Detailed statistics are summarized in Table TABREF9 . We keep all reviews in the original datasets and consider a transductive setting where all target examples are used for both training (without label information) and evaluation. We perform sampling to balance the classes of labeled source data in each minibatch INLINEFORM3 during training."
],
[
"Ideally, the development set should be drawn from the same distribution as the test set. However, under the unsupervised domain adaptation setting, we do not have any labeled target data at training phase which could be used as development set. In all of our experiments, for each pair of domains, we instead sample 1000 examples from the training set of the source domain as development set. We train the network for a fixed number of epochs, and the model with the minimum classification error on this development set is saved for evaluation. This approach works well on most of the problems since the target domain is supposed to behave like the source domain if the domain difference is effectively reduced.",
"Another problem is how to select the values for hyper-parameters. If we tune INLINEFORM0 and INLINEFORM1 directly on the development set from the source domain, most likely both of them will be set to 0, as unlabeled target data is not helpful for improving in-domain accuracy of the source domain. Other neural network models also have the same problem for hyper-parameter tuning. Therefore, our strategy is to use the development set from the target domain to optimize INLINEFORM2 and INLINEFORM3 for one problem (e.g., we only do this on E INLINEFORM4 BK), and fix their values on the other problems. This setting assumes that we have at least two labeled domains such that we can optimize the hyper-parameters, and then we fix them for other new unlabeled domains to transfer to."
],
[
"We initialize word embeddings using the 300-dimension GloVe vectors supplied by Pennington et al., ( BIBREF28 ), which were trained on 840 billion tokens from the Common Crawl. For each pair of domains, the vocabulary consists of the top 10000 most frequent words. For words in the vocabulary but not present in the pre-trained embeddings, we randomly initialize them.",
"We set hyper-parameters of the CNN encoder following previous works BIBREF22 , BIBREF4 without specific tuning on our datasets. The window size is set to 3 and the size of the hidden layer is set to 300. The nonlinear activation function is Relu. For regularization, we also follow their settings and employ dropout with probability set to 0.5 on INLINEFORM0 before feeding it to the output layer INLINEFORM1 , and constrain the INLINEFORM2 -norm of the weight vector INLINEFORM3 , setting its max norm to 3.",
"On the small-scale datasets and the Aamzon benchmark, INLINEFORM0 and INLINEFORM1 are set to 200 and 1, respectively, tuned on the development set of task E INLINEFORM2 BK under setting 1. On the large-scale datasets, INLINEFORM3 and INLINEFORM4 are set to 500 and 0.2, respectively, tuned on I INLINEFORM5 Y. We use a Gaussian curve INLINEFORM6 to ramp up the weight of the bootstrapping loss INLINEFORM7 from 0 to INLINEFORM8 , where INLINEFORM9 denotes the maximum number of training epochs. We train 30 epochs for all experiments. We set INLINEFORM10 to 3 and INLINEFORM11 to 0.5 for all experiments.",
"The batch size is set to 50 on the small-scale datasets and the Amazon benchmark. We increase the batch size to 250 on the large-scale datasets to reduce the number of iterations. RMSProp optimizer with learning rate set to 0.0005 is used for all experiments."
],
[
"We compare with the following baselines:",
"(1) Naive: A non-domain-adaptive baseline with bag-of-words representations and SVM classifier trained on the source domain.",
"(2) mSDA BIBREF7 : This is the state-of-the-art method based on discrete input features. Top 1000 bag-of-words features are kept as pivot features. We set the number of stacked layers to 3 and the corruption probability to 0.5.",
"(3) NaiveNN: This is a non-domain-adaptive CNN trained on source domain, which is a variant of our model by setting INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 to zeros.",
"(4) AuxNN BIBREF4 : This is a neural model that exploits auxiliary tasks, which has achieved state-of-the-art results on cross-domain sentiment classification. The sentence encoder used in this model is the same as ours.",
"(5) ADAN BIBREF16 : This method exploits adversarial training to reduce representation difference between domains. The original paper uses a simple feedforward network as encoder. For fair comparison, we replace it with our CNN-based encoder. We train 5 iterations on the discriminator per iteration on the encoder and sentiment classifier as suggested in their paper.",
"(6) MMD: MMD has been widely used for minimizing domain discrepancy on images. In those works BIBREF9 , BIBREF13 , variants of deep CNNs are used for encoding images and the MMDs of multiple layers are jointly minimized. In NLP, adding more layers of CNNs may not be very helpful and thus those models from image-related tasks can not be directly applied to our problem. To compare with MMD-based method, we train a model that jointly minimize the classification loss INLINEFORM0 on the source domain and MMD between INLINEFORM1 and INLINEFORM2 . For computing MMD, we use a Gaussian RBF which is a common choice for characteristic kernel.",
"In addition to the above baselines, we also show results of different variants of our model. DAS as shown in Algorithm SECREF6 denotes our full model. DAS-EM denotes the model with only entropy minimization for semi-supervised learning (set INLINEFORM0 ). DAS-SE denotes the model with only self-ensemble bootstrapping for semi-supervised learning (set INLINEFORM1 ). FANN (feature-adaptation neural network) denotes the model without semi-supervised learning performed (set both INLINEFORM2 and INLINEFORM3 to zeros)."
],
[
"Figure FIGREF17 shows the comparison of adaptation results (see Appendix SECREF7 for the exact numerical numbers). We report classification accuracy on the small-scale dataset. For the large-scale dataset, macro-F1 is instead used since the label distribution in the test set is extremely unbalanced. Key observations are summarized as follows. (1) Both DAS-EM and DAS-SE perform better in most cases compared with ADAN, MDD, and FANN, in which only feature adaptation is performed. This demonstrates the effectiveness of the proposed domain adaptive semi-supervised learning framework. DAS-EM is more effective than DAS-SE in most cases, and the full model DAS with both techniques jointly employed overall has the best performance. (2) When comparing the two settings on the small-scale dataset, all domain-adaptive methods generally perform better under setting 1. In setting 1, the target examples are balanced in classes, which can provide more diverse opinion-related features. However, when considering unsupervised domain adaptation, we should not presume the label distribution of the unlabeled data. Thus, it is necessary to conduct experiments using datasets that reflect real-life sentiment distribution as what we did on setting2 and the large-scale dataset. Unfortunately, this is ignored by most of previous works. (3) Word-embeddings are very helpful, as we can see even NaiveNN can substantially outperform mSDA on most tasks.",
"To see the effect of semi-supervised learning alone, we also conduct experiments by setting INLINEFORM0 to eliminate the effect of feature adaptation. Both entropy minimization and bootstrapping perform very badly in this setting. Entropy minimization gives almost random predictions with accuracy below 0.4, and the results of bootstrapping are also much lower compared to NaiveNN. This suggests that the feature adaptation component is essential. Without it, the learned target representations are less meaningful and discriminative. Applying semi-supervised learning in this case is likely to worsen the results."
],
[
"In Figure FIGREF23 , we show the change of accuracy with respect to the percentage of unlabeled data used for training on three particular problems under setting 1. The value at INLINEFORM0 denotes the accuracies of NaiveNN which does not utilize any target data. For DAS, we observe a nonlinear increasing trend where the accuracy quickly improves at the beginning, and then gradually stabilizes. For other methods, this trend is less obvious, and adding more unlabeled data sometimes even worsen the results. This finding again suggests that the proposed approach can better exploit the information from unlabeled data.",
"We also conduct experiments under a setting with a small number of labeled target examples available. Figure FIGREF27 shows the change of accuracy with respect to the number of labeled target examples added for training. We can observe that DAS is still more effective under this setting, while the performance differences to other methods gradually decrease with the increasing number of labeled target examples."
],
[
"In this subsection, we aim to better understand DAS by analyzing sentiment-related CNN filters. To do that, 1) we first select a list of the most related CNN filters for predicting each sentiment label (positive, negative neutral). Those filters can be identified according to the learned weights INLINEFORM0 of the output layer INLINEFORM1 . Higher weight indicates stronger relatedness. 2) Recall that in our implementation, each CNN filter has a window size of 3 with Relu activation. We can thus represent each selected filter as a ranked list of trigrams with highest activation values.",
"We analyze the CNN filters learned by NaiveNN, FANN and DAS respectively on task E INLINEFORM0 BT under setting 1. We focus on E INLINEFORM1 BT for study because electronics and beauty are very different domains and each of them has a diverse set of domain-specific sentiment expressions. For each method, we identify the top 10 most related filters for each sentiment label, and extract the top trigrams of each selected filter on both source and target domains. Since labeled source examples are used for training, we find the filters learned by the three methods capture similar expressions on the source domain, containing both domain-invariant and domain-specific trigrams. On the target domain, DAS captures more target-specific expressions compared to the other two methods. Due to space limitation, we only present a small subset of positive-sentiment-related filters in Table TABREF34 . The complete results are provided in Appendix SECREF8 . From Table TABREF34 , we can observe that the filters learned by NaiveNN are almost unable to capture target-specific sentiment expressions, while FANN is able to capture limited target-specific words such as “clean” and “scent”. The filters learned by DAS are more domain-adaptive, capturing diverse sentiment expressions in the target domain."
],
[
"In this work, we propose DAS, a novel framework that jointly performs feature adaptation and semi-supervised learning. We have demonstrated through multiple experiments that DAS can better leverage unlabeled data, and achieve substantial improvements over baseline methods. We have also shown that feature adaptation is an essential component, without which, semi-supervised learning is not able to function properly. The proposed framework could be potentially adapted to other domain adaptation tasks, which is the focus of our future studies."
],
[
"Most previous works BIBREF0 , BIBREF1 , BIBREF6 , BIBREF7 , BIBREF29 carried out experiments on the Amazon benchmark released by Blitzer et al. ( BIBREF0 ). The dataset contains 4 different domains: Book (B), DVDs (D), Electronics (E), and Kitchen (K). Following their experimental settings, we consider the binary classification task to predict whether a review is positive or negative on the target domain. Each domain consists of 1000 positive and 1000 negative reviews respectively. We also allow 4000 unlabeled reviews to be used for both the source and the target domains, of which the positive and negative reviews are balanced as well, following the settings in previous works. We construct 12 cross-domain sentiment classification tasks and split the labeled data in each domain into a training set of 1600 reviews and a test set of 400 reviews. The classifier is trained on the training set of the source domain and is evaluated on the test set of the target domain. The comparison results are shown in Table TABREF37 ."
],
[
"Due to space limitation, we only show results in figures in the paper. All numerical numbers used for plotting Figure FIGREF17 are presented in Table TABREF38 . We can observe that DAS-EM, DAS-SE, and DAS all achieve substantial improvements over baseline methods under different settings."
],
[
"As mentioned in Section SECREF36 , we conduct CNN filter analysis on NaiveNN, FANN, and DAS. For each method, we identify the top 10 most related filters for positive, negative, neutral sentiment labels respectively, and then represent each selected filter as a ranked list of trigrams with the highest activation values on it. Table TABREF39 , TABREF40 , TABREF41 in the following pages illustrate the trigrams from the target domain (beauty) captured by the selected filters learned on E INLINEFORM0 BT for each method.",
"We can observe that compared to NaiveNN and FANN, DAS is able to capture a more diverse set of relevant sentiment expressions on the target domain for each sentiment label. This observation is consistent with our motivation. Since NaiveNN, FANN and other baseline methods solely train the sentiment classifier on the source domain, the learned encoder is not able to produce discriminative features on the target domain. DAS addresses this problem by refining the classifier on the target domain with semi-supervised learning, and the overall objective forces the encoder to learn feature representations that are not only domain-invariant but also discriminative on both domains."
]
],
"section_name": [
"Introduction",
"Related Work",
"Notations and Model Overview",
"Feature Adaptation",
"Domain Adaptive Semi-supervised Learning (DAS)",
"CNN Encoder Implementation",
"Datasets and Experimental Settings",
"Selection of Development Set",
"Training Details and Hyper-parameters",
"Models for Comparison",
"Main Results",
"Further Analysis",
"CNN Filter Analysis",
"Conclusion",
"Results on Amazon Benchmark",
"Numerical Results of Figure ",
"CNN Filter Analysis Full Results"
]
} | {
"answers": [
{
"annotation_id": [
"6653f27a56e7d1a2c432621d39bbf70792914f0b",
"6b411ea8054b3c58761800efbe2e180cfcf0a002"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Summary of datasets."
],
"extractive_spans": [],
"free_form_answer": "719313",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Summary of datasets."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Large-scale datasets: We further conduct experiments on four much larger datasets: IMDB (I), Yelp2014 (Y), Cell Phone (C), and Baby (B). IMDB and Yelp2014 were previously used in BIBREF25 , BIBREF26 . Cell phone and Baby are from the large-scale Amazon dataset BIBREF24 , BIBREF27 . Detailed statistics are summarized in Table TABREF9 . We keep all reviews in the original datasets and consider a transductive setting where all target examples are used for both training (without label information) and evaluation. We perform sampling to balance the classes of labeled source data in each minibatch INLINEFORM3 during training.",
"Small-scale datasets: Our new dataset was derived from the large-scale Amazon datasets released by McAuley et al. ( BIBREF24 ). It contains four domains: Book (BK), Electronics (E), Beauty (BT), and Music (M). Each domain contains two datasets. Set 1 contains 6000 instances with exactly balanced class labels, and set 2 contains 6000 instances that are randomly sampled from the large dataset, preserving the original label distribution, which we believe better reflects the label distribution in real life. The examples in these two sets do not overlap. Detailed statistics of the generated datasets are given in Table TABREF9 .",
"FLOAT SELECTED: Table 1: Summary of datasets."
],
"extractive_spans": [],
"free_form_answer": "Book, Electronics, Beauty and Music each have 6000, IMDB 84919, Yelp 231163, Cell Phone 194792 and Baby 160792 labeled data.",
"highlighted_evidence": [
"Large-scale datasets: We further conduct experiments on four much larger datasets: IMDB (I), Yelp2014 (Y), Cell Phone (C), and Baby (B). IMDB and Yelp2014 were previously used in BIBREF25 , BIBREF26 . Cell phone and Baby are from the large-scale Amazon dataset BIBREF24 , BIBREF27 . Detailed statistics are summarized in Table TABREF9 .",
"Small-scale datasets: Our new dataset was derived from the large-scale Amazon datasets released by McAuley et al. ( BIBREF24 ). It contains four domains: Book (BK), Electronics (E), Beauty (BT), and Music (M). Each domain contains two datasets. Set 1 contains 6000 instances with exactly balanced class labels, and set 2 contains 6000 instances that are randomly sampled from the large dataset, preserving the original label distribution, which we believe better reflects the label distribution in real life. The examples in these two sets do not overlap. Detailed statistics of the generated datasets are given in Table TABREF9 .",
"FLOAT SELECTED: Table 1: Summary of datasets."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"ded38c0efddb9da4061ffe6ce1f7fa7876c06649",
"f1c9ea2601bdb6249a0805085111abe8be0778d2"
],
"answer": [
{
"evidence": [
"For the proposed model, we denote INLINEFORM0 parameterized by INLINEFORM1 as a neural-based feature encoder that maps documents from both domains to a shared feature space, and INLINEFORM2 parameterized by INLINEFORM3 as a fully connected layer with softmax activation serving as the sentiment classifier. We aim to learn feature representations that are domain-invariant and at the same time discriminative on both domains, thus we simultaneously consider three factors in our objective: (1) minimize the classification error on the labeled source examples; (2) minimize the domain discrepancy; and (3) leverage unlabeled data via semi-supervised learning.",
"We have left the feature encoder INLINEFORM0 unspecified, for which, a few options can be considered. In our implementation, we adopt a one-layer CNN structure from previous works BIBREF22 , BIBREF4 , as it has been demonstrated to work well for sentiment classification tasks. Given a review document INLINEFORM1 consisting of INLINEFORM2 words, we begin by associating each word with a continuous word embedding BIBREF23 INLINEFORM3 from an embedding matrix INLINEFORM4 , where INLINEFORM5 is the vocabulary size and INLINEFORM6 is the embedding dimension. INLINEFORM7 is jointly updated with other network parameters during training. Given a window of dense word embeddings INLINEFORM8 , the convolution layer first concatenates these vectors to form a vector INLINEFORM9 of length INLINEFORM10 and then the output vector is computed by Equation ( EQREF11 ): DISPLAYFORM0"
],
"extractive_spans": [
"one-layer CNN structure from previous works BIBREF22 , BIBREF4"
],
"free_form_answer": "",
"highlighted_evidence": [
"For the proposed model, we denote INLINEFORM0 parameterized by INLINEFORM1 as a neural-based feature encoder that maps documents from both domains to a shared feature space, and INLINEFORM2 parameterized by INLINEFORM3 as a fully connected layer with softmax activation serving as the sentiment classifier.",
"We have left the feature encoder INLINEFORM0 unspecified, for which, a few options can be considered. In our implementation, we adopt a one-layer CNN structure from previous works BIBREF22 , BIBREF4 , as it has been demonstrated to work well for sentiment classification tasks."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We have left the feature encoder INLINEFORM0 unspecified, for which, a few options can be considered. In our implementation, we adopt a one-layer CNN structure from previous works BIBREF22 , BIBREF4 , as it has been demonstrated to work well for sentiment classification tasks. Given a review document INLINEFORM1 consisting of INLINEFORM2 words, we begin by associating each word with a continuous word embedding BIBREF23 INLINEFORM3 from an embedding matrix INLINEFORM4 , where INLINEFORM5 is the vocabulary size and INLINEFORM6 is the embedding dimension. INLINEFORM7 is jointly updated with other network parameters during training. Given a window of dense word embeddings INLINEFORM8 , the convolution layer first concatenates these vectors to form a vector INLINEFORM9 of length INLINEFORM10 and then the output vector is computed by Equation ( EQREF11 ): DISPLAYFORM0"
],
"extractive_spans": [
" one-layer CNN"
],
"free_form_answer": "",
"highlighted_evidence": [
"We have left the feature encoder INLINEFORM0 unspecified, for which, a few options can be considered. In our implementation, we adopt a one-layer CNN structure from previous works BIBREF22 , BIBREF4 , as it has been demonstrated to work well for sentiment classification tasks."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"81419a3603ab9056e05fb076731bf4249fe07689",
"9efba801f0cd35aba7d4c7a1a110977a57a60ba5"
],
"answer": [
{
"evidence": [
"We compare with the following baselines:",
"(1) Naive: A non-domain-adaptive baseline with bag-of-words representations and SVM classifier trained on the source domain.",
"(2) mSDA BIBREF7 : This is the state-of-the-art method based on discrete input features. Top 1000 bag-of-words features are kept as pivot features. We set the number of stacked layers to 3 and the corruption probability to 0.5.",
"(3) NaiveNN: This is a non-domain-adaptive CNN trained on source domain, which is a variant of our model by setting INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 to zeros.",
"(4) AuxNN BIBREF4 : This is a neural model that exploits auxiliary tasks, which has achieved state-of-the-art results on cross-domain sentiment classification. The sentence encoder used in this model is the same as ours.",
"(5) ADAN BIBREF16 : This method exploits adversarial training to reduce representation difference between domains. The original paper uses a simple feedforward network as encoder. For fair comparison, we replace it with our CNN-based encoder. We train 5 iterations on the discriminator per iteration on the encoder and sentiment classifier as suggested in their paper.",
"(6) MMD: MMD has been widely used for minimizing domain discrepancy on images. In those works BIBREF9 , BIBREF13 , variants of deep CNNs are used for encoding images and the MMDs of multiple layers are jointly minimized. In NLP, adding more layers of CNNs may not be very helpful and thus those models from image-related tasks can not be directly applied to our problem. To compare with MMD-based method, we train a model that jointly minimize the classification loss INLINEFORM0 on the source domain and MMD between INLINEFORM1 and INLINEFORM2 . For computing MMD, we use a Gaussian RBF which is a common choice for characteristic kernel."
],
"extractive_spans": [
"(1) Naive",
"(2) mSDA BIBREF7",
"(3) NaiveNN",
"(4) AuxNN BIBREF4",
"(5) ADAN BIBREF16",
"(6) MMD"
],
"free_form_answer": "",
"highlighted_evidence": [
"We compare with the following baselines:\n\n(1) Naive: A non-domain-adaptive baseline with bag-of-words representations and SVM classifier trained on the source domain.\n\n(2) mSDA BIBREF7 : This is the state-of-the-art method based on discrete input features. Top 1000 bag-of-words features are kept as pivot features. We set the number of stacked layers to 3 and the corruption probability to 0.5.\n\n(3) NaiveNN: This is a non-domain-adaptive CNN trained on source domain, which is a variant of our model by setting INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 to zeros.\n\n(4) AuxNN BIBREF4 : This is a neural model that exploits auxiliary tasks, which has achieved state-of-the-art results on cross-domain sentiment classification. The sentence encoder used in this model is the same as ours.\n\n(5) ADAN BIBREF16 : This method exploits adversarial training to reduce representation difference between domains. The original paper uses a simple feedforward network as encoder. For fair comparison, we replace it with our CNN-based encoder. We train 5 iterations on the discriminator per iteration on the encoder and sentiment classifier as suggested in their paper.\n\n(6) MMD: MMD has been widely used for minimizing domain discrepancy on images. In those works BIBREF9 , BIBREF13 , variants of deep CNNs are used for encoding images and the MMDs of multiple layers are jointly minimized. In NLP, adding more layers of CNNs may not be very helpful and thus those models from image-related tasks can not be directly applied to our problem. To compare with MMD-based method, we train a model that jointly minimize the classification loss INLINEFORM0 on the source domain and MMD between INLINEFORM1 and INLINEFORM2 . For computing MMD, we use a Gaussian RBF which is a common choice for characteristic kernel."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We compare with the following baselines:",
"(1) Naive: A non-domain-adaptive baseline with bag-of-words representations and SVM classifier trained on the source domain.",
"(2) mSDA BIBREF7 : This is the state-of-the-art method based on discrete input features. Top 1000 bag-of-words features are kept as pivot features. We set the number of stacked layers to 3 and the corruption probability to 0.5.",
"(3) NaiveNN: This is a non-domain-adaptive CNN trained on source domain, which is a variant of our model by setting INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 to zeros.",
"(4) AuxNN BIBREF4 : This is a neural model that exploits auxiliary tasks, which has achieved state-of-the-art results on cross-domain sentiment classification. The sentence encoder used in this model is the same as ours.",
"(5) ADAN BIBREF16 : This method exploits adversarial training to reduce representation difference between domains. The original paper uses a simple feedforward network as encoder. For fair comparison, we replace it with our CNN-based encoder. We train 5 iterations on the discriminator per iteration on the encoder and sentiment classifier as suggested in their paper.",
"(6) MMD: MMD has been widely used for minimizing domain discrepancy on images. In those works BIBREF9 , BIBREF13 , variants of deep CNNs are used for encoding images and the MMDs of multiple layers are jointly minimized. In NLP, adding more layers of CNNs may not be very helpful and thus those models from image-related tasks can not be directly applied to our problem. To compare with MMD-based method, we train a model that jointly minimize the classification loss INLINEFORM0 on the source domain and MMD between INLINEFORM1 and INLINEFORM2 . For computing MMD, we use a Gaussian RBF which is a common choice for characteristic kernel."
],
"extractive_spans": [
"non-domain-adaptive baseline with bag-of-words representations and SVM classifier",
"mSDA",
"non-domain-adaptive CNN trained on source domain",
"neural model that exploits auxiliary tasks",
"adversarial training to reduce representation difference between domains",
"variants of deep CNNs are used for encoding images and the MMDs of multiple layers are jointly minimized"
],
"free_form_answer": "",
"highlighted_evidence": [
"We compare with the following baselines:\n\n(1) Naive: A non-domain-adaptive baseline with bag-of-words representations and SVM classifier trained on the source domain.\n\n(2) mSDA BIBREF7 : This is the state-of-the-art method based on discrete input features. Top 1000 bag-of-words features are kept as pivot features. We set the number of stacked layers to 3 and the corruption probability to 0.5.\n\n(3) NaiveNN: This is a non-domain-adaptive CNN trained on source domain, which is a variant of our model by setting INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 to zeros.\n\n(4) AuxNN BIBREF4 : This is a neural model that exploits auxiliary tasks, which has achieved state-of-the-art results on cross-domain sentiment classification. The sentence encoder used in this model is the same as ours.\n\n(5) ADAN BIBREF16 : This method exploits adversarial training to reduce representation difference between domains. The original paper uses a simple feedforward network as encoder. For fair comparison, we replace it with our CNN-based encoder. We train 5 iterations on the discriminator per iteration on the encoder and sentiment classifier as suggested in their paper.\n\n(6) MMD: MMD has been widely used for minimizing domain discrepancy on images. In those works BIBREF9 , BIBREF13 , variants of deep CNNs are used for encoding images and the MMDs of multiple layers are jointly minimized. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"38ad7fcfadf518ab020a6a652d700e5fc8245925",
"535f4e068c10cfe5a3f0d9f2312b4fba169ba9d8"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Summary of datasets.",
"Most previous works BIBREF0 , BIBREF1 , BIBREF6 , BIBREF7 , BIBREF29 carried out experiments on the Amazon benchmark released by Blitzer et al. ( BIBREF0 ). The dataset contains 4 different domains: Book (B), DVDs (D), Electronics (E), and Kitchen (K). Following their experimental settings, we consider the binary classification task to predict whether a review is positive or negative on the target domain. Each domain consists of 1000 positive and 1000 negative reviews respectively. We also allow 4000 unlabeled reviews to be used for both the source and the target domains, of which the positive and negative reviews are balanced as well, following the settings in previous works. We construct 12 cross-domain sentiment classification tasks and split the labeled data in each domain into a training set of 1600 reviews and a test set of 400 reviews. The classifier is trained on the training set of the source domain and is evaluated on the test set of the target domain. The comparison results are shown in Table TABREF37 ."
],
"extractive_spans": [],
"free_form_answer": "Book, electronics, beauty, music, IMDB, Yelp, cell phone, baby, DVDs, kitchen",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Summary of datasets.",
"The dataset contains 4 different domains: Book (B), DVDs (D), Electronics (E), and Kitchen (K). "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Small-scale datasets: Our new dataset was derived from the large-scale Amazon datasets released by McAuley et al. ( BIBREF24 ). It contains four domains: Book (BK), Electronics (E), Beauty (BT), and Music (M). Each domain contains two datasets. Set 1 contains 6000 instances with exactly balanced class labels, and set 2 contains 6000 instances that are randomly sampled from the large dataset, preserving the original label distribution, which we believe better reflects the label distribution in real life. The examples in these two sets do not overlap. Detailed statistics of the generated datasets are given in Table TABREF9 .",
"In all our experiments on the small-scale datasets, we use set 1 of the source domain as the only source with sentiment label information during training, and we evaluate the trained model on set 1 of the target domain. Since we cannot control the label distribution of unlabeled data during training, we consider two different settings:"
],
"extractive_spans": [
"we use set 1 of the source domain as the only source with sentiment label information during training, and we evaluate the trained model on set 1 of the target domain",
"Book (BK), Electronics (E), Beauty (BT), and Music (M)"
],
"free_form_answer": "",
"highlighted_evidence": [
"It contains four domains: Book (BK), Electronics (E), Beauty (BT), and Music (M). Each domain contains two datasets. Set 1 contains 6000 instances with exactly balanced class labels, and set 2 contains 6000 instances that are randomly sampled from the large dataset, preserving the original label distribution, which we believe better reflects the label distribution in real life. The examples in these two sets do not overlap. Detailed statistics of the generated datasets are given in Table TABREF9 .\n\nIn all our experiments on the small-scale datasets, we use set 1 of the source domain as the only source with sentiment label information during training, and we evaluate the trained model on set 1 of the target domain."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"How many labels do the datasets have?",
"What is the architecture of the model?",
"What are the baseline methods?",
"What are the source and target domains?"
],
"question_id": [
"6aa2a1e2e3666f2b2a1f282d4cbdd1ca325eb9de",
"b46c0015a122ee5fb95c2a45691cb97f80de1bb6",
"5b7a4994bfdbf8882f391adf1cd2218dbc2255a0",
"9176d2ba1c638cdec334971c4c7f1bb959495a8e"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: Summary of datasets.",
"Figure 1: Performance comparison. Average results over 5 runs with random initializations are reported for each neural method. ∗ indicates that the proposed method (either of DAS, DAS-EM, DAS-SE) is significantly better than other baselines (baseline 1-6) with p < 0.05 based on one-tailed unpaired t-test.",
"Figure 2: Accuracy vs. percentage of unlabeled target training examples.",
"Figure 3: Accuracy vs. number of labeled target training examples.",
"Table 2: Comparison of the top trigrams (each column) from the target domain (beauty) captured by the 5 most positive-sentiment-related CNN filters learned on E→BT. ∗ denotes a padding.",
"Table 3: Accuracies on the Amazon benchmark. Average results over 5 runs with random initializations are reported for each neural method. ∗ indicates that the proposed method (DAS-EM, DAS-SE, DAS) is significantly better than other baselines with p < 0.05 based on one-tailed unpaired t-test.",
"Table 4: Performance comparison. Average results over 5 runs with random initializations are reported for each neural method. ∗ indicates that the proposed method (DAS, DAS-EM, DAS-SE) is significantly better than other baselines with p < 0.05 based on one-tailed unpaired t-test.",
"Table 5: Top 5 trigrams from the target domain (beauty) captured by the top 10 most positive-sentimentrelated CNN filters learned on E→BT. ∗ denotes a padding.",
"Table 6: Top 5 trigrams from the target domain (beauty) captured by the top 10 most negative-sentimentrelated CNN filters learned on E→BT. ∗ denotes a padding.",
"Table 7: Top 5 trigrams from the target domain (beauty) captured by the top 10 most neutral-sentimentrelated CNN filters learned on E→BT. ∗ denotes a padding."
],
"file": [
"5-Table1-1.png",
"7-Figure1-1.png",
"8-Figure2-1.png",
"8-Figure3-1.png",
"9-Table2-1.png",
"11-Table3-1.png",
"12-Table4-1.png",
"13-Table5-1.png",
"14-Table6-1.png",
"15-Table7-1.png"
]
} | [
"How many labels do the datasets have?",
"What are the source and target domains?"
] | [
[
"1809.00530-Datasets and Experimental Settings-1",
"1809.00530-Datasets and Experimental Settings-5",
"1809.00530-5-Table1-1.png"
],
[
"1809.00530-Datasets and Experimental Settings-1",
"1809.00530-5-Table1-1.png",
"1809.00530-Results on Amazon Benchmark-0",
"1809.00530-Datasets and Experimental Settings-2"
]
] | [
"Book, Electronics, Beauty and Music each have 6000, IMDB 84919, Yelp 231163, Cell Phone 194792 and Baby 160792 labeled data.",
"Book, electronics, beauty, music, IMDB, Yelp, cell phone, baby, DVDs, kitchen"
] | 34 |
1710.07960 | How big is big enough? Unsupervised word sense disambiguation using a very large corpus | In this paper, the problem of disambiguating a target word for Polish is approached by searching for related words with known meaning. These relatives are used to build a training corpus from unannotated text. This technique is improved by proposing new rich sources of replacements that substitute the traditional requirement of monosemy with heuristics based on wordnet relations. The naïve Bayesian classifier has been modified to account for an unknown distribution of senses. A corpus of 600 million web documents (594 billion tokens), gathered by the NEKST search engine, allows us to assess the relationship between training set size and disambiguation accuracy. The classifier is evaluated using both a wordnet baseline and a corpus with 17,314 manually annotated occurrences of 54 ambiguous words. | {
"paragraphs": [
[
"The focus of the word sense disambiguation (WSD) task is polysemy, i.e. words having several substantially different meanings. Two common examples are bank (riverside or financial institution) and bass (fish or musical instrument), but usually the meanings of a word are closely related, e.g. class may refer to: (a) a group of students, (b) the period when they meet to study or (c) a room where such meetings occur. Readers deal with this problem by using a word's context and in WSD we aim at doing it automatically.",
"The most effective solution, called supervised WSD, is to use a large number of sense-annotated occurrences of the target word to build a machine learning model to label test cases. However, this approach suffers from a knowledge acquisition bottleneck. The annotation of a separate training corpus for every target word demands a considerable amount of human labour. Therefore, this approach is unusable in applications that require WSD across a wide vocabulary, such as open-domain question answering BIBREF0 .",
"The method of monosemous relatives, which is the focus of this work, bypasses the bottleneck by gathering occurences of words related to the target word, but free from ambiguity, and treating them as training cases of the respective senses. Human labour is eliminated at the expense of accuracy, as the context of each relative only approximately matches the context of the target word sense.",
"Monosemous relatives have been employed multiple times (see Section 2), but results remain unsatisfactory. The aim of my study is to explore the limitations of this technique by implementing and evaluating such a tool for Polish. Firstly, the method is expanded by waiving the requirement of monosemy and proposing several new sources of relatives. These previously unexplored sources are based on wordnet data and help gather many training cases from the corpus. Secondly, a well-known problem of uneven yet unknown distribution of word senses is alleviated by modifying a naïve Bayesian classifier. Thanks to this correction, the classifier is no longer biased towards senses that have more training data. Finally, a very large corpus (600 million documents), gathered from the web by a Polish search engine NEKST, is used to build models based on training corpora of different sizes. Those experiments show what amount of data is sufficient for such a task. The proposed solution is compared to baselines that use wordnet structure only, with no training corpora.",
"This paper is organised as follows. The next section reviews the previous research in the area, focusing on unsupervised WSD using monosemous relatives. Section 3 outlines the proposed solution by describing the new sources of relatives, the employed corpus, the features extracted from context and the modified Bayesian classifier. Section 4 describes the evaluation data and process, while section 5 quotes the results. Section 6 is devoted to discussing the outcomes and section 7 concludes the paper."
],
[
"The problem of WSD has received a lot of attention since the beginning of natural language processing research. WSD is typically expected to improve the results of real-world applications: originally machine translation and recently information retrieval and extraction, especially question answering BIBREF0 . Like many other areas, WSD has greatly benefited from publicly available test sets and competitions. Two notable corpora are: 1) SemCor BIBREF1 , built by labelling a subset of Brown corpus with Princeton WordNet synsets and 2) the public evaluations of Senseval workshops BIBREF2 , BIBREF3 .",
"There are a variety of approaches to solve the WSD problem, which can be grouped based upon how they use their data – see reviews BIBREF4 , BIBREF5 . In supervised solutions a large sense-tagged corpus is available for training. This approach has been applied to the the test set used in the current study, resulting in an accuracy value of 91.5% BIBREF6 . Although this technique undoubtedly yields the best results, we would need an immense amount of human labour to build a training corpus of sufficient size for disambiguating all words. This does not seem possible, especially in the case of languages, such as Polish, which receive less attention than English.",
"In the minimally supervised approach BIBREF7 , a small set of initial training examples, obtained by a heuristic or hand-tagging, is used to label new occurrences. They in turn serve as a training set for next iteration, and so on. This bootstrapping procedure requires very little manual tagging but needs to be carefully implemented to avoid loosing accuracy in further steps.",
"Unsupervised methods use no previously labelled examples. Instead an external knowledge source is employed, e.g. a machine-readable dictionary or wordnet. In the simplest unsupervised solution, called the Lesk algorithm BIBREF8 , meanings of consecutive ambiguous words are selected by finding those senses whose definitions overlap the most.",
"If lack of definitions make the Lesk algorithm infeasible, we can exploit relations between words. This study focuses on monosemous relatives, i.e. words or collocations, selected using wordnet, being related to a disambiguation target, but free of ambiguity. One can easily find occurrences of such relatives in an unannotated text and treat them as training examples for the target ambiguous word. The method has been successfully applied in an English WSD task BIBREF9 , but still many problems remain. One of them is choice of relatives – in fact, even synonyms differ in meaning and usage contexts; and they are not available for many words. That is why also hypernyms and hyponyms, especially multi-word expressions containing the target word, are taken into account. Some researchers also include siblings (i.e. words with a common hypernym with the target) and antonyms, but their influence is not always beneficiary BIBREF10 . Other interesting sources of monosemous relatives are parts of definition BIBREF11 , named entities BIBREF12 , indirect hyponyms and hypernyms, and finally meronyms and holonyms BIBREF10 .",
"The majority of classification techniques are built on an assumption that the training data approximately reflects the true distribution of the target classes. However, that is not the case when using monosemous relatives. The number of their occurrences seldom agrees with the probabilities of corresponding word senses. Quite often it actually is the opposite: obvious and frequent meanings have very few relatives and vice versa. Some researchers simply copy the a priori probabilities from test data BIBREF9 , others employ heuristics, but they are easily beaten by statistics taken from a real annotated corpus, even different than test set BIBREF13 .",
"Preparing a corpus for finding relatives poses a challenge as well. It should contain a lot of text, as many monosemous words are scarce. Some researchers use snippets retrieved from search engines, i.e. AltaVista BIBREF11 or Google BIBREF13 . One can also extend a search query by including the context of the disambiguated word BIBREF14 , but it requires using as many queries as test cases.",
"Finally, the usage of monosemous relatives has more applications than classical WSD. One can use them to generate topical signatures for concepts BIBREF15 , automatically build large sense-tagged corpora BIBREF16 and evaluate the quality of wordnet-related semantic resources BIBREF17 ."
],
[
"The algorithm works as follows. First, a set of relatives is obtained for each sense of a target word using the Polish wordnet: plWordNet BIBREF18 . Some of the replacements may have multiple senses, however usually one of them covers most cases. Secondly, a set of context features is extracted from occurrences of relatives in the NEKST corpus. Finally, the aggregated feature values corresponding to target word senses are used to build a naïve Bayesian classifier adjusted to a situation of unknown a priori probabilities."
],
[
"In order to obtain training cases from unannotated corpora, we aim to select relatives which are semantically similar to a given sense of a target word. An example of this process, concerning the word język (tongue) in one of its meanings (human or animal organ) is shown in Figure FIGREF4 . This study takes into account only synonyms, hypernyms and hyponyms, as other options (siblings, antonyms, higher-order relatives) have previously given unsatisfactory results BIBREF10 . Instead, another problem deserves more attention: how do we select those occurrences of a polysemous relative that correspond to a target word sense? So far, the problem has been circumvented by including only monosemous relatives (narząd and jęzor in the example), which greatly decreases their availability. Instead, we employ those relatives, whose first meaning is related to the considered sense (artykulator in the example). The intuition is that plWordNet usually mentions the most frequent meaning as the first.",
"We also exploit plWordNet relations called determiner, which links nominals with adjectives that are frequently used to describe them. For example, consider a word organ (Eng. organ). An adjective natleniony (oxygenated) is a determiner of one of the meanings of organ, i.e. part of body, but not the others, i.e. part of an institution. Therefore, the included relatives consist of a polysemous related word, including the target word itself, and a determiner associated with a meaning (wydolny organ and natleniony organ in the example). This procedure is performed only in the case of relatives that weren't included so far, i.e. with a sense number higher than 1.",
"Finally, we also make use of a well-known principle called one word per discourse BIBREF19 , which states that a polysemous word is very unlikely to take different meanings in a single document. In this study, the principle is employed in the following way: if in a single document there appear only relatives corresponding to a single target word sense, then all occurrences of the target word in this document are also treated as training examples for this meaning.",
"One can easily see that these assumptions are false in many cases, which may introduce noise and deteriorate a resulting model. Thus, the presented solutions undergo experimental validation using the following sets of relatives:",
"Monosemous children – monosemous direct hyponyms.",
"Monosemous relatives – monosemous synonyms, direct hyponyms and direct hypernyms.",
"First relatives – words in the first meaning belonging to synsets of synonyms, direct hyponyms or direct hypernyms.",
"Word determiners – collocations made of two words in any order: the target word and a determiner associated with a given meaning.",
"All determiners – collocations made of two words in any order: a polysemous relative and a determiner associated with the appropriate meaning.",
"Other words – occurrences of the target ambiguous word in a document that contains other relatives corresponding to exactly one of the meanings.",
"Table TABREF23 shows how many relatives have been obtained for each category, as well as the number of occurrences in the corpus of 6 million documents (see next section)."
],
[
"As some of the relatives may be very rare, it is important to use a training corpus of significant size. In this case, we used 600 million webpages (594 billion tokens) indexed by a Polish search engine NEKST, developed at the Institute of Computer Science, Polish Academy of Sciences. Training sub-corpora were selected with size varying from 19,000 to 60 million documents. Instead of using snippets returned by a search interface, we use whole textual contents (with morphosyntactic annotation) of each document, taken from NEKST distributed infrastructure.",
"Unfortunately, a lot of text on web pages is not suitable for training a WSD classifier, for example elements of page structure or parts unrecognised by a tagger. Thus, each sentence has to satisfy the following requirements to qualify for training:",
"be at least 150-character long.",
"contain at least five words.",
"contain at least four different parts of speech (including punctuation).",
"These criteria help to filter out most of the web content of unsatisfactory quality."
],
[
"Context features extracted for classification are very similar to those that have proven successful in supervised WSD, including experiments on the same evaluation set BIBREF20 :",
"words present at certain positions in a neighbourhood of a target word:",
"lemmas at positions: -2, -1, 1, 2 (denoted by INLINEFORM0 ),",
"morphosyntactic interpretations (sequences of tags) at positions: -1, 1 (denoted by INLINEFORM0 ) and 0 (denoted by INLINEFORM1 ),",
"lemmas present in the sentence (denoted by INLINEFORM0 ).",
"Note that the morphosyntactic interpretations are assigned to single words only, therefore in case of multi-word relatives INLINEFORM0 is not available. Also, a gender tag is removed from INLINEFORM1 ."
],
[
"After gathering the values of features from occurrences of relatives, a naïve Bayesian classification model is built. However, as many researchers in the field have noticed BIBREF13 , BIBREF9 , BIBREF11 , there is a major drawback of using relatives instead of a target word: the number of occurrences of relatives is usually not proportional to the frequency of the target word sense. To bypass this problem, we modify the Bayesian model in this study. The basic classifier chooses the class that maximises the following probability: INLINEFORM0 ",
"In the case of binary features, which represent the occurrence of a certain word in context, we have INLINEFORM0 and: INLINEFORM1 ",
" Which could be rewritten as: INLINEFORM0 ",
" The expression has been formulated as a product of two factors: INLINEFORM0 , independent from observed features and corresponding to empty word context, and INLINEFORM1 that depends on observed context. To weaken the influence of improper distribution of training cases, we omit INLINEFORM2 , so that when no context features are observed, every word sense is considered equally probable.",
"Thus, for the given context features INLINEFORM0 , sense INLINEFORM1 is selected when: INLINEFORM2 ",
"The table with final results ( TABREF27 ) contains accuracies of both original and modified versions of the classifier."
],
[
"For experiments and evaluation a sub-corpus of the National Corpus of Polish, NCP BIBREF6 was employed. The manually annotated sub-corpus contains sense labels of 106 different words: 50 nouns, 48 verbs and 8 adjectives. As verbs have much poorer connectivity in plWordNet, they have been ignored within this study.",
"The senses used for annotation are coarse-grained – with one sense covering a range of related usages. Each word has between two and four senses. To employ the method described in previous section, the NCP senses have been manually mapped to fine-grained plWordNet synsets. As NCP senses have been created independently from wordnet senses, a substantial part of the latter remain uncovered by the former. However, we only found four cases where an NCP sense has no counterpart in wordnet; those words are excluded from the test set.",
"In total, the test set includes 17,314 occurrences of 54 ambiguous words, having two to four coarse-grained meanings. Table TABREF26 contains a detailed summary.",
"The algorithm works using plWordNet synsets and its output is mapped to NCP meaning to measure its quality. Accuracy measures what percentage of the programme's guesses agree with manually assigned senses. To assess the general performance of a particular configuration, the accuracy has been averaged over all target words.",
"To properly judge the results, we need to start with the baselines. Without knowing the distribution of senses, three basic possibilities seem reasonable: we can either 1) select a meaning randomly, 2) base on the sense numbering in NCP or 3) use plWordNet in the same way. To have a better comparison with ontology-based methods, the results also include a word similarity baseline configuration, which selects the sense with the strongest similarity to any of the words in context (sentence). For that purpose the Leacock&Chodorow similarity measure (implemented using all relations between synsets in plWordNet) is employed, as it has been previously used in WSD BIBREF21 and also correlates well with human judgement of similarity BIBREF22 . The baseline results, shown in Table TABREF27 , support the claim of intentional sense ordering in plWordNet."
],
[
"The goal of the first experiment was to select an optimal feature set for this task. Several models with a common basic configuration, i.e. using all possible relatives and 6 million documents, have been built with different feature sets and evaluated. The results are shown in Table TABREF22 . As we can see, the lexical features give us more predictive power than morphological interpretations. The best solution, incorporated into the basic configuration for further experiments, includes all features except these based on the interpretation of the word in focus.",
"Secondly, it is necessary to evaluate different types of replacements, outlined in section SECREF2 . Table TABREF23 contains the average number of possible replacements per target word, the number of occurrences in the six-million corpus and the average classification accuracy. As we can see, the number of replacements rises after adding subsequent sources, but the largest increase is caused by including polysemous relatives with determiners. On the other hand, these compound relatives rarely appear in the corpus (13,126 occurences), whereas employing polysemous words in the first sense results in 1,525,739 new training cases and a substantial growth of accuracy. What is more, although the profits from these sources of relatives differ, none of them decreases the accuracy.",
"The availability of the corpus containing 600 million documents helps to answer the question of sufficient corpus size for such task. Figure FIGREF24 shows mean classification accuracy for models built using different training corpora, which have been created by randomly selecting a subset of the original document set. The considered sizes are between 19,000 and 60,000,000. Additionally, a different type of corpora has been created by using only documents from the Polish Wikipedia (sizes 11,000 – 1,098,000). We see that after a certain point adding in new data does not improve the accuracy. Surprisingly, the subcorpora of Wikipedia yield worse results than those of unlimited origin.",
"Table TABREF26 shows the accuracy of disambiguation in the configuration outlined above with respect to the target word. The easiest words have meanings corresponding to distinct physical objects, e.g. in Polish piłka (100% accuracy) may mean a ball or a small saw. The hardest cases are those with many abstract and fuzzy meanings, e.g. in Polish klasa has four meanings related to English class: (1) group of similar objects, (2) level of quality, (3) group of pupils or (4) classroom. The meaning (1) could be hard to distinguish from (2) even for a human, whereas (3) and (4) may appear in very similar contexts.",
"Finally, Table TABREF27 contains the mean accuracy of the basic configuration of the classifier described in this work (with and without modifications to Bayesian model). It is compared to the four previously mentioned baselines."
],
[
"Although many different configurations have been tested in this study, all of them remain below the accuracy level of 80%, approximately equal to average share of dominating senses in this dataset. This is obviously unsatisfactory and demands explanation.",
"First of all, the new sources of replacements proposed in this work indeed seem to improve the models from 70.86% (only traditional monosemous relatives) to 77.96% (all proposed relatives). The biggest gain is obtained by including the polysemous relatives taking into account only their first meaning. This technique relies on two assumptions: a strong domination of one of the senses and that sense being listed first in plWordNet. While the former is almost always true, if the second assumption is false then the created model are adversely affected. In the case of two target words the senses, the first sense in each case (stopień as a musical concept and forma as a synonym of polynomial) was so peculiar that they were unknown to the author of this study and couldn't be assigned to any of the coarse-grained NCP senses. Clearly, not only the method of unsupervised WSD using relatives, but also other solutions related to polysemy would definitely benefit from a reliable ordering of senses in wordnets, especially as increasingly uncommon senses are added to them with time. It is however not clear how such knowledge could be obtained without solving the WSD task first. What is more, sense distributions obviously change with genre, time, author, etc.",
"When it comes to feature selection, the most unexpected phenomenon observed in this study is low usefulness of the interpretation-based features. According to Table TABREF22 , adding interpretations of neighbouring words ( INLINEFORM0 ) yields very little improvement, while this type of information regarding replacements ( INLINEFORM1 ) even lowers the accuracy. This result could be attributed to two factors. Firstly, more developed replacement generation results in more occurrences, but also causes their tags to differ from the target word by gender or number. They may even not be available at all (in the case of multi-word replacements). The second reason is a difference in language: while in English a word interpretation is represented as one of several dozen part of speech identifiers, in Slavonic languages, such as Polish, we need to specify the values of several tags for each word, leading to thousands of possible interpretations. Obviously, the features based on these tags are very sparse. Finally, the morphosyntactic annotation was performed automatically, which may lead to errors, especially in the case of noisy web text.",
"One of the purposes of this study was to check the necessary amount of training data for such a solution by employing a very large collection from the NEKST search engine. The need for large corpora is obvious when using only monosemous relatives – those usually rare words should appear in many contexts. However, according to the results shown in Figure FIGREF24 , the strategy for generating relatives presented in this paper reaches optimum performance for a reasonable amount of texts – 6 million documents is enough. However, one should keep in mind that this statement remains true assuming a constant evaluation environment; expanding a test set (currently containing 17,314 occurrences) may help to see differences between apparently equivalent models and raise the need for bigger corpora."
],
[
"In this paper the limitations and improvements of unsupervised word sense disambiguation have been investigated. The main problem – insufficient number and quality of replacements has been tackled by adding new rich sources of replacements. The quality of the models has indeed improved, especially thanks to replacements based on sense ordering in plWordNet. To deal with the problem of unknown sense distribution, the Bayesian classifier has been modified, removing the bias towards frequent labels in the training data. Finally, the experiments with very large corpus have shown the sufficient amount of training data for this task, which is only 6 million documents."
],
[
"This study was conducted at Institute of Computer Science, Polish Academy of Sciences, and supported by a research fellowship within \"Information technologies: research and their interdisciplinary applications\" agreement number POKL.04.01.01-00-051/10-00. The author would like to thank Dariusz Czerski and NEKST team for providing access to the search engine index and helpful discussions and Matthew Shardlow for comments that greatly improved the manuscript."
]
],
"section_name": [
"Introduction",
"Related work",
"Method",
"Relatives",
"Corpus",
"Features",
"Classification",
"Evaluation",
"Results",
"Discussion",
"Conclusions",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"3cb8763e8d8f699269ead7ab1b213e27cdd71fe8",
"a54cfa3d47b80a0648139ea12d6cb99d03c34f55"
],
"answer": [
{
"evidence": [
"When it comes to feature selection, the most unexpected phenomenon observed in this study is low usefulness of the interpretation-based features. According to Table TABREF22 , adding interpretations of neighbouring words ( INLINEFORM0 ) yields very little improvement, while this type of information regarding replacements ( INLINEFORM1 ) even lowers the accuracy. This result could be attributed to two factors. Firstly, more developed replacement generation results in more occurrences, but also causes their tags to differ from the target word by gender or number. They may even not be available at all (in the case of multi-word replacements). The second reason is a difference in language: while in English a word interpretation is represented as one of several dozen part of speech identifiers, in Slavonic languages, such as Polish, we need to specify the values of several tags for each word, leading to thousands of possible interpretations. Obviously, the features based on these tags are very sparse. Finally, the morphosyntactic annotation was performed automatically, which may lead to errors, especially in the case of noisy web text."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Finally, the morphosyntactic annotation was performed automatically, which may lead to errors, especially in the case of noisy web text."
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"38daaceb6d56558a0887c1ec6aeb642b681223ff",
"7e3c119a6226d0da2c84aa33b4e469d8e1740e41"
],
"answer": [
{
"evidence": [
"Monosemous relatives have been employed multiple times (see Section 2), but results remain unsatisfactory. The aim of my study is to explore the limitations of this technique by implementing and evaluating such a tool for Polish. Firstly, the method is expanded by waiving the requirement of monosemy and proposing several new sources of relatives. These previously unexplored sources are based on wordnet data and help gather many training cases from the corpus. Secondly, a well-known problem of uneven yet unknown distribution of word senses is alleviated by modifying a naïve Bayesian classifier. Thanks to this correction, the classifier is no longer biased towards senses that have more training data. Finally, a very large corpus (600 million documents), gathered from the web by a Polish search engine NEKST, is used to build models based on training corpora of different sizes. Those experiments show what amount of data is sufficient for such a task. The proposed solution is compared to baselines that use wordnet structure only, with no training corpora.",
"The algorithm works as follows. First, a set of relatives is obtained for each sense of a target word using the Polish wordnet: plWordNet BIBREF18 . Some of the replacements may have multiple senses, however usually one of them covers most cases. Secondly, a set of context features is extracted from occurrences of relatives in the NEKST corpus. Finally, the aggregated feature values corresponding to target word senses are used to build a naïve Bayesian classifier adjusted to a situation of unknown a priori probabilities."
],
"extractive_spans": [],
"free_form_answer": "The Näive-Bayes classifier is corrected so it is not biased to most frequent classes",
"highlighted_evidence": [
"Secondly, a well-known problem of uneven yet unknown distribution of word senses is alleviated by modifying a naïve Bayesian classifier. Thanks to this correction, the classifier is no longer biased towards senses that have more training data.",
"Finally, the aggregated feature values corresponding to target word senses are used to build a naïve Bayesian classifier adjusted to a situation of unknown a priori probabilities."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Which could be rewritten as: INLINEFORM0",
"The expression has been formulated as a product of two factors: INLINEFORM0 , independent from observed features and corresponding to empty word context, and INLINEFORM1 that depends on observed context. To weaken the influence of improper distribution of training cases, we omit INLINEFORM2 , so that when no context features are observed, every word sense is considered equally probable.",
"In this paper the limitations and improvements of unsupervised word sense disambiguation have been investigated. The main problem – insufficient number and quality of replacements has been tackled by adding new rich sources of replacements. The quality of the models has indeed improved, especially thanks to replacements based on sense ordering in plWordNet. To deal with the problem of unknown sense distribution, the Bayesian classifier has been modified, removing the bias towards frequent labels in the training data. Finally, the experiments with very large corpus have shown the sufficient amount of training data for this task, which is only 6 million documents."
],
"extractive_spans": [
"Bayesian classifier has been modified, removing the bias towards frequent labels in the training data"
],
"free_form_answer": "",
"highlighted_evidence": [
"Which could be rewritten as: INLINEFORM0\n\nThe expression has been formulated as a product of two factors: INLINEFORM0 , independent from observed features and corresponding to empty word context, and INLINEFORM1 that depends on observed context. To weaken the influence of improper distribution of training cases, we omit INLINEFORM2 , so that when no context features are observed, every word sense is considered equally probable.",
"To deal with the problem of unknown sense distribution, the Bayesian classifier has been modified, removing the bias towards frequent labels in the training data."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"",
""
],
"paper_read": [
"",
""
],
"question": [
"Did they use a crowdsourcing platform for annotations?",
"How do they deal with unknown distribution senses?"
],
"question_id": [
"0ba3ea93eef5660a79ea3c26c6a270eac32dfa4c",
"5e324846a99a5573cd2e843d1657e87f4eb22fa6"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
""
],
"topic_background": [
"",
""
]
} | {
"caption": [
"Figure 1: A part of plWordNet network used to extract replacements for the word język in its 6th sense, meaning tongue (animal or human organ). The resulting replacements are underlined.",
"Table 1: Mean accuracy of the disambiguation algorithm with respect to the features involved in the classification (I0 – interpretation of a disambiguated word, I – interpretations of neighbouring words, Lp – lemmas of neighbouring words, L – lemmas present in the whole sentence).",
"Table 2: Strategies for generating replacements, each built by adding new elements to the previous step, with the resulting number of replacements (average per word), their occurrences in the corpus (total) and the mean accuracy of disambiguation.",
"Figure 2: Mean disambiguation accuracy of models built using corpora of different sizes, created by random selection from 600 million web documents from NEKST search engine: unrestricted or only from the Polish Wikipedia. The best of the baselines, which uses wordnet-based word similarity is also shown.",
"Table 4: Accuracy of four baseline configurations (selecting senses randomly, basing on sense order in the National Corpus of Polish or the Polish wordnet, and choosing the sense which is the most similar to context according to Leacock&Chodorow measure) and two versions of the classifier proposed in this work (based on the traditional naïve Bayesian model or modified as in section 3.4).",
"Table 3: Polysemous words used for evaluation with their number of meanings, test cases and obtained disambiguation accuracy in basic configuration."
],
"file": [
"3-Figure1-1.png",
"6-Table1-1.png",
"7-Table2-1.png",
"7-Figure2-1.png",
"8-Table4-1.png",
"8-Table3-1.png"
]
} | [
"How do they deal with unknown distribution senses?"
] | [
[
"1710.07960-Introduction-3",
"1710.07960-Conclusions-0",
"1710.07960-Method-0"
]
] | [
"The Näive-Bayes classifier is corrected so it is not biased to most frequent classes"
] | 35 |
1912.03804 | Women in ISIS Propaganda: A Natural Language Processing Analysis of Topics and Emotions in a Comparison with Mainstream Religious Group | Online propaganda is central to the recruitment strategies of extremist groups and in recent years these efforts have increasingly extended to women. To investigate ISIS' approach to targeting women in their online propaganda and uncover implications for counterterrorism, we rely on text mining and natural language processing (NLP). Specifically, we extract articles published in Dabiq and Rumiyah (ISIS's online English language publications) to identify prominent topics. To identify similarities or differences between these texts and those produced by non-violent religious groups, we extend the analysis to articles from a Catholic forum dedicated to women. We also perform an emotional analysis of both of these resources to better understand the emotional components of propaganda. We rely on Depechemood (a lexicon-based emotion analysis method) to detect emotions most likely to be evoked in readers of these materials. The findings indicate that the emotional appeal of ISIS and Catholic materials is similar. | {
"paragraphs": [
[
"Since its rise in 2013, the Islamic State of Iraq and Syria (ISIS) has utilized the Internet to spread its ideology, radicalize individuals, and recruit them to their cause. In comparison to other Islamic extremist groups, ISIS' use of technology was more sophisticated, voluminous, and targeted. For example, during ISIS' advance toward Mosul, ISIS related accounts tweeted some 40,000 tweets in one day BIBREF0.However, this heavy engagement forced social media platforms to institute policies to prevent unchecked dissemination of terrorist propaganda to their users, forcing ISIS to adapt to other means to reach their target audience.",
"One such approach was the publication of online magazines in different languages including English. Although discontinued now, these online resources provided a window into ISIS ideology, recruitment, and how they wanted the world to perceive them. For example, after predominantly recruiting men, ISIS began to also include articles in their magazines that specifically addressed women. ISIS encouraged women to join the group by either traveling to the caliphate or by carrying out domestic attacks on behalf of ISIS in their respective countries. This tactical change concerned both practitioners and researchers in the counterterrorism community. New advancements in data science can shed light on exactly how the targeting of women in extremist propaganda works and whether it differs significantly from mainstream religious rhetoric.",
"We utilize natural language processing methods to answer three questions:",
"What are the main topics in women-related articles in ISIS' online magazines?",
"What similarities and/or differences do these topics have with non-violent, non-Islamic religious material addressed specifically to women?",
"What kind of emotions do these articles evoke in their readers and are there similarities in the emotions evoked from both ISIS and non-violent religious materials?",
"As these questions suggest, to understand what, if anything, makes extremist appeals distinctive, we need a point of comparison in terms of the outreach efforts to women from a mainstream, non-violent religious group. For this purpose, we rely on an online Catholic women's forum. Comparison between Catholic material and the content of ISIS' online magazines allows for novel insight into the distinctiveness of extremist rhetoric when targeted towards the female population. To accomplish this task, we employ topic modeling and an unsupervised emotion detection method.",
"The rest of the paper is organized as follows: in Section SECREF2, we review related works on ISIS propaganda and applications of natural language methods. Section SECREF3 describes data collection and pre-processing. Section SECREF4 describes in detail the approach. Section SECREF5 reports the results, and finally, Section SECREF6 presents the conclusion."
],
[
"Soon after ISIS emerged and declared its caliphate, counterterrorism practitioners and political science researchers started to turn their attention towards understanding how the group operated. Researchers investigated the origins of ISIS, its leadership, funding, and how they rose became a globally dominant non-state actor BIBREF1. This interest in the organization's distinctiveness immediately led to inquiries into ISIS' rhetoric, particularly their use of social media and online resources in recruitment and ideological dissemination. For example, Al-Tamimi examines how ISIS differentiated itself from other jihadist movements by using social media with unprecedented efficiency to improve its image with locals BIBREF2. One of ISIS' most impressive applications of its online prowess was in the recruitment process. The organization has used a variety of materials, especially videos, to recruit both foreign and local fighters. Research shows that ISIS propaganda is designed to portray the organization as a provider of justice, governance, and development in a fashion that resonates with young westerners BIBREF3. This propaganda machine has become a significant area of research, with scholars such as Winter identifying key themes in it such as brutality, mercy, victimhood, war, belonging and utopianism. BIBREF4. However, there has been insufficient attention focused on how these approaches have particularly targeted and impacted women. This is significant given that scholars have identified the distinctiveness of this population when it comes to nearly all facets of terrorism.",
"ISIS used different types of media to propagate its messages, such as videos, images, texts, and even music. Twitter was particularly effective and the Arabic Twitter app allowed ISIS to tweet extensively without triggering spam-detection mechanisms the platform uses BIBREF0. Scholars followed the resulting trove of data and this became the preeminent way in which they assess ISIS messages. For example, in BIBREF5 they use both lexical analysis of tweets as well as social network analysis to examine ISIS support or opposition on Twitter. Other researchers used data mining techniques to detect pro-ISIS user divergence behavior at various points in time BIBREF6. By looking at these works, the impact of using text mining and lexical analysis to address important questions becomes obvious. Proper usage of these tools allows the research community to analyze big chunks of unstructured data. This approach, however, became less productive as the social media networks began cracking down and ISIS recruiters moved off of them.",
"With their ability to operate freely on social media now curtailed, ISIS recruiters and propagandists increased their attentiveness to another longstanding tool–English language online magazines targeting western audiences. Al Hayat, the media wing of ISIS, published multiple online magazines in different languages including English. The English online magazine of ISIS was named Dabiq and first appeared on the dark web on July 2014 and continued publishing for 15 issues. This publication was followed by Rumiyah which produced 13 English language issues through September 2017. The content of these magazines provides a valuable but underutilized resource for understanding ISIS strategies and how they appeal to recruits, specifically English-speaking audiences. They also provide a way to compare ISIS' approach with other radical groups. Ingram compared Dabiq contents with Inspire (Al Qaeda publication) and suggested that Al Qaeda heavily emphasized identity-choice, while ISIS' messages were more balanced between identity-choice and rational-choice BIBREF7. In another research paper, Wignell et al. BIBREF8 compared Dabiq and Rumiah by examining their style and what both magazine messages emphasized. Despite the volume of research on these magazines, only a few researchers used lexical analysis and mostly relied on experts' opinions. BIBREF9 is one exception to this approach where they used word frequency on 11 issues of Dabiq publications and compared attributes such as anger, anxiety, power, motive, etc.",
"This paper seeks to establish how ISIS specifically tailored propaganda targeting western women, who became a particular target for the organization as the “caliphate” expanded. Although the number of recruits is unknown, in 2015 it was estimated that around 10 percent of all western recruits were female BIBREF10. Some researchers have attempted to understand how ISIS propaganda targets women. Kneip, for example, analyzed women's desire to join as a form of emancipation BIBREF11. We extend that line of inquiry by leveraging technology to answer key outstanding questions about the targeting of women in ISIS propaganda.",
"To further assess how ISIS propaganda might affect women, we used emotion detection methods on these texts. Emotion detection techniques are mostly divided into lexicon-base or machine learning-base methods. Lexicon-base methods rely on several lexicons while machine learning (ML) methods use algorithm to detect the elation of texts as inputs and emotions as the target, usually trained on a large corpus. Unsupervised methods usually use Non-negative matrix factorization (NMF) and Latent Semantic Analysis (LSA) BIBREF12 approaches. An important distinction that should be made when using text for emotion detection is that emotion detected in the text and the emotion evoked in the reader of that text might differ. In the case of propaganda, it is more desirable to detect possible emotions that will be evoked in a hypothetical reader. In the next section, we describe methods to analyze content and technique to find evoked emotions in a potential reader using available natural language processing tools."
],
[
"Finding useful collections of texts where ISIS targets women is a challenging task. Most of the available material are not reflecting ISIS' official point of view or they do not talk specifically about women. However, ISIS' online magazines are valuable resources for understanding how the organization attempts to appeal to western audiences, particularly women. Looking through both Dabiq and Rumiyah, many issues of the magazines contain articles specifically addressing women, usually with “ to our sisters ” incorporated into the title. Seven out of fifteen Dabiq issues and all thirteen issues of Rumiyah contain articles targeting women, clearly suggesting an increase in attention to women over time.",
"We converted all the ISIS magazines to texts using pdf readers and all articles that addressed women in both magazines (20 articles) were selected for our analysis. To facilitate comparison with a mainstream, non-violent religious group, we collected articles from catholicwomensforum.org, an online resource catering to Catholic women. We scrapped 132 articles from this domain. While this number is large, the articles themselves are much shorter than those published by ISIS. These texts were pre-processed by tokenizing the sentences and eliminating non-word tokens and punctuation marks. Also, all words turned into lower case and numbers and English stop words such as “our, is, did, can, etc. ” have been removed from the produced tokens. For the emotion analysis part, we used a spacy library as part of speech tagging to identify the exact role of words in the sentence. A word and its role have been used to look for emotional values of that word in the same role in the sentence."
],
[
"Most text and document datasets contain many unnecessary words such as stopwords, misspelling, slang, etc. In many algorithms, especially statistical and probabilistic learning algorithms, noise and unnecessary features can have adverse effects on system performance. In this section, we briefly explain some techniques and methods for text cleaning and pre-processing text datasets BIBREF13."
],
[
"Tokenization is a pre-processing method which breaks a stream of text into words, phrases, symbols, or other meaningful elements called tokens BIBREF14. The main goal of this step is to investigate the words in a sentence BIBREF14. Both text classification and text mining requires a parser which processes the tokenization of the documents; for example:",
"sentence BIBREF15 :",
"After sleeping for four hours, he decided to sleep for another four.",
"In this case, the tokens are as follows:",
"{“After” “sleeping” “for” “four” “hours” “he” “decided” “to” “sleep” “for” “another” “four”}."
],
[
"Text and document classification includes many words which do not hold important significance to be used in classification algorithms such as {“a”, “about”, “above”, “across”, “after”, “afterwards”, “again”,$\\hdots $}. The most common technique to deal with these words is to remove them from the texts and documents BIBREF16."
],
[
"K Sparck Jones BIBREF17 proposed inverse document frequency (IDF) as a method to be used in conjunction with term frequency in order to lessen the effect of implicitly common words in the corpus. IDF assigns a higher weight to words with either high frequency or low frequency term in the document. This combination of TF and IDF is well known as term frequency-inverse document frequency (tf-idf). The mathematical representation of the weight of a term in a document by tf-idf is given in Equation DISPLAY_FORM10.",
"Here N is the number of documents and $df(t)$ is the number of documents containing the term t in the corpus. The first term in Equation DISPLAY_FORM10 improves the recall while the second term improves the precision of the word embedding BIBREF18. Although tf-idf tries to overcome the problem of common terms in the document, it still suffers from some other descriptive limitations. Namely, tf-idf cannot account for the similarity between the words in the document since each word is independently presented as an index. However, with the development of more complex models in recent years, new methods, such as word embedding, have been presented that can incorporate concepts such as similarity of words and part of speech tagging."
],
[
"In this section, we describe our methods used for comparing topics and evoked emotions in both ISIS and non-violent religious materials."
],
[
"The key task in comparing ISIS material with that of a non-violent group involves analyzing the content of these two corpora to identify the topics. For our analysis, we considered a simple uni-gram model where each word is considered as a single unit. Understanding what words appear most frequently provides a simple metric for comparison. To do so we normalized the count of words with the number of words in each corpora to account for the size of each corpus. It should be noted, however, that a drawback of word frequencies is that there might be some dominant words that will overcome all the other contents without conveying much information.",
"Topic modeling methods are the more powerful technique for understanding the contents of a corpus. These methods try to discover abstract topics in a corpus and reveal hidden semantic structures in a collection of documents. The most popular topic modeling methods use probabilistic approaches such as probabilistic latent semantic analysis (PLSA) and latent Dirichlet allocation (LDA). LDA is a generalization of pLSA where documents are considered as a mixture of topics and the distribution of topics is governed by a Dirichlet prior ($\\alpha $). Figure FIGREF12 shows plate notation of general LDA structure where $\\beta $ represents prior of word distribution per topic and $\\theta $ refers to topics distribution for documents BIBREF19. Since LDA is among the most widely utilized algorithms for topic modeling, we applied it to our data. However, the coherence of the topics produced by LDA is poorer than expected.",
"To address this lack of coherence, we applied non-negative matrix factorization (NMF). This method decomposes the term-document matrix into two non-negative matrices as shown in Figure FIGREF13. The resulting non-negative matrices are such that their product closely approximate the original data. Mathematically speaking, given an input matrix of document-terms $V$, NMF finds two matrices by solving the following equation BIBREF20:",
"Where W is topic-word matrix and H represents topic-document matrix.",
"NMF appears to provide more coherent topic on specific corpora. O'Callaghan et al. compared LDA with NMF and concluded that NMF performs better in corporas with specific and non-mainstream areas BIBREF21. Our findings align with this assessment and thus our comparison of topics is based on NMF."
],
[
"Propaganda effectiveness hinges on the emotions that it elicits. But detecting emotion in text requires that two essential challenges are overcome.",
"First, emotions are generally complex and emotional representation models are correspondingly contested. Despite this, some models proposed by psychologists have gained wide-spread usage that extends to text-emotion analysis. Robert Plutchik presented a model that arranged emotions from basic to complex in a circumplex as shown in Figure FIGREF15. The model categorizes emotions into 8 main subsets and with addition of intensity and interactions it will classify emotions into 24 classes BIBREF23. Other models have been developed to capture all emotions by defining a 3-dimensional model of pleasure, arousal, and dominance.",
"The second challenge lies in using text for detecting emotion evoked in a potential reader. Common approaches use either lexicon-base methods (such as keyword-based or ontology-based model) or machine learning-base models (usually using large corpus with labeled emotions) BIBREF12. These methods are suited to addressing the emotion that exist in the text, but in the case of propaganda we are more interested in emotions that are elicited in the reader of such materials. The closest analogy to this problem can be found in research that seek to model feelings of people after reading a news article. One solution for this type of problem is to use an approach called Depechemood.",
"Depechemood is a lexicon-based emotion detection method gathered from crowd-annotated news BIBREF24. Drawing on approximately 23.5K documents with average of 500 words per document from rappler.com, researchers asked subjects to report their emotions after reading each article. They then multiplied the document-emotion matrix and word-document matrix to derive emotion-word matrix for these words. Due to limitations of their experiment setup, the emotion categories that they present does not exactly match the emotions from the Plutchik wheel categories. However, they still provide a good sense of the general feeling of an individual after reading an article. The emotion categories of Depechemood are: AFRAID, AMUSED, ANGRY, ANNOYED, DON'T CARE, HAPPY, INSPIRED, SAD. Depechemood simply creates dictionaries of words where each word has scores between 0 and 1 for all of these 8 emotion categories. We present our finding using this approach in the result section."
],
[
"In this section, we present the results of our analysis based on the contents of ISIS propaganda materials as compared to articles from the Catholic women forum. We then present the results of emotion analysis conducted on both corpora."
],
[
"After pre-processing the text, both corpora were analyzed for word frequencies. These word frequencies have been normalized by the number of words in each corpus. Figure FIGREF17 shows the most common words in each of these corpora.",
"A comparison of common words suggests that those related to marital relationships ( husband, wife, etc.) appear in both corpora, but the religious theme of ISIS material appears to be stronger. A stronger comparison can be made using topic modeling techniques to discover main topics of these documents. Although we used LDA, our results by using NMF outperform LDA topics, due to the nature of these corpora. Also, fewer numbers of ISIS documents might contribute to the comparatively worse performance. Therefore, we present only NMF results. Based on their coherence, we selected 10 topics for analyzing within both corporas. Table TABREF18 and Table TABREF19 show the most important words in each topic with a general label that we assigned to the topic manually. Based on the NMF output, ISIS articles that address women include topics mainly about Islam, women's role in early Islam, hijrah (moving to another land), spousal relations, marriage, and motherhood.",
"The topics generated from the Catholic women forum are clearly quite different. Some, however, exist in both contexts. More specifically, marriage/divorce, motherhood, and to some extent spousal relations appeared in both generated topics. This suggests that when addressing women in a religious context, these may be very broadly effective and appeal to the feminine audience. More importantly, suitable topic modeling methods will be able to identify these similarities no matter the size of the corpus we are working with. Although, finding the similarities/differences between topics in these two groups of articles might provide some new insights, we turn to emotional analysis to also compare the emotions evoked in the audience."
],
[
"We rely on Depechemood dictionaries to analyze emotions in both corpora. These dictionaries are freely available and come in multiple arrangements. We used a version that includes words with their part of speech (POS) tags. Only words that exist in the Depechemood dictionary with the same POS tag are considered for our analysis. We aggregated the score for each word and normalized each article by emotions. To better compare the result, we added a baseline of 100 random articles from a Reuters news dataset as a non-religious general resource which is available in an NLTK python library. Figure FIGREF22 shows the aggregated score for different feelings in our corpora.",
"Both Catholic and ISIS related materials score the highest in “inspired” category. Furthermore, in both cases, being afraid has the lowest score. However, this is not the case for random news material such as the Reuters corpus, which are not that inspiring and, according to this method, seems to cause more fear in their audience. We investigate these results further by looking at the most inspiring words detected in these two corpora. Table TABREF24 presents 10 words that are among the most inspiring in both corpora. The comparison of the two lists indicate that the method picks very different words in each corpus to reach to the same conclusion. Also, we looked at separate articles in each of the issues of ISIS material addressing women. Figure FIGREF23 shows emotion scores in each of the 20 issues of ISIS propaganda. As demonstrated, in every separate article, this method gives the highest score to evoking inspirations in the reader. Also, in most of these issues the method scored “being afraid” as the lowest score in each issue."
],
[
"In this paper, we have applied natural language processing methods to ISIS propaganda materials in an attempt to understand these materials using available technologies. We also compared these texts with a non-violent religious groups' (both focusing on women related articles) to examine possible similarities or differences in their approaches. To compare the contents, we used word frequency and topic modeling with NMF. Also, our results showed that NMF outperforms LDA due to the niche domain and relatively small number of documents.",
"The results suggest that certain topics play a particularly important roles in ISIS propaganda targeting women. These relate to the role of women in early Islam, Islamic ideology, marriage/divorce, motherhood, spousal relationships, and hijrah (moving to a new land).",
"Comparing these topics with those that appeared on a Catholic women forum, it seems that both ISIS and non-violent groups use topics about motherhood, spousal relationship, and marriage/divorce when they address women. Moreover, we used Depechemood methods to analyze the emotions that these materials are likely to elicit in readers. The result of our emotion analysis suggests that both corpuses used words that aim to inspire readers while avoiding fear. However, the actual words that lead to these effects are very different in the two contexts. Overall, our findings indicate that, using proper methods, automated analysis of large bodies of textual data can provide novel insight insight into extremist propaganda that can assist the counterterrorism community."
]
],
"section_name": [
"Introduction",
"Related Work",
"Data Collection & Pre-Processing ::: Data collection",
"Data Collection & Pre-Processing ::: Pre-Processing ::: Text Cleaning and Pre-processing",
"Data Collection & Pre-Processing ::: Pre-Processing ::: Tokenization",
"Data Collection & Pre-Processing ::: Pre-Processing ::: Stop words",
"Data Collection & Pre-Processing ::: Pre-Processing ::: Term Frequency-Inverse Document Frequency",
"Method",
"Method ::: Content Analysis",
"Method ::: Emotion detection",
"Results",
"Results ::: Content Analysis",
"Results ::: Emotion Analysis",
"Conclusion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"3a93821ea58a02da7e9de8da480c74f443d0e908",
"61b98560b9a7f63f0b2538f9f7238375ca446378"
],
"answer": [
{
"evidence": [
"With their ability to operate freely on social media now curtailed, ISIS recruiters and propagandists increased their attentiveness to another longstanding tool–English language online magazines targeting western audiences. Al Hayat, the media wing of ISIS, published multiple online magazines in different languages including English. The English online magazine of ISIS was named Dabiq and first appeared on the dark web on July 2014 and continued publishing for 15 issues. This publication was followed by Rumiyah which produced 13 English language issues through September 2017. The content of these magazines provides a valuable but underutilized resource for understanding ISIS strategies and how they appeal to recruits, specifically English-speaking audiences. They also provide a way to compare ISIS' approach with other radical groups. Ingram compared Dabiq contents with Inspire (Al Qaeda publication) and suggested that Al Qaeda heavily emphasized identity-choice, while ISIS' messages were more balanced between identity-choice and rational-choice BIBREF7. In another research paper, Wignell et al. BIBREF8 compared Dabiq and Rumiah by examining their style and what both magazine messages emphasized. Despite the volume of research on these magazines, only a few researchers used lexical analysis and mostly relied on experts' opinions. BIBREF9 is one exception to this approach where they used word frequency on 11 issues of Dabiq publications and compared attributes such as anger, anxiety, power, motive, etc.",
"Finding useful collections of texts where ISIS targets women is a challenging task. Most of the available material are not reflecting ISIS' official point of view or they do not talk specifically about women. However, ISIS' online magazines are valuable resources for understanding how the organization attempts to appeal to western audiences, particularly women. Looking through both Dabiq and Rumiyah, many issues of the magazines contain articles specifically addressing women, usually with “ to our sisters ” incorporated into the title. Seven out of fifteen Dabiq issues and all thirteen issues of Rumiyah contain articles targeting women, clearly suggesting an increase in attention to women over time."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The English online magazine of ISIS was named Dabiq and first appeared on the dark web on July 2014 and continued publishing for 15 issues. This publication was followed by Rumiyah which produced 13 English language issues through September 2017.",
"Looking through both Dabiq and Rumiyah, many issues of the magazines contain articles specifically addressing women, usually with “ to our sisters ” incorporated into the title."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"9154ceba5d3037d1a6213ad25458396c928017c9",
"cfd3a2724d8961189de28c7990a35baff53c00e6"
],
"answer": [
{
"evidence": [
"Comparing these topics with those that appeared on a Catholic women forum, it seems that both ISIS and non-violent groups use topics about motherhood, spousal relationship, and marriage/divorce when they address women. Moreover, we used Depechemood methods to analyze the emotions that these materials are likely to elicit in readers. The result of our emotion analysis suggests that both corpuses used words that aim to inspire readers while avoiding fear. However, the actual words that lead to these effects are very different in the two contexts. Overall, our findings indicate that, using proper methods, automated analysis of large bodies of textual data can provide novel insight insight into extremist propaganda that can assist the counterterrorism community."
],
"extractive_spans": [
"both corpuses used words that aim to inspire readers while avoiding fear",
"actual words that lead to these effects are very different in the two contexts",
"our findings indicate that, using proper methods, automated analysis of large bodies of textual data can provide novel insight insight into extremist propaganda"
],
"free_form_answer": "",
"highlighted_evidence": [
"Comparing these topics with those that appeared on a Catholic women forum, it seems that both ISIS and non-violent groups use topics about motherhood, spousal relationship, and marriage/divorce when they address women. Moreover, we used Depechemood methods to analyze the emotions that these materials are likely to elicit in readers. The result of our emotion analysis suggests that both corpuses used words that aim to inspire readers while avoiding fear. However, the actual words that lead to these effects are very different in the two contexts. Overall, our findings indicate that, using proper methods, automated analysis of large bodies of textual data can provide novel insight insight into extremist propaganda that can assist the counterterrorism community."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We rely on Depechemood dictionaries to analyze emotions in both corpora. These dictionaries are freely available and come in multiple arrangements. We used a version that includes words with their part of speech (POS) tags. Only words that exist in the Depechemood dictionary with the same POS tag are considered for our analysis. We aggregated the score for each word and normalized each article by emotions. To better compare the result, we added a baseline of 100 random articles from a Reuters news dataset as a non-religious general resource which is available in an NLTK python library. Figure FIGREF22 shows the aggregated score for different feelings in our corpora.",
"Both Catholic and ISIS related materials score the highest in “inspired” category. Furthermore, in both cases, being afraid has the lowest score. However, this is not the case for random news material such as the Reuters corpus, which are not that inspiring and, according to this method, seems to cause more fear in their audience. We investigate these results further by looking at the most inspiring words detected in these two corpora. Table TABREF24 presents 10 words that are among the most inspiring in both corpora. The comparison of the two lists indicate that the method picks very different words in each corpus to reach to the same conclusion. Also, we looked at separate articles in each of the issues of ISIS material addressing women. Figure FIGREF23 shows emotion scores in each of the 20 issues of ISIS propaganda. As demonstrated, in every separate article, this method gives the highest score to evoking inspirations in the reader. Also, in most of these issues the method scored “being afraid” as the lowest score in each issue.",
"Comparing these topics with those that appeared on a Catholic women forum, it seems that both ISIS and non-violent groups use topics about motherhood, spousal relationship, and marriage/divorce when they address women. Moreover, we used Depechemood methods to analyze the emotions that these materials are likely to elicit in readers. The result of our emotion analysis suggests that both corpuses used words that aim to inspire readers while avoiding fear. However, the actual words that lead to these effects are very different in the two contexts. Overall, our findings indicate that, using proper methods, automated analysis of large bodies of textual data can provide novel insight insight into extremist propaganda that can assist the counterterrorism community."
],
"extractive_spans": [],
"free_form_answer": "By comparing scores for each word calculated using Depechemood dictionary and normalize emotional score for each article, they found Catholic and ISIS materials show similar scores",
"highlighted_evidence": [
"We rely on Depechemood dictionaries to analyze emotions in both corpora.",
"We aggregated the score for each word and normalized each article by emotions.",
"Both Catholic and ISIS related materials score the highest in “inspired” category. Furthermore, in both cases, being afraid has the lowest score. ",
"The result of our emotion analysis suggests that both corpuses used words that aim to inspire readers while avoiding fear. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"45e9e168a4032dcba5d853bb5cdf039148b6d77e",
"a91af1ea80a99b342829278bea4d3a74c6dcb806"
],
"answer": [
{
"evidence": [
"Depechemood is a lexicon-based emotion detection method gathered from crowd-annotated news BIBREF24. Drawing on approximately 23.5K documents with average of 500 words per document from rappler.com, researchers asked subjects to report their emotions after reading each article. They then multiplied the document-emotion matrix and word-document matrix to derive emotion-word matrix for these words. Due to limitations of their experiment setup, the emotion categories that they present does not exactly match the emotions from the Plutchik wheel categories. However, they still provide a good sense of the general feeling of an individual after reading an article. The emotion categories of Depechemood are: AFRAID, AMUSED, ANGRY, ANNOYED, DON'T CARE, HAPPY, INSPIRED, SAD. Depechemood simply creates dictionaries of words where each word has scores between 0 and 1 for all of these 8 emotion categories. We present our finding using this approach in the result section."
],
"extractive_spans": [],
"free_form_answer": "By multiplying crowd-annotated document-emotion matrix with emotion-word matrix. ",
"highlighted_evidence": [
"Depechemood is a lexicon-based emotion detection method gathered from crowd-annotated news BIBREF24. Drawing on approximately 23.5K documents with average of 500 words per document from rappler.com, researchers asked subjects to report their emotions after reading each article. They then multiplied the document-emotion matrix and word-document matrix to derive emotion-word matrix for these words. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Depechemood is a lexicon-based emotion detection method gathered from crowd-annotated news BIBREF24. Drawing on approximately 23.5K documents with average of 500 words per document from rappler.com, researchers asked subjects to report their emotions after reading each article. They then multiplied the document-emotion matrix and word-document matrix to derive emotion-word matrix for these words. Due to limitations of their experiment setup, the emotion categories that they present does not exactly match the emotions from the Plutchik wheel categories. However, they still provide a good sense of the general feeling of an individual after reading an article. The emotion categories of Depechemood are: AFRAID, AMUSED, ANGRY, ANNOYED, DON'T CARE, HAPPY, INSPIRED, SAD. Depechemood simply creates dictionaries of words where each word has scores between 0 and 1 for all of these 8 emotion categories. We present our finding using this approach in the result section."
],
"extractive_spans": [
"researchers asked subjects to report their emotions after reading each article",
"multiplied the document-emotion matrix and word-document matrix to derive emotion-word matrix for these words",
"Depechemood simply creates dictionaries of words where each word has scores between 0 and 1 for all of these 8 emotion categories"
],
"free_form_answer": "",
"highlighted_evidence": [
"Depechemood is a lexicon-based emotion detection method gathered from crowd-annotated news BIBREF24. Drawing on approximately 23.5K documents with average of 500 words per document from rappler.com, researchers asked subjects to report their emotions after reading each article. They then multiplied the document-emotion matrix and word-document matrix to derive emotion-word matrix for these words. Due to limitations of their experiment setup, the emotion categories that they present does not exactly match the emotions from the Plutchik wheel categories. However, they still provide a good sense of the general feeling of an individual after reading an article. The emotion categories of Depechemood are: AFRAID, AMUSED, ANGRY, ANNOYED, DON'T CARE, HAPPY, INSPIRED, SAD. Depechemood simply creates dictionaries of words where each word has scores between 0 and 1 for all of these 8 emotion categories. We present our finding using this approach in the result section."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"8e6e6c9aa759b8387140212028bfc1aa6011329e",
"c7597daad8f2cd5214e56afd72b27d5b209a9376"
],
"answer": [
{
"evidence": [
"What similarities and/or differences do these topics have with non-violent, non-Islamic religious material addressed specifically to women?",
"As these questions suggest, to understand what, if anything, makes extremist appeals distinctive, we need a point of comparison in terms of the outreach efforts to women from a mainstream, non-violent religious group. For this purpose, we rely on an online Catholic women's forum. Comparison between Catholic material and the content of ISIS' online magazines allows for novel insight into the distinctiveness of extremist rhetoric when targeted towards the female population. To accomplish this task, we employ topic modeling and an unsupervised emotion detection method."
],
"extractive_spans": [],
"free_form_answer": "By using topic modeling and unsupervised emotion detection on ISIS materials and articles from Catholic women forum",
"highlighted_evidence": [
"What similarities and/or differences do these topics have with non-violent, non-Islamic religious material addressed specifically to women?",
"As these questions suggest, to understand what, if anything, makes extremist appeals distinctive, we need a point of comparison in terms of the outreach efforts to women from a mainstream, non-violent religious group. For this purpose, we rely on an online Catholic women's forum. Comparison between Catholic material and the content of ISIS' online magazines allows for novel insight into the distinctiveness of extremist rhetoric when targeted towards the female population. To accomplish this task, we employ topic modeling and an unsupervised emotion detection method."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Results ::: Emotion Analysis",
"We rely on Depechemood dictionaries to analyze emotions in both corpora. These dictionaries are freely available and come in multiple arrangements. We used a version that includes words with their part of speech (POS) tags. Only words that exist in the Depechemood dictionary with the same POS tag are considered for our analysis. We aggregated the score for each word and normalized each article by emotions. To better compare the result, we added a baseline of 100 random articles from a Reuters news dataset as a non-religious general resource which is available in an NLTK python library. Figure FIGREF22 shows the aggregated score for different feelings in our corpora.",
"Results ::: Content Analysis",
"After pre-processing the text, both corpora were analyzed for word frequencies. These word frequencies have been normalized by the number of words in each corpus. Figure FIGREF17 shows the most common words in each of these corpora.",
"A comparison of common words suggests that those related to marital relationships ( husband, wife, etc.) appear in both corpora, but the religious theme of ISIS material appears to be stronger. A stronger comparison can be made using topic modeling techniques to discover main topics of these documents. Although we used LDA, our results by using NMF outperform LDA topics, due to the nature of these corpora. Also, fewer numbers of ISIS documents might contribute to the comparatively worse performance. Therefore, we present only NMF results. Based on their coherence, we selected 10 topics for analyzing within both corporas. Table TABREF18 and Table TABREF19 show the most important words in each topic with a general label that we assigned to the topic manually. Based on the NMF output, ISIS articles that address women include topics mainly about Islam, women's role in early Islam, hijrah (moving to another land), spousal relations, marriage, and motherhood."
],
"extractive_spans": [
"A comparison of common words",
"We aggregated the score for each word and normalized each article by emotions. To better compare the result, we added a baseline of 100 random articles from a Reuters news dataset as a non-religious general resource"
],
"free_form_answer": "",
"highlighted_evidence": [
"Results ::: Emotion Analysis\nWe rely on Depechemood dictionaries to analyze emotions in both corpora. These dictionaries are freely available and come in multiple arrangements. We used a version that includes words with their part of speech (POS) tags. Only words that exist in the Depechemood dictionary with the same POS tag are considered for our analysis. We aggregated the score for each word and normalized each article by emotions. To better compare the result, we added a baseline of 100 random articles from a Reuters news dataset as a non-religious general resource which is available in an NLTK python library.",
"Results ::: Content Analysis\nAfter pre-processing the text, both corpora were analyzed for word frequencies. These word frequencies have been normalized by the number of words in each corpus. Figure FIGREF17 shows the most common words in each of these corpora.\n\nA comparison of common words suggests that those related to marital relationships ( husband, wife, etc.) appear in both corpora, but the religious theme of ISIS material appears to be stronger."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"c030d473cb65beb0c87c3f430b3573f84a4a460a",
"cb26f5bd6c188635e1a4cbc7475793c445c3368a"
],
"answer": [
{
"evidence": [
"Topic modeling methods are the more powerful technique for understanding the contents of a corpus. These methods try to discover abstract topics in a corpus and reveal hidden semantic structures in a collection of documents. The most popular topic modeling methods use probabilistic approaches such as probabilistic latent semantic analysis (PLSA) and latent Dirichlet allocation (LDA). LDA is a generalization of pLSA where documents are considered as a mixture of topics and the distribution of topics is governed by a Dirichlet prior ($\\alpha $). Figure FIGREF12 shows plate notation of general LDA structure where $\\beta $ represents prior of word distribution per topic and $\\theta $ refers to topics distribution for documents BIBREF19. Since LDA is among the most widely utilized algorithms for topic modeling, we applied it to our data. However, the coherence of the topics produced by LDA is poorer than expected.",
"To address this lack of coherence, we applied non-negative matrix factorization (NMF). This method decomposes the term-document matrix into two non-negative matrices as shown in Figure FIGREF13. The resulting non-negative matrices are such that their product closely approximate the original data. Mathematically speaking, given an input matrix of document-terms $V$, NMF finds two matrices by solving the following equation BIBREF20:"
],
"extractive_spans": [
"LDA",
"non-negative matrix factorization (NMF)"
],
"free_form_answer": "",
"highlighted_evidence": [
"However, the coherence of the topics produced by LDA is poorer than expected.\n\nTo address this lack of coherence, we applied non-negative matrix factorization (NMF)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"A comparison of common words suggests that those related to marital relationships ( husband, wife, etc.) appear in both corpora, but the religious theme of ISIS material appears to be stronger. A stronger comparison can be made using topic modeling techniques to discover main topics of these documents. Although we used LDA, our results by using NMF outperform LDA topics, due to the nature of these corpora. Also, fewer numbers of ISIS documents might contribute to the comparatively worse performance. Therefore, we present only NMF results. Based on their coherence, we selected 10 topics for analyzing within both corporas. Table TABREF18 and Table TABREF19 show the most important words in each topic with a general label that we assigned to the topic manually. Based on the NMF output, ISIS articles that address women include topics mainly about Islam, women's role in early Islam, hijrah (moving to another land), spousal relations, marriage, and motherhood."
],
"extractive_spans": [],
"free_form_answer": "Using NMF based topic modeling and their coherence prominent topics are identified",
"highlighted_evidence": [
"Therefore, we present only NMF results. Based on their coherence, we selected 10 topics for analyzing within both corporas. ",
"A stronger comparison can be made using topic modeling techniques to discover main topics of these documents. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"",
"",
"",
"",
""
],
"question": [
"Do they report results only on English data?",
"What conclusions do the authors draw from their finding that the emotional appeal of ISIS and Catholic materials are similar?",
"How id Depechemood trained?",
"How are similarities and differences between the texts from violent and non-violent religious groups analyzed?",
"How are prominent topics idenified in Dabiq and Rumiyah?"
],
"question_id": [
"2ccc26e11df4eb26fcccdd1f446dc749aff5d572",
"f318a2851d7061f05a5b32b94251f943480fbd15",
"6bbbb9933aab97ce2342200447c6322527427061",
"2007bfb8f66e88a235c3a8d8c0a3b3dd88734706",
"d859cc37799a508bbbe4270ed291ca6394afce2c"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
""
]
} | {
"caption": [
"Fig. 1: Plate notation of LDA model",
"Fig. 2: NMF decomposition of document-term matrix [11]",
"Fig. 3: 2D representation of Plutchik wheel of emotions [16]",
"Table 1: NMF Topics of women in ISIS",
"Table 2: NMF Topics of women in catholic forum",
"Fig. 4: Word frequency of most common words in catholic corpora",
"Fig. 5: Word frequency of most common words in dabiq corpora",
"Fig. 6: Comparison of emotions of our both corpora along with Reuters news",
"Fig. 7: Feeling detected in ISIS magazines (first 7 issues belong to Dabiq and last 13 belong to Rumiyah)",
"Table 3: Words with highest inspiring scores"
],
"file": [
"7-Figure1-1.png",
"7-Figure2-1.png",
"8-Figure3-1.png",
"9-Table1-1.png",
"10-Table2-1.png",
"11-Figure4-1.png",
"11-Figure5-1.png",
"13-Figure6-1.png",
"14-Figure7-1.png",
"14-Table3-1.png"
]
} | [
"What conclusions do the authors draw from their finding that the emotional appeal of ISIS and Catholic materials are similar?",
"How id Depechemood trained?",
"How are similarities and differences between the texts from violent and non-violent religious groups analyzed?",
"How are prominent topics idenified in Dabiq and Rumiyah?"
] | [
[
"1912.03804-Conclusion and Future Work-2",
"1912.03804-Results ::: Emotion Analysis-0",
"1912.03804-Results ::: Emotion Analysis-1"
],
[
"1912.03804-Method ::: Emotion detection-3"
],
[
"1912.03804-Results ::: Content Analysis-0",
"1912.03804-Results ::: Content Analysis-1",
"1912.03804-Results ::: Emotion Analysis-0",
"1912.03804-Introduction-6",
"1912.03804-Introduction-4"
],
[
"1912.03804-Method ::: Content Analysis-2",
"1912.03804-Method ::: Content Analysis-1",
"1912.03804-Results ::: Content Analysis-1"
]
] | [
"By comparing scores for each word calculated using Depechemood dictionary and normalize emotional score for each article, they found Catholic and ISIS materials show similar scores",
"By multiplying crowd-annotated document-emotion matrix with emotion-word matrix. ",
"By using topic modeling and unsupervised emotion detection on ISIS materials and articles from Catholic women forum",
"Using NMF based topic modeling and their coherence prominent topics are identified"
] | 36 |
1912.08960 | Going Beneath the Surface: Evaluating Image Captioning for Grammaticality, Truthfulness and Diversity | Image captioning as a multimodal task has drawn much interest in recent years. However, evaluation for this task remains a challenging problem. Existing evaluation metrics focus on surface similarity between a candidate caption and a set of reference captions, and do not check the actual relation between a caption and the underlying visual content. We introduce a new diagnostic evaluation framework for the task of image captioning, with the goal of directly assessing models for grammaticality, truthfulness and diversity (GTD) of generated captions. We demonstrate the potential of our evaluation framework by evaluating existing image captioning models on a wide ranging set of synthetic datasets that we construct for diagnostic evaluation. We empirically show how the GTD evaluation framework, in combination with diagnostic datasets, can provide insights into model capabilities and limitations to supplement standard evaluations. | {
"paragraphs": [
[
"Automatically generating text to describe the content of images, also known as image captioning, is a multimodal task of considerable interest in both the computer vision and the NLP communities. Image captioning can be framed as a translation task from an image to a descriptive natural language statement. Many existing captioning models BIBREF0, BIBREF1, BIBREF2, BIBREF3 follow the typical encoder-decoder framework where a convolutional network is used to condense images into visual feature representations, combined with a recurrent network for language generation. While these models demonstrate promising results, quantifying image captioning performance remains a challenging problem, in a similar way to other generative tasks BIBREF4, BIBREF5.",
"Evaluating candidate captions for human preference is slow and laborious. To alleviate this problem, many automatic evaluation metrics have been proposed, such as BLEU BIBREF6, METEOR BIBREF7, ROUGE BIBREF8 and CIDEr BIBREF9. These n-gram-based metrics evaluate captioning performance based on surface similarity between a candidate caption and reference statements. A more recent evaluation metric for image captioning is SPICE BIBREF10, which takes into account semantic propositional content of generated captions by scoring a caption based upon a graph-based semantic representation transformed from reference captions.",
"The rationale behind these evaluation metrics is that human reference captions serve as an approximate target and comparing model outputs to this target is a proxy for how well a system performs. Thus, a candidate caption is not directly evaluated with respect to image content, but compared to a set of human statements about that image.",
"However, in image captioning, visual scenes with multiple objects and relations correspond to a diversity of valid descriptions. Consider the example image and captions from the ShapeWorld framework BIBREF11 shown in Figure FIGREF1. The first three captions are true statements about the image and express relevant ideas, but describe different objects, attributes and spatial relationships, while the fourth caption is wrong despite referring to the same objects as in the third caption. This casts doubt on the sufficiency of using a set of reference captions to approximate the content of an image. We argue that, while existing metrics have undeniably been useful for real-world captioning evaluation, their focus on approximate surface comparison limits deeper insights into the learning process and eventual behavior of captioning models.",
"To address this problem, we propose a set of principled evaluation criteria which evaluate image captioning models for grammaticality, truthfulness and diversity (GTD). These criteria correspond to necessary requirements for image captioning systems: (a) that the output is grammatical, (b) that the output statement is true with respect to the image, and (c) that outputs are diverse and mirror the variability of training captions.",
"Practical evaluation of GTD is currently only possible on synthetic data. We construct a range of datasets designed for image captioning evaluation. We call this diagnostic evaluation benchmark ShapeWorldICE (ShapeWorld for Image Captioning Evaluation). We illustrate the evaluation of specific image captioning models on ShapeWorldICE. We empirically demonstrate that the existing metrics BLEU and SPICE do not capture true caption-image agreement in all scenarios, while the GTD framework allows a fine-grained investigation of how well existing models cope with varied visual situations and linguistic constructions.",
"We believe that as a supplementary evaluation method to real-world metrics, the GTD framework provides evaluation insights that are sufficiently interesting to motivate future work."
],
[
"As a natural language generation task, image captioning frequently uses evaluation metrics such as BLEU BIBREF6, METEOR BIBREF7, ROUGE BIBREF8 and CIDEr BIBREF9. These metrics use n-gram similarity between the candidate caption and reference captions to approximate the correlation between a candidate caption and the associated ground truth. SPICE BIBREF10 is a more recent metric specifically designed for image captioning. For SPICE, both the candidate caption and reference captions are parsed to scene graphs, and the agreement between tuples extracted from these scene graphs is examined. SPICE more closely relates to our truthfulness evaluation than the other metrics, but it still uses overlap comparison to reference captions as a proxy to ground truth. In contrast, our truthfulness metric directly evaluates a candidate caption against a model of the actual visual content.",
"Many researchers have pointed out problems with existing reference-based metrics including low correlations with human judgment BIBREF12, BIBREF10, BIBREF13 and strong baselines using nearest-neighbor methods BIBREF14 or relying solely on object detection BIBREF15. Fundamental concerns have been raised with respect to BLEU, including variability in parameterization and precise score calculation leading to significantly different results BIBREF16. Its validity as a metric for tasks other than machine translation has been questioned BIBREF17, particularly for tasks for which the output content is not narrowly constrained, like dialogue BIBREF18.",
"Some recent work focuses on increasing the diversity of generated captions, for which various measures are proposed. Devlin et al. BIBREF19 explored the concept of caption diversity by evaluating performance on compositionally novel images. van Miltenburg et al BIBREF20 framed image captioning as a word recall task and proposed several metrics, predominantly focusing on diversity at the word level. However, this direction is still relatively new and lacks standardized benchmarks and metrics."
],
[
"Recently, many synthetic datasets have been proposed as diagnostic tools for deep learning models, such as CLEVR BIBREF21 for visual question answering (VQA), the bAbI tasks BIBREF22 for text understanding and reasoning, and ShapeWorld BIBREF11 for visually grounded language understanding. The primary motivation is to reduce complexity which is considered irrelevant to the evaluation focus, to enable better control over the data, and to provide more detailed insights into strengths and limitations of existing models.",
"In this work, we develop the evaluation datasets within the ShapeWorld framework. ShapeWorld is a controlled data generation framework consisting of abstract colored shapes (see Figure FIGREF1 for an example). We use ShapeWorld to generate training and evaluation data for two major reasons. ShapeWorld supports customized data generation according to user specification, which enables a variety of model inspections in terms of language construction, visual complexity and reasoning ability. Another benefit is that each training and test instance generated in ShapeWorld is returned as a triplet of $<$image, caption, world model$>$. The world model stores information about the underlying microworld used to generate an image and a descriptive caption, internally represented as a list of entities with their attributes, such as shape, color, position. During data generation, ShapeWorld randomly samples a world model from a set of available entities and attributes. The generated world model is then used to realize a corresponding instance consisting of image and caption. The world model gives the actual semantic information contained in an image, which allows evaluation of caption truthfulness."
],
[
"In the following we introduce GTD in more detail, consider it as an evaluation protocol covering necessary aspects of the multifaceted captioning task, rather than a specific metric."
],
[
"An essential criterion for an image captioning model is that the captions generated are grammatically well-formed. Fully accurate assessment of grammaticality in a general context is itself a difficult task, but becomes more feasible in a very constrained context like our diagnostic language data. We take parseability with the English Resource Grammar BIBREF23 as a surrogate for grammaticality, meaning that a sentence is considered grammatically well-formed if we obtain a parse using the ERG.",
"The ERG is a broad-coverage grammar based on the head-driven phrase structure grammar (HPSG) framework. It is linguistically precise: sentences only parse if they are valid according to its hand-built rules. It is designed to be general-purpose: verified coverage is around 80% for Wikipedia, and over 90% for corpora with shorter sentences and more limited vocabulary (for details see BIBREF24 flickinger2011accuracy). Since the ShapeWorld training data – the only language source for models to learn from – is generated using the same grammar, the ERG has $\\sim $100% coverage of grammaticality in the model output space."
],
[
"The second aspect we investigate is truthfulness, that is, whether a candidate caption is compatible with the content of the image it is supposed to describe. We evaluate caption truthfulness on the basis of a linguistically-motivated approach using formal semantics. We convert the output of the ERG parse for a grammatical caption to a Dependency Minimal Recursion Semantics (DMRS) graph using the pydmrs tool BIBREF25. Each converted DMRS is a logical semantic graph representation corresponding to the caption. We construct a logical proposition from the DMRS graph, and evaluate it against the actual world model of the corresponding image. A caption can be said to agree with an image only if the proposition evaluates as true on the basis of the world model. By examining the logical agreement between a caption representation and a world model, we can check whether the semantics of this caption agrees with the visual content which the world model represents. Thus we do not rely on a set of captions as a surrogate for the content of an image, but instead leverage the fact that we have the ground truth, thus enabling the evaluation of true image-caption agreement."
],
[
"While grammaticality and truthfulness are essential requirements for image captions, these criteria alone can easily be “gamed” by specializing on a small set of generic statements which are true most of the time. In the context of abstract shapes, such captions include examples like “There is a shape” or “At least zero shapes are blue” (which is technically true even if there is no blue shape). This motivates the third fundamental requirement of captioning output to be diverse.",
"As ShapeWorldICE exploits a limited size of open-class words, we emphasize the diversity in ShapeWorldICE at the sentence level rather than the word level. Since the ground-truth reference captions in ShapeWorld are randomly sampled, we take the sampled captions accompanying the test images as a proxy for optimal caption diversity, and compare it with the empirical output diversity of the evaluated model on these test images. Practically, we look at language constructions used and compute the corresponding diversity score as the ratio of observed number versus optimal number:",
"Language constructions here correspond to reduced caption representations which only record whether an object is described by shape (e.g., “square”), color (e.g., “red shape”) or color-shape combination (e.g., “red square”). So the statement “A square is red” and “A circle is blue” are considered the same, while “A shape is red” is different."
],
[
"We develop a variety of ShapeWorldICE datasets, with a similar idea to the “skill tasks” in the bAbI framework BIBREF22. Table TABREF4 gives an overview for different ShapeWorldICE datasets we use in this paper. We consider three different types of captioning tasks, each of which focuses on a distinct aspect of reasoning abilities. Existential descriptions examine whether a certain object is present in an image. Spatial descriptions identify spatial relationships among visual objects. Quantification descriptions involve count-based and ratio-based statements, with an explicit focus on inspecting models for their counting ability. We develop two variants for each type of dataset to enable different levels of visual complexity or specific aspects of the same reasoning type. All the training and test captions sampled in this work are in English.",
"Each dataset variant consists of around 200k training instances and 4,096 validation instances, plus 4,096 test instances. Each training instance consists of an image and a reference caption. At test time, only the test images are available to the evaluated models. Underlying world models are kept from the models and are used for later GTD evaluation. For each test instance, we sample ten reference captions of the same distribution as the training captions to enable the comparison of our proposed metrics to BLEU and SPICE. We fine-tune our model hyperparameters based on the performance on the validation set. All reported results are measured on the test split with the parameters yielding the best validation performance."
],
[
"We experiment with two image captioning models: the Show&Tell model BIBREF0 and the LRCN1u model BIBREF1. Both models follow the basic encoder-decoder architecture design that uses a CNN encoder to condense the visual information into an image embedding, which in turn conditions an LSTM decoder to generate a natural language caption. The main difference between the two models is the way they condition the decoder. The Show&Tell model feeds the image embedding as the “predecessor word embedding” to the first produced word, while the LRCN1u model concatenates the image features with the embedded previous word as the input to the sequence model at each time step.",
"We follow the common practice in image captioning to use a CNN component pretrained on object detection and fine-tune its parameters on the image captioning task. The encoder and decoder components are jointly optimized with respect to the standard cross-entropy sequence loss on the respective ShapeWorldICE dataset. For all our experiments, we train models end-to-end for a fixed number of 100k iterations with a batch size of 64. We use Adam optimization BIBREF26 with a learning rate of 0.001. Word embeddings are randomly initialized and jointly trained during the training."
],
[
"We train and evaluate the Show&Tell and LRCN1u models on the ShapeWorldICE datasets. Here we discuss in detail the diagnostic results of these experiments. During training, we periodically record model output on the test images, to be able to analyze the development of our evaluation metrics throughout the process. We also compute BLEU-4 scores and SPICE scores of generated captions for comparison, using 10 reference captions per test image.",
"LRCN1u exhibits clearly superior performance in terms of truthfulness. We start off by comparing performance of the Show&Tell model and the LRCN1u model, see Figure FIGREF8. While both models learn to produce grammatical sentences early on, it can be seen that LRCN1u is clearly superior in terms of truthfulness, achieving 100% halfway through training, whereas Show&Tell only slowly reaches around 90% by the end of 100k iterations. This indicates that incorporating visual features at every generation step is beneficial for producing true captions. The diversity ratios of captions generated by two models both increase substantially as the training progresses, with LRCN1u exhibiting a slightly greater caption diversity at the end of training.",
"We observed similar results on other ShapeWorldICE datasets that we experimented with, validating the superiority of LRCN1u over Show&Tell on ShapeWorldICE. Consequently, we decided to focus on the LRCN1u architecture in subsequent evaluations, where we report detailed results with respect to the GTD framework on a variety of datasets.",
"Correlation between the BLEU/SPICE scores and the ground truth. From the learning curves shown in Figure FIGREF9, we find low or no correlation between the BLEU/SPICE scores and caption truthfulness.",
"On Existential-OneShape, the BLEU curve follows the trend of the truthfulness curve in general, indicating that BLEU is able to capture caption truthfulness well in this simple scenario. However, while BLEU reports equivalent model performance on Existential-MultiShapes and Spatial-MultiShapes, the truthfulness metric demonstrates very different results. The BLEU score for generated Existential-MultiShapes captions increases rapidly at the beginning of training and then plateaus despite the continuous increase in truthfulness ratio. Captions generated on Spatial-MultiShapes attain a relatively high BLEU score from an early stage of training, but exhibit low agreement ($<$0.6 truthfulness ratio) with ground-truth visual scenes. In the case of Spatial-MultiShapes, spatial descriptors for two objects are chosen from a fixed set (“above”, “below”, “to the left of” and “to the right of”). It is very likely for a generated spatial descriptor to match one of the descriptors mentioned in reference captions. In this particular case, the model is apt to infer a caption which has high n-gram overlaps with reference captions, resulting in a relatively high BLEU score. Thus an increased BLEU score does not necessarily indicate improved performance.",
"While the truthfulness and BLEU scores in Figure FIGREF9 both increase rapidly early on and then stay stable at a high rate after training for 20k iterations, the SPICE curve instead shows a downward trend in the later stage of training. We examined the output SPICE score for each test instance. SPICE reports a precision score of 1.0 for most test instances after 20k iterations, which is consistent with the truthfulness and BLEU scores. However, SPICE forms the reference scene graph as the union of the scene graphs extracted from individual reference captions, thus introducing redundancies. SPICE uses the F1 score of scene graph matching between the candidate and reference and hence is lowered by imperfect recall.",
"Comparing SPICE curves for three datasets shown in Figure FIGREF9-FIGREF9, they suggest an increase in task complexity, but they do not reflect the successively closing gap of caption truthfulness scores between two Existential datasets, or the substantial difference in caption truthfulness between captions on Existential-MultiShapes and Spatial-MultiShapes.",
"In the remainder of the paper we discuss in detail the diagnostic results of the LRCN1u model demonstrated by the GTD evaluation framework.",
"Perfect grammaticality for all caption types. As shown in Figure FIGREF15, generated captions for all types of ShapeWorldICE datasets attain quasi-perfect grammaticality scores in fewer than 5,000 iterations, suggesting that the model quickly learns to generate grammatically well-formed sentences.",
"Failure to learn complex spatial relationships. While CNNs can produce rich visual representations that can be used for a variety of vision tasks BIBREF27, it remains an open question whether these condensed visual representations are rich enough for multimodal tasks that require higher-level abilities of scene understanding and visual reasoning. From Figure FIGREF16, we can see that while the model performs rather well on Existential datasets, it exhibits a worse performance on Spatial data. The caption agreement ratio in the simple Spatial-TwoShapes scenario is relatively high, but drops significantly on Spatial-MultiShapes, demonstrating the deficiencies of the model in learning spatial relationships from complex visual scenes.",
"The counting task is non-trivial. Counting has long been considered to be a challenging task in multimodal reasoning BIBREF28, BIBREF29. To explore how well the LRCN1u model copes with counting tasks, we generated two Quantification datasets. The Quant-Count captions describe the number of objects with certain attributes that appear in an image (e.g. “Exactly four shapes are crosses”), while the Quant-Ratio captions describe the ratio of certain objects (e.g. “A third of the shapes are blue squares”).",
"From Figure FIGREF16, we notice that the LRCN1u model performs poorly on these datasets in terms of truthfulness, reflected in the 0.50 and 0.46 scores achieved by the model on the Quant-Count and Quant-Ratio tasks respectively. The learning curve for Quant-Ratio exhibits a more gradual rise as the training progresses, suggesting a greater complexity for the ratio-based task.",
"Caption diversity benefits from varied language constructions in the training data. The diversity ratios of generated captions for different ShapeWorldICE datasets are illustrated in Figure FIGREF17. We can see that the diversity of inferred captions is largely sensitive to the caption variability in the dataset itself. For simple datasets (such as Existential-OneShape) where language constructions in the training set are less diverse, the output captions tend to have uniform sentence structures. The high diversity ratios of generated Spatial and Quantification captions suggest that caption diversity benefits from heterogeneous language constructions in complex datasets."
],
[
"Evaluation metrics are required as a proxy for performance in real applications. As such, they should, as far as possible, allow measurement of fundamental aspects of the performance of models on tasks. In this work, we propose the GTD evaluation framework as a supplement to standard image captioning evaluation which explicitly focuses on grammaticality, truthfulness and diversity. We developed the ShapeWorldICE evaluation suite to allow in-depth and fine-grained inspection of model behaviors. We have empirically verified that GTD captures different aspects of performance to existing metrics by evaluating image captioning models on the ShapeWorldICE suite. We hope that this framework will shed light on important aspects of model behaviour and that this will help guide future research efforts.",
"While performing the evaluation experiments on the LRCN1u model, we noticed that caption agreement does not always improve as the training loss decreases. Ideally, the training objective should be in accordance with how a model is eventually evaluated. In future work, we plan to investigate the feasibility of deliberately encoding the GTD signal in the training process, for instance, by implementing a GTD-aware loss. We also plan to extend the existing ShapeWorldICE benchmark to include more linguistic constructions (such as relative clauses, compound sentences and coreference). By doing so, we hope to reveal how well existing image captioning models cope with complex generation tasks."
],
[
"We thank the anonymous reviewers for their constructive feedback. HX is grateful for being supported by the CSC Cambridge Scholarship. TS is supported in part by the EPSRC Centre for Doctoral Training in Data Science, funded by the EPSRC (grant EP/L016427/1) and the University of Edinburgh. AK is grateful for being supported by a Qualcomm Research Studentship and an EPSRC Doctoral Training Studentship."
]
],
"section_name": [
"Introduction",
"Related work ::: Existing evaluation of image captioning",
"Related work ::: Synthetic datasets",
"GTD Evaluation Framework",
"GTD Evaluation Framework ::: Grammaticality",
"GTD Evaluation Framework ::: Truthfulness",
"GTD Evaluation Framework ::: Diversity",
"Experimental Setup ::: Datasets",
"Experimental Setup ::: Models",
"Results",
"Discussions and Conclusions",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"693d023e6f0732de99b13d858fb5e6db73186df9",
"e8557acf29f36327032e24b3f6d5676abdcce1be"
],
"answer": [
{
"evidence": [
"Practical evaluation of GTD is currently only possible on synthetic data. We construct a range of datasets designed for image captioning evaluation. We call this diagnostic evaluation benchmark ShapeWorldICE (ShapeWorld for Image Captioning Evaluation). We illustrate the evaluation of specific image captioning models on ShapeWorldICE. We empirically demonstrate that the existing metrics BLEU and SPICE do not capture true caption-image agreement in all scenarios, while the GTD framework allows a fine-grained investigation of how well existing models cope with varied visual situations and linguistic constructions."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Practical evaluation of GTD is currently only possible on synthetic data. We construct a range of datasets designed for image captioning evaluation. We call this diagnostic evaluation benchmark ShapeWorldICE (ShapeWorld for Image Captioning Evaluation). We illustrate the evaluation of specific image captioning models on ShapeWorldICE."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"In this work, we develop the evaluation datasets within the ShapeWorld framework. ShapeWorld is a controlled data generation framework consisting of abstract colored shapes (see Figure FIGREF1 for an example). We use ShapeWorld to generate training and evaluation data for two major reasons. ShapeWorld supports customized data generation according to user specification, which enables a variety of model inspections in terms of language construction, visual complexity and reasoning ability. Another benefit is that each training and test instance generated in ShapeWorld is returned as a triplet of $<$image, caption, world model$>$. The world model stores information about the underlying microworld used to generate an image and a descriptive caption, internally represented as a list of entities with their attributes, such as shape, color, position. During data generation, ShapeWorld randomly samples a world model from a set of available entities and attributes. The generated world model is then used to realize a corresponding instance consisting of image and caption. The world model gives the actual semantic information contained in an image, which allows evaluation of caption truthfulness."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"In this work, we develop the evaluation datasets within the ShapeWorld framework. ShapeWorld is a controlled data generation framework consisting of abstract colored shapes (see Figure FIGREF1 for an example)."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
]
},
{
"annotation_id": [
"3ac0d42b1c169429408c4dd0f053fe96d71134ea",
"5c19b79e67b258f53e86e89c8712f05a885c4fde"
],
"answer": [
{
"evidence": [
"We develop a variety of ShapeWorldICE datasets, with a similar idea to the “skill tasks” in the bAbI framework BIBREF22. Table TABREF4 gives an overview for different ShapeWorldICE datasets we use in this paper. We consider three different types of captioning tasks, each of which focuses on a distinct aspect of reasoning abilities. Existential descriptions examine whether a certain object is present in an image. Spatial descriptions identify spatial relationships among visual objects. Quantification descriptions involve count-based and ratio-based statements, with an explicit focus on inspecting models for their counting ability. We develop two variants for each type of dataset to enable different levels of visual complexity or specific aspects of the same reasoning type. All the training and test captions sampled in this work are in English.",
"FLOAT SELECTED: Table 1: Sample captions and images from ShapeWorldICE datasets (truthful captions in blue, false in red). Images from Existential-OneShape only contain one object, while images from Spatial-TwoShapes contain two objects. Images from the other four datasets follow the same distribution with multiple abstract objects present in a visual scene."
],
"extractive_spans": [],
"free_form_answer": "Existential (OneShape, MultiShapes), Spacial (TwoShapes, Multishapes), Quantification (Count, Ratio) datasets are generated from ShapeWorldICE",
"highlighted_evidence": [
"We develop a variety of ShapeWorldICE datasets, with a similar idea to the “skill tasks” in the bAbI framework BIBREF22. Table TABREF4 gives an overview for different ShapeWorldICE datasets we use in this paper.",
"FLOAT SELECTED: Table 1: Sample captions and images from ShapeWorldICE datasets (truthful captions in blue, false in red). Images from Existential-OneShape only contain one object, while images from Spatial-TwoShapes contain two objects. Images from the other four datasets follow the same distribution with multiple abstract objects present in a visual scene."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Practical evaluation of GTD is currently only possible on synthetic data. We construct a range of datasets designed for image captioning evaluation. We call this diagnostic evaluation benchmark ShapeWorldICE (ShapeWorld for Image Captioning Evaluation). We illustrate the evaluation of specific image captioning models on ShapeWorldICE. We empirically demonstrate that the existing metrics BLEU and SPICE do not capture true caption-image agreement in all scenarios, while the GTD framework allows a fine-grained investigation of how well existing models cope with varied visual situations and linguistic constructions.",
"We develop a variety of ShapeWorldICE datasets, with a similar idea to the “skill tasks” in the bAbI framework BIBREF22. Table TABREF4 gives an overview for different ShapeWorldICE datasets we use in this paper. We consider three different types of captioning tasks, each of which focuses on a distinct aspect of reasoning abilities. Existential descriptions examine whether a certain object is present in an image. Spatial descriptions identify spatial relationships among visual objects. Quantification descriptions involve count-based and ratio-based statements, with an explicit focus on inspecting models for their counting ability. We develop two variants for each type of dataset to enable different levels of visual complexity or specific aspects of the same reasoning type. All the training and test captions sampled in this work are in English.",
"FLOAT SELECTED: Table 1: Sample captions and images from ShapeWorldICE datasets (truthful captions in blue, false in red). Images from Existential-OneShape only contain one object, while images from Spatial-TwoShapes contain two objects. Images from the other four datasets follow the same distribution with multiple abstract objects present in a visual scene."
],
"extractive_spans": [],
"free_form_answer": "ShapeWorldICE datasets: OneShape, MultiShapes, TwoShapes, MultiShapes, Count, and Ratio",
"highlighted_evidence": [
"Practical evaluation of GTD is currently only possible on synthetic data. We construct a range of datasets designed for image captioning evaluation. We call this diagnostic evaluation benchmark ShapeWorldICE (ShapeWorld for Image Captioning Evaluation). We illustrate the evaluation of specific image captioning models on ShapeWorldICE.",
"We develop a variety of ShapeWorldICE datasets, with a similar idea to the “skill tasks” in the bAbI framework BIBREF22. Table TABREF4 gives an overview for different ShapeWorldICE datasets we use in this paper.",
"FLOAT SELECTED: Table 1: Sample captions and images from ShapeWorldICE datasets (truthful captions in blue, false in red). Images from Existential-OneShape only contain one object, while images from Spatial-TwoShapes contain two objects. Images from the other four datasets follow the same distribution with multiple abstract objects present in a visual scene."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
]
},
{
"annotation_id": [
"59e8d5c050d515ca95053b1e452d5df3be37bba9",
"f574366f8e2c4652d6048c11f5242fc85ebf5af8"
],
"answer": [
{
"evidence": [
"We experiment with two image captioning models: the Show&Tell model BIBREF0 and the LRCN1u model BIBREF1. Both models follow the basic encoder-decoder architecture design that uses a CNN encoder to condense the visual information into an image embedding, which in turn conditions an LSTM decoder to generate a natural language caption. The main difference between the two models is the way they condition the decoder. The Show&Tell model feeds the image embedding as the “predecessor word embedding” to the first produced word, while the LRCN1u model concatenates the image features with the embedded previous word as the input to the sequence model at each time step."
],
"extractive_spans": [
"Show&Tell and LRCN1u"
],
"free_form_answer": "",
"highlighted_evidence": [
"We experiment with two image captioning models: the Show&Tell model BIBREF0 and the LRCN1u model BIBREF1. Both models follow the basic encoder-decoder architecture design that uses a CNN encoder to condense the visual information into an image embedding, which in turn conditions an LSTM decoder to generate a natural language caption. The main difference between the two models is the way they condition the decoder. The Show&Tell model feeds the image embedding as the “predecessor word embedding” to the first produced word, while the LRCN1u model concatenates the image features with the embedded previous word as the input to the sequence model at each time step."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We experiment with two image captioning models: the Show&Tell model BIBREF0 and the LRCN1u model BIBREF1. Both models follow the basic encoder-decoder architecture design that uses a CNN encoder to condense the visual information into an image embedding, which in turn conditions an LSTM decoder to generate a natural language caption. The main difference between the two models is the way they condition the decoder. The Show&Tell model feeds the image embedding as the “predecessor word embedding” to the first produced word, while the LRCN1u model concatenates the image features with the embedded previous word as the input to the sequence model at each time step."
],
"extractive_spans": [
"Show&Tell model",
"LRCN1u"
],
"free_form_answer": "",
"highlighted_evidence": [
"We experiment with two image captioning models: the Show&Tell model BIBREF0 and the LRCN1u model BIBREF1."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"9e77f46c4c2b4f69fd1b9cbd0f028d09c198a473",
"be51b2b020b517b4741a6c9477be624af17066e3"
],
"answer": [
{
"evidence": [
"As ShapeWorldICE exploits a limited size of open-class words, we emphasize the diversity in ShapeWorldICE at the sentence level rather than the word level. Since the ground-truth reference captions in ShapeWorld are randomly sampled, we take the sampled captions accompanying the test images as a proxy for optimal caption diversity, and compare it with the empirical output diversity of the evaluated model on these test images. Practically, we look at language constructions used and compute the corresponding diversity score as the ratio of observed number versus optimal number:"
],
"extractive_spans": [
"diversity score as the ratio of observed number versus optimal number"
],
"free_form_answer": "",
"highlighted_evidence": [
"Since the ground-truth reference captions in ShapeWorld are randomly sampled, we take the sampled captions accompanying the test images as a proxy for optimal caption diversity, and compare it with the empirical output diversity of the evaluated model on these test images. Practically, we look at language constructions used and compute the corresponding diversity score as the ratio of observed number versus optimal number"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"As ShapeWorldICE exploits a limited size of open-class words, we emphasize the diversity in ShapeWorldICE at the sentence level rather than the word level. Since the ground-truth reference captions in ShapeWorld are randomly sampled, we take the sampled captions accompanying the test images as a proxy for optimal caption diversity, and compare it with the empirical output diversity of the evaluated model on these test images. Practically, we look at language constructions used and compute the corresponding diversity score as the ratio of observed number versus optimal number:"
],
"extractive_spans": [
" we look at language constructions used and compute the corresponding diversity score as the ratio of observed number versus optimal number"
],
"free_form_answer": "",
"highlighted_evidence": [
"As ShapeWorldICE exploits a limited size of open-class words, we emphasize the diversity in ShapeWorldICE at the sentence level rather than the word level. Since the ground-truth reference captions in ShapeWorld are randomly sampled, we take the sampled captions accompanying the test images as a proxy for optimal caption diversity, and compare it with the empirical output diversity of the evaluated model on these test images. Practically, we look at language constructions used and compute the corresponding diversity score as the ratio of observed number versus optimal number"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
]
}
],
"nlp_background": [
"two",
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"Are the images from a specific domain?",
"Which datasets are used?",
"Which existing models are evaluated?",
"How is diversity measured?"
],
"question_id": [
"50e80cfa84200717921840fddcf3b051a9216ad8",
"b1bc9ae9d40e7065343c12f860a461c7c730a612",
"63a1cbe66fd58ff0ead895a8bac1198c38c008aa",
"509af1f11bd6f3db59284258e18fdfebe86cae47"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: ShapeWorld example: spatial statements in the context of multiple shapes. The first three statements are truthful and diverse descriptions of the image. The fourth statement is wrong, but nonetheless exhibits a high degree of n-gram overlap with the true reference captions.",
"Table 1: Sample captions and images from ShapeWorldICE datasets (truthful captions in blue, false in red). Images from Existential-OneShape only contain one object, while images from Spatial-TwoShapes contain two objects. Images from the other four datasets follow the same distribution with multiple abstract objects present in a visual scene.",
"Figure 2: Performance comparison of the Show&Tell model and the LRCN1u model on Existential-MultiShapes. SnT represents the Show&Tell model while LRCN represents the LRCN1u model. Grammaticality, Truthfulness and Diversity refer to the grammaticality ratio, the truthfulness ratio and the diversity ratio of generated captions, respectively.",
"Figure 3: Learning curves for LRCN1u on Existential-OneShape, Existential-MultiShapes and Spatial-MultiShapes. Truthfulness refers to the ratio of generated captions that are grammatical and agree with groundtruth world models. BLEU and SPICE denote average BLEU-4 scores and average SPICE scores across the test split, respectively.",
"Figure 4: Ratio of grammatical sentences produced by LRCN1u for different ShapeWorldICE datasets in the first 20k training iterations (stays at 100% afterwards).",
"Figure 6: Diversity ratio of sentences produced by LRCN1u on different ShapeWorldICE datasets.",
"Figure 5: Truthfulness ratio of sentences produced by LRCN1u for different ShapeWorldICE datasets."
],
"file": [
"1-Figure1-1.png",
"3-Table1-1.png",
"4-Figure2-1.png",
"5-Figure3-1.png",
"6-Figure4-1.png",
"6-Figure6-1.png",
"6-Figure5-1.png"
]
} | [
"Which datasets are used?"
] | [
[
"1912.08960-Introduction-5",
"1912.08960-Experimental Setup ::: Datasets-0",
"1912.08960-3-Table1-1.png"
]
] | [
"ShapeWorldICE datasets: OneShape, MultiShapes, TwoShapes, MultiShapes, Count, and Ratio"
] | 37 |
2002.11910 | Integrating Boundary Assembling into a DNN Framework for Named Entity Recognition in Chinese Social Media Text | Named entity recognition is a challenging task in Natural Language Processing, especially for informal and noisy social media text. Chinese word boundaries are also entity boundaries, therefore, named entity recognition for Chinese text can benefit from word boundary detection, outputted by Chinese word segmentation. Yet Chinese word segmentation poses its own difficulty because it is influenced by several factors, e.g., segmentation criteria, employed algorithm, etc. Dealt improperly, it may generate a cascading failure to the quality of named entity recognition followed. In this paper we integrate a boundary assembling method with the state-of-the-art deep neural network model, and incorporate the updated word boundary information into a conditional random field model for named entity recognition. Our method shows a 2% absolute improvement over previous state-of-the-art results. | {
"paragraphs": [
[
"Named entity recognition (NER) is a challenging problem in Natural Language Processing, and often serves as an important step for many popular applications, such as information extraction and question answering. NER requires phrases referring to entities in text be identified and assigned to particular entity types, thus can be naturally modeled as a sequence labeling task. In recent years, a lot of progress has been made on NER by applying sequential models such as conditional random field (CRF) or neural network models such as long short-term memory (LSTM) (e.g., BIBREF0, BIBREF1, BIBREF2, BIBREF3). Yet this task still remains a challenging one, especially in social media domain such as tweets, partially because of informality and noise of such text and low frequencies of distinctive named entities BIBREF4.",
"Chinese is a language that consists of sequential Chinese characters without capitalization information and delimitation between words. Rather than words as in English or Romance languages, the equivalence of an English word in Chinese may contain one or several Chinese characters. Thus Chinese word segmentation is needed as a first step before entity mentions can be recognized. The outputs of Chinese word segmentation are often used as features to support named entity recognition. In the neural network based models, the boundary information can be extracted from hidden layers of a Chinese word segmentation model (e.g., BIBREF5, BIBREF6).",
"Relying on outputs of Chinese word segmentation has its own challenge, because Chinese word segmentation is often influenced by the following four factors. First, models trained from corpora in other languages, even if not language-specific such as in BIBREF3, cannot be directly applied to Chinese text. Some efforts have been made on Chinese word segmentation recently BIBREF7, BIBREF8, BIBREF9, BIBREF10. Second, differences in segmentation criteria exist among labeled corpora. For instance, in the PKU’s People’s Daily corpus, a person's family name and given name is segmented into two tokens, while in the Penn Chinese Treebank corpus they are segmented into one BIBREF11. Third, for entity mentions that are compound words, models may separate them into fragmented words BIBREF12. Many Chinese words are also morphemes, and separated entity mention fragments may be falsely combined with adjacent words. Fourth, current sequential models perform poorly in identifying named entities with long dependency. Yet many named entities expand over a long range, especially organization names and location names.",
"Errors caused by Chinese word segmentation can generate a cascading failure, which hurt the performance of downstream tasks such as NER. Latest developments in Chinese NER (e.g., BIBREF13, BIBREF14, BIBREF5, BIBREF15, BIBREF16, BIBREF17) have yet shown little focus on this issue. BIBREF12 found that by assembling these single words back together, information groups and sentence structure are better reserved, which could benefit downstream tasks such as NER.",
"Inspired by BIBREF12, we integrate in this paper a boundary assembling step into the state-of-the-art LSTM model for Chinese word segmentation, and feed the output into a CRF model for NER, resulting in a 2% absolute improvement on the overall F1 score over current state-of-the-art methods.",
"This paper is organized as follows. In Section SECREF2 we discuss our model, which consists of an LSTM module for Chinese word segmentation, a boundary assembling step for more accurate word segmentation, and a CRF module for NER. We show the experiment results and discuss the model performance in Section SECREF3. We conclude in Section SECREF4."
],
[
"Our model consists of three modules. A diagram of the model is shown in Figure FIGREF1. Characters in the input text for Chinese word segmentation are converted to vectors that are used to train the LSTM module. Output of the LSTM module are transformed by a biased-linear transformation to get likelihood scores of segmentation labeling, then passed through the boundary assembling module. The updated boundary information is used as feature input into the CRF for Chinese word segmentation (CWS), together with character-vector sequences. In each training epoch, CRF for CWS provides feedback into the LSTM hidden layer and the biased-linear transformation to update the hyper-parameters. Another corpus for NER is then used to train the LSTM again, the hidden vector of which (now contains segmentation information updated by the boundary assembling method) is taken as feature input to CRF for NER. Lexical features extracted from the input text for NER, as well as the word embedding sequence, are also taken by the CRF module as input to generate NER labels. This section provides descriptions for each module."
],
[
"We choose an LSTM module for the CWS task. Raw input Chinese text is converted from characters to vectors with character-positional input embeddings pre-trained by BIBREF5 over 112,971,734 Weibo messages using word2vec BIBREF18. Detailed parameter settings can be found in BIBREF13. The embeddings contain 52,057 unique characters in a 100-dimension space.",
"The LSTM module takes these vectors into a single layer that contains 150 nodes, and modifies them into likelihood scores for each segmentation label. A biased-linear transformation is carried out on these likelihood scores, generating predicted labels for segmentation. These labels are then modified by the Boundary Assembling module, which we will discuss in detail in the next section. Because labels are in sequence, and dependency may exist among adjacent labels, a transition probability matrix is introduced. The likelihood score together with the transition score are taken by a CRF module with a maximum-likelihood training objective. Feedbacks based on the loss function of the CRF are then given to the LSTM's hidden layer and the biased-linear transformation's parameters for update."
],
[
"In each sentence, Chinese characters are labeled as either Begin, Inside, End, or Singleton (BIES labeling). The likelihood of individual Chinese characters being labeled as each type is calculated by the LSTM module described in the previous section. BIBREF12 found in a Chinese corpus that the word label \"End\" has a better performance than \"Begin\". This motivates us to carry out a backward greedy search over each sentence's label sequence to identify word boundaries. If two words segmented in a sentence are identified as nouns, and one word is immediately before the other, we assemble their boundaries, creating a new word candidate for entity recognition. This strategy has the advantage to find named entities with long word length. It also reduces the influence caused by different segmentation criteria."
],
[
"A log-bilinear CRF module is used for the NER task, and takes three inputs. The first is the sequential character-positional embeddings mentioned above. The second is the hidden vector from LSTM as dynamic feature inputs. The third is lexical features extracted from the input text. These lexical features are the likelihood of a character being at a specific position of a noun (first character of a noun, second character of a noun, etc.), and is achieved by comparing the character with a pre-defined dictionary trained by BIBREF13."
],
[
"Datasets used in this study for training, validation, and test are the same as used in Peng et al. peng2016improving for both word segmentation and named entity recognition. Specifically, dataset for word segmentation is taken from the SIGHAN 2005 bakeoff PKU corpus BIBREF21, which includes 123,530 sentences for training and 11,697 sentences for testing. Dataset for named entity recognition is a corpus composed of 1,890 Sina Weibo (a Chinese social media website) messages, with 1,350 messages split into training set, 270 into validation set, and 270 into test set BIBREF5. Both named entity and nominal mention are annotated, each with four entity types: person, organization, location, and geo-political entity. A major cleanup and revision of annotations of this corpus has been performed by He and Sun HeS16. In this study, all results for comparisons are based on this updated corpus."
],
[
"The weights and hyper-parameters in the LSTM module are adjusted in each iteration using stochastic gradient descent for two stages in tandem. The first stage is based on the CWS input text, and the second stage on NER input text. Since the corpus for CWS is much larger than the corpus for NER, the former corpus is randomly sub-sampled during each training epoch, with each sample containing 13,500 sentences for training step one, and 1,350 sentences for training step two. Models are trained until the F1 score of NER model converges on the validation dataset, or up to 30 epochs.",
"Models are trained on a Windows PC with a 2.7 GHz Intel Core i7 CPU. For the best performed model, the average training time for the LSTM module is 1897.6 seconds per iteration. Time for data loading, pre-processing, or model evaluation is not included."
],
[
"Our best model performance with its Precision, Recall, and F1 scores on named entity and nominal mention are shown in Table TABREF5. This best model performance is achieved with a dropout rate of 0.1, and a learning rate of 0.05. Our results are compared with state-of-the-art models BIBREF15, BIBREF19, BIBREF20 on the same Sina Weibo training and test datasets. Our model shows an absolute improvement of 2% for the overall F1 score.",
"This significant improvement validates our method of applying boundary assembling to the segmented sentences, which results in more accurate word segmentation and better semantic understanding. Sentences in the PKU corpus are often segmented into the smallest word units. This results in too fragmented information and incorrect lexical units when input into a named entity recognition model, although this may benefit some other natural language process tasks. By applying the boundary assembling method, sense group in a sentence is better preserved. The downstream NER model can then take advantage of this, improving its result.",
"The PKU corpus used for CWS module consists of mainly news articles, which are quite different from the social media Weibo corpus. Performance of an NLP task often drops when tested on a different domain or a corpus of different characteristics. Our improvement indicates that the boundary assembling method is not sensitive to the specific domain, and is a robust method for cross-domain scenarios.",
"The identification of two nouns that are next to each other depends on the pre-trained lexical features. If our model is tested over out-of-vocabulary dataset, it may not perform well due to the lack of this lexical information."
],
[
"In this paper we integrate a boundary assembling step with an LSTM module and a CRF module for Named Entity Recognition in Chinese social media text. With the abundance of social media information, our work is timely and desirable. The improvement in experiment results over existing methods clearly shows the effectiveness of our approach."
],
[
"This research was partially funded by the Engineering Directorate of the National Science Foundation (1820118)."
]
],
"section_name": [
"Introduction",
"Model",
"Model ::: LSTM for Word Segmentation",
"Model ::: Boundary Assembling Method",
"Model ::: CRF for Named Entity Recognition",
"Experiments ::: Datasets",
"Experiments ::: Training Settings",
"Experiments ::: Results and Discussion",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"7b5fab32fca52587af8b969749aa05867ff15a56",
"d102b0f06eaef8a426a652347bf315aa17b05dec"
],
"answer": [
{
"evidence": [
"Inspired by BIBREF12, we integrate in this paper a boundary assembling step into the state-of-the-art LSTM model for Chinese word segmentation, and feed the output into a CRF model for NER, resulting in a 2% absolute improvement on the overall F1 score over current state-of-the-art methods."
],
"extractive_spans": [
"LSTM model"
],
"free_form_answer": "",
"highlighted_evidence": [
"Inspired by BIBREF12, we integrate in this paper a boundary assembling step into the state-of-the-art LSTM model for Chinese word segmentation, and feed the output into a CRF model for NER, resulting in a 2% absolute improvement on the overall F1 score over current state-of-the-art methods."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Our best model performance with its Precision, Recall, and F1 scores on named entity and nominal mention are shown in Table TABREF5. This best model performance is achieved with a dropout rate of 0.1, and a learning rate of 0.05. Our results are compared with state-of-the-art models BIBREF15, BIBREF19, BIBREF20 on the same Sina Weibo training and test datasets. Our model shows an absolute improvement of 2% for the overall F1 score."
],
"extractive_spans": [
"BIBREF15",
"BIBREF19",
"BIBREF20 "
],
"free_form_answer": "",
"highlighted_evidence": [
"Our results are compared with state-of-the-art models BIBREF15, BIBREF19, BIBREF20 on the same Sina Weibo training and test datasets. Our model shows an absolute improvement of 2% for the overall F1 score."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"b9b0d25505faeb8d0bd6bb8c727a94191f85f621",
"fbcc0e78b6d7ad8d70fa31bd511a3e2ec999b6b4"
],
"answer": [
{
"evidence": [
"In each sentence, Chinese characters are labeled as either Begin, Inside, End, or Singleton (BIES labeling). The likelihood of individual Chinese characters being labeled as each type is calculated by the LSTM module described in the previous section. BIBREF12 found in a Chinese corpus that the word label \"End\" has a better performance than \"Begin\". This motivates us to carry out a backward greedy search over each sentence's label sequence to identify word boundaries. If two words segmented in a sentence are identified as nouns, and one word is immediately before the other, we assemble their boundaries, creating a new word candidate for entity recognition. This strategy has the advantage to find named entities with long word length. It also reduces the influence caused by different segmentation criteria."
],
"extractive_spans": [
"This motivates us to carry out a backward greedy search over each sentence's label sequence to identify word boundaries. If two words segmented in a sentence are identified as nouns, and one word is immediately before the other, we assemble their boundaries, creating a new word candidate for entity recognition."
],
"free_form_answer": "",
"highlighted_evidence": [
"BIBREF12 found in a Chinese corpus that the word label \"End\" has a better performance than \"Begin\". This motivates us to carry out a backward greedy search over each sentence's label sequence to identify word boundaries. If two words segmented in a sentence are identified as nouns, and one word is immediately before the other, we assemble their boundaries, creating a new word candidate for entity recognition. This strategy has the advantage to find named entities with long word length. It also reduces the influence caused by different segmentation criteria."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In each sentence, Chinese characters are labeled as either Begin, Inside, End, or Singleton (BIES labeling). The likelihood of individual Chinese characters being labeled as each type is calculated by the LSTM module described in the previous section. BIBREF12 found in a Chinese corpus that the word label \"End\" has a better performance than \"Begin\". This motivates us to carry out a backward greedy search over each sentence's label sequence to identify word boundaries. If two words segmented in a sentence are identified as nouns, and one word is immediately before the other, we assemble their boundaries, creating a new word candidate for entity recognition. This strategy has the advantage to find named entities with long word length. It also reduces the influence caused by different segmentation criteria."
],
"extractive_spans": [
"backward greedy search over each sentence's label sequence to identify word boundaries"
],
"free_form_answer": "",
"highlighted_evidence": [
"This motivates us to carry out a backward greedy search over each sentence's label sequence to identify word boundaries."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"3b0b2e9687278f2fd45f131a7da3002ea61b9d47",
"f0b78dfa76b776a559e8494ee4f4044bd26a67cb"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: The results of two previous models, and results of this study, in which we apply a boundary assembling method. Precision, recall, and F1 scores are shown for both named entity and nominal mention. For both tasks and their overall performance, we outperform the other two models."
],
"extractive_spans": [],
"free_form_answer": "Overall F1 score:\n- He and Sun (2017) 58.23\n- Peng and Dredze (2017) 58.99\n- Xu et al. (2018) 59.11",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: The results of two previous models, and results of this study, in which we apply a boundary assembling method. Precision, recall, and F1 scores are shown for both named entity and nominal mention. For both tasks and their overall performance, we outperform the other two models."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Our best model performance with its Precision, Recall, and F1 scores on named entity and nominal mention are shown in Table TABREF5. This best model performance is achieved with a dropout rate of 0.1, and a learning rate of 0.05. Our results are compared with state-of-the-art models BIBREF15, BIBREF19, BIBREF20 on the same Sina Weibo training and test datasets. Our model shows an absolute improvement of 2% for the overall F1 score.",
"FLOAT SELECTED: Table 1: The results of two previous models, and results of this study, in which we apply a boundary assembling method. Precision, recall, and F1 scores are shown for both named entity and nominal mention. For both tasks and their overall performance, we outperform the other two models."
],
"extractive_spans": [],
"free_form_answer": "For Named entity the maximum precision was 66.67%, and the average 62.58%, same values for Recall was 55.97% and 50.33%, and for F1 57.14% and 55.64%. Where for Nominal Mention had maximum recall of 74.48% and average of 73.67%, Recall had values of 54.55% and 53.7%, and F1 had values of 62.97% and 62.12%. Finally the Overall F1 score had maximum value of 59.11% and average of 58.77%",
"highlighted_evidence": [
"Our best model performance with its Precision, Recall, and F1 scores on named entity and nominal mention are shown in Table TABREF5. ",
"FLOAT SELECTED: Table 1: The results of two previous models, and results of this study, in which we apply a boundary assembling method. Precision, recall, and F1 scores are shown for both named entity and nominal mention. For both tasks and their overall performance, we outperform the other two models."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"What state-of-the-art deep neural network is used?",
"What boundary assembling method is used?",
"What are previous state of the art results?"
],
"question_id": [
"23e16c1173b7def2c5cb56053b57047c9971e3bb",
"d78f7f84a76a07b777d4092cb58161528ca3803c",
"9da1e124d28b488b0d94998d32aa2fa8a5ebec51"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: The model for named entity recognition. The LSTM module is trained twice, with inputs first for CWS then for NER. Boundary assembling method is added in between LSTM and CRF for CWS, so that a better representation of segmentation can be obtained. Dashed-line arrows indicate parameter adjustment based on CRF’s loss function between each training epoch. CRF for NER takes directly the hidden vectors in LSTM as dynamic features. Abbreviations: CRF: conditional random field; CWS: Chinese word segmentation; NER: named entity recognition; LSTM: long short-term memory.",
"Table 1: The results of two previous models, and results of this study, in which we apply a boundary assembling method. Precision, recall, and F1 scores are shown for both named entity and nominal mention. For both tasks and their overall performance, we outperform the other two models."
],
"file": [
"2-Figure1-1.png",
"4-Table1-1.png"
]
} | [
"What are previous state of the art results?"
] | [
[
"2002.11910-4-Table1-1.png",
"2002.11910-Experiments ::: Results and Discussion-0"
]
] | [
"For Named entity the maximum precision was 66.67%, and the average 62.58%, same values for Recall was 55.97% and 50.33%, and for F1 57.14% and 55.64%. Where for Nominal Mention had maximum recall of 74.48% and average of 73.67%, Recall had values of 54.55% and 53.7%, and F1 had values of 62.97% and 62.12%. Finally the Overall F1 score had maximum value of 59.11% and average of 58.77%"
] | 38 |
1909.09587 | Zero-shot Reading Comprehension by Cross-lingual Transfer Learning with Multi-lingual Language Representation Model | Because it is not feasible to collect training data for every language, there is a growing interest in cross-lingual transfer learning. In this paper, we systematically explore zero-shot cross-lingual transfer learning on reading comprehension tasks with a language representation model pre-trained on a multi-lingual corpus. The experimental results show that with pre-trained language representation, zero-shot learning is feasible, and translating the source data into the target language is not necessary and even degrades the performance. We further explore what the model learns in the zero-shot setting. | {
"paragraphs": [
[
"Reading Comprehension (RC) has become a central task in natural language processing, with great practical value in various industries. In recent years, many large-scale RC datasets in English BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6 have nourished the development of numerous powerful and diverse RC models BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11. The state-of-the-art model BIBREF12 on SQuAD, one of the most widely used RC benchmarks, even surpasses human-level performance. Nonetheless, RC on languages other than English has been limited due to the absence of sufficient training data. Although some efforts have been made to create RC datasets for Chinese BIBREF13, BIBREF14 and Korean BIBREF15, it is not feasible to collect RC datasets for every language since annotation efforts to collect a new RC dataset are often far from trivial. Therefore, the setup of transfer learning, especially zero-shot learning, is of extraordinary importance.",
"Existing methods BIBREF16 of cross-lingual transfer learning on RC datasets often count on machine translation (MT) to translate data from source language into target language, or vice versa. These methods may not require a well-annotated RC dataset for the target language, whereas a high-quality MT model is needed as a trade-off, which might not be available when it comes to low-resource languages.",
"In this paper, we leverage pre-trained multilingual language representation, for example, BERT learned from multilingual un-annotated sentences (multi-BERT), in cross-lingual zero-shot RC. We fine-tune multi-BERT on the training set in source language, then test the model in target language, with a number of combinations of source-target language pair to explore the cross-lingual ability of multi-BERT. Surprisingly, we find that the models have the ability to transfer between low lexical similarity language pair, such as English and Chinese. Recent studies BIBREF17, BIBREF12, BIBREF18 show that cross-lingual language models have the ability to enable preliminary zero-shot transfer on simple natural language understanding tasks, but zero-shot transfer of RC has not been studied. To our knowledge, this is the first work systematically exploring the cross-lingual transferring ability of multi-BERT on RC tasks."
],
[
"Multi-BERT has showcased its ability to enable cross-lingual zero-shot learning on the natural language understanding tasks including XNLI BIBREF19, NER, POS, Dependency Parsing, and so on. We now seek to know if a pre-trained multi-BERT has ability to solve RC tasks in the zero-shot setting."
],
[
"We have training and testing sets in three different languages: English, Chinese and Korean. The English dataset is SQuAD BIBREF2. The Chinese dataset is DRCD BIBREF14, a Chinese RC dataset with 30,000+ examples in the training set and 10,000+ examples in the development set. The Korean dataset is KorQuAD BIBREF15, a Korean RC dataset with 60,000+ examples in the training set and 10,000+ examples in the development set, created in exactly the same procedure as SQuAD. We always use the development sets of SQuAD, DRCD and KorQuAD for testing since the testing sets of the corpora have not been released yet.",
"Next, to construct a diverse cross-lingual RC dataset with compromised quality, we translated the English and Chinese datasets into more languages, with Google Translate. An obvious issue with this method is that some examples might no longer have a recoverable span. To solve the problem, we use fuzzy matching to find the most possible answer, which calculates minimal edit distance between translated answer and all possible spans. If the minimal edit distance is larger than min(10, lengths of translated answer - 1), we drop the examples during training, and treat them as noise when testing. In this way, we can recover more than 95% of examples. The following generated datasets are recovered with same setting.",
"The pre-trained multi-BERT is the official released one. This multi-lingual version of BERT were pre-trained on corpus in 104 languages. Data in different languages were simply mixed in batches while pre-training, without additional effort to align between languages. When fine-tuning, we simply adopted the official training script of BERT, with default hyperparameters, to fine-tune each model until training loss converged."
],
[
"Table TABREF6 shows the result of different models trained on either Chinese or English and tested on Chinese. In row (f), multi-BERT is fine-tuned on English but tested on Chinese, which achieves competitive performance compared with QANet trained on Chinese. We also find that multi-BERT trained on English has relatively lower EM compared with the model with comparable F1 scores. This shows that the model learned with zero-shot can roughly identify the answer spans in context but less accurate. In row (c), we fine-tuned a BERT model pre-trained on English monolingual corpus (English BERT) on Chinese RC training data directly by appending fastText-initialized Chinese word embeddings to the original word embeddings of English-BERT. Its F1 score is even lower than that of zero-shot transferring multi-BERT (rows (c) v.s. (e)). The result implies multi-BERT does acquire better cross-lingual capability through pre-training on multilingual corpus. Table TABREF8 shows the results of multi-BERT fine-tuned on different languages and then tested on English , Chinese and Korean. The top half of the table shows the results of training data without translation. It is not surprising that when the training and testing sets are in the same language, the best results are achieved, and multi-BERT shows transfer capability when training and testing sets are in different languages, especially between Chinese and Korean.",
"In the lower half of Table TABREF8, the results are obtained by the translated training data. First, we found that when testing on English and Chinese, translation always degrades the performance (En v.s. En-XX, Zh v.s. Zh-XX). Even though we translate the training data into the same language as testing data, using the untranslated data still yield better results. For example, when testing on English, the F1 score of the model training on Chinese (Zh) is 53.8, while the F1 score is only 44.1 for the model training on Zh-En. This shows that translation degrades the quality of data. There are some exceptions when testing on Korean. Translating the English training data into Chinese, Japanese and Korean still improve the performance on Korean. We also found that when translated into the same language, the English training data is always better than the Chinese data (En-XX v.s. Zh-XX), with only one exception (En-Fr v.s. Zh-Fr when testing on KorQuAD). This may be because we have less Chinese training data than English. These results show that the quality and the size of dataset are much more important than whether the training and testing are in the same language or not."
],
[
"Table TABREF8 shows that fine-tuning on un-translated target language data achieves much better performance than data translated into the target language. Because the above statement is true across all the languages, it is a strong evidence that translation degrades the performance.We notice that the translated corpus and untranslated corpus are not the same. This may be another factor that influences the results. Conducting an experiment between un-translated and back-translated data may deal with this problem."
],
[
"Here we discuss the case that the training data are translated. We consider each result is affected by at least three factors: (1) training corpus, (2) data size, (3) whether the source corpus is translated into the target language. To study the effect of data-size, we conducted an extra experiment where we down-sampled the size of English data to be the same as Chinese corpus, and used the down-sampled corpus to train. Then We carried out one-way ANOVA test and found out the significance of the three factors are ranked as below: (1) > (2) >> (3). The analysis supports that the characteristics of training data is more important than translated into target language or not. Therefore, although translation degrades the performance, whether translating the corpus into the target language is not critical."
],
[
"It has been shown that extractive QA tasks like SQuAD may be tackled by some language independent strategies, for example, matching words in questions and context BIBREF20. Is zero-shot learning feasible because the model simply learns this kind of language independent strategies on one language and apply to the other?",
"To verify whether multi-BERT largely counts on a language independent strategy, we test the model on the languages unseen during pre-training. To make sure the languages have never been seen before, we artificially make unseen languages by permuting the whole vocabulary of existing languages. That is, all the words in the sentences of a specific language are replaced by other words in the same language to form the sentences in the created unseen language. It is assumed that if multi-BERT used to find answers by language independent strategy, then multi-BERT should also do well on unseen languages. Table TABREF14 shows that the performance of multi-BERT drops drastically on the dataset. It implies that multi-BERT might not totally rely on pattern matching when finding answers."
],
[
"PCA projection of hidden representations of the last layer of multi-BERT before and after fine-tuning are shown in Fig. FIGREF15. The red points represent Chinese tokens, and the blue points are for English. The results show that tokens from different languages might be embedded into the same space with close spatial distribution. Even though during the fine-tuning only the English data is used, the embedding of the Chinese token changed accordingly. We also quantitatively evaluate the similarities between the embedding of the languages. The results can be found in the Appendix."
],
[
"We observe linguistic-agnostic representations in the last subsection. If tokens are represented in a language-agnostic way, the model may be able to handle code-switching data. Because there is no code-switching data for RC, we create artificial code-switching datasets by replacing some of the words in contexts or questions with their synonyms in another language. The synonyms are found by word-by-word translation with given dictionaries. We use the bilingual dictionaries collected and released in facebookresearch/MUSE GitHub repository. We substitute the words if and only if the words are in the bilingual dictionaries.",
"Table TABREF14 shows that on all the code-switching datasets, the EM/F1 score drops, indicating that the semantics of representations are not totally disentangled from language. However, the examples of the answers of the model (Table TABREF21) show that multi-BERT could find the correct answer spans although some keywords in the spans have been translated into another language."
],
[
"There are various types of typology in languages. For example, in English the typology order is subject-verb-object (SVO) order, but in Japanese and Korean the order is subject-object-verb (SOV). We construct a typology-manipulated dataset to examine if the typology order of the training data influences the transfer learning results. If the model only learns the semantic mapping between different languages, changing English typology order from SVO to SOV should improve the transfer ability from English to Japanese. The method used to generate datasets is the same as BIBREF21.",
"The source code is from a GitHub repository named Shaul1321/rnn_typology, which labels given sentences to CoNLL format with StanfordCoreNLP and then re-arranges them greedily.",
"Table TABREF23 shows that when we change the English typology order to SOV or OSV order, the performance on Korean is improved and worsen on English and Chinese, but very slightly. The results show that the typology manipulation on the training set has little influence. It is possible that multi-BERT normalizes the typology order of different languages to some extent."
],
[
"In this paper, we systematically explore zero-shot cross-lingual transfer learning on RC with multi-BERT. The experimental results on English, Chinese and Korean corpora show that even when the languages for training and testing are different, reasonable performance can be obtained. Furthermore, we created several artificial data to study the cross-lingual ability of multi-BERT in the presence of typology variation and code-switching. We showed that only token-level pattern matching is not sufficient for multi-BERT to answer questions and typology variation and code-switching only caused minor effects on testing performance."
],
[
"The architecture of multi-BERT is a Transformer encoder BIBREF25. While fine-tuning on SQuAD-like dataset, the bottom layers of multi-BERT are initialized from Google-pretrained parameters, with an added output layer initialized from random parameters. Tokens representations from the last layer of bottom-part of multi-BERT are inputs to the output layer and then the output layer outputs a distribution over all tokens that indicates the probability of a token being the START/END of an answer span."
],
[
"As all translated versions of SQuAD/DRCD are parallel to each other. Given a source-target language pair, we calculate cosine similarity of the mean pooling of tokens representation within corresponding answer-span as a measure of how much they look like in terms of the internal representation of multi-BERT. The results are shown in Fig. FIGREF26."
],
[
"Singular Vector Canonical Correlation Analysis (SVCCA) is a general method to compare the correlation of two sets of vector representations. SVCCA has been proposed to compare learned representations across language models BIBREF24. Here we adopt SVCCA to measure the linear similarity of two sets of representations in the same multi-BERT from different translated datasets, which are parallel to each other. The results are shown in Fig FIGREF28."
],
[
"In the paper, we show that internal representations of multi-BERT are linear-mappable to some extent between different languages. This implies that multi-BERT model might encode semantic and syntactic information in language-agnostic ways and explains how zero-shot transfer learning could be done.",
"To take a step further, while transfering model from source dataset to target dataset, we align representations in two proposed way, to improve performance on target dataset."
],
[
"Algorithms proposed in BIBREF23, BIBREF22, BIBREF26 to unsupervisedly learn linear mapping between two sets of embeddings are used here to align representations of source (training data) to those of target. We obtain the mapping generated by embeddings from one specific layer of pre-trained multi-BERT then we apply this mapping to transform the internal representations of multi-BERT while fine-tuning on training data."
],
[
"In Adversarial Method, we add an additional transform layer to transform representations and a discrimination layer to discriminate between transformed representations from source language (training set) and target language (development set). And the GAN loss is applied in the total loss of fine-tuning."
],
[
"As table TABREF33 shows, there are no improvements among above methods. Some linear mapping methods even causes devastating effect on EM/F1 scores."
]
],
"section_name": [
"Introduction",
"Zero-shot Transfer with Multi-BERT",
"Zero-shot Transfer with Multi-BERT ::: Experimental Setup and Data",
"Zero-shot Transfer with Multi-BERT ::: Experimental Results",
"Zero-shot Transfer with Multi-BERT ::: Discussion ::: The Effect of Machine Translation",
"Zero-shot Transfer with Multi-BERT ::: Discussion ::: The Effect of Other Factors",
"What Does Zero-shot Transfer Model Learn? ::: Unseen Language Dataset",
"What Does Zero-shot Transfer Model Learn? ::: Embedding in Multi-BERT",
"What Does Zero-shot Transfer Model Learn? ::: Code-switching Dataset",
"What Does Zero-shot Transfer Model Learn? ::: Typology-manipulated Dataset",
"Conclusion",
"Supplemental Material ::: Internal Representation of multi-BERT",
"Supplemental Material ::: Internal Representation of multi-BERT ::: Cosine Similarity",
"Supplemental Material ::: Internal Representation of multi-BERT ::: SVCCA",
"Supplemental Material ::: Improve Transfering",
"Supplemental Material ::: Improve Transfering ::: Linear Mapping Method",
"Supplemental Material ::: Improve Transfering ::: Adversarial Method",
"Supplemental Material ::: Improve Transfering ::: Discussion"
]
} | {
"answers": [
{
"annotation_id": [
"4b08cf1a8af4de65f8b1732f377843e2b76304f6",
"d4e9dd9238c8f630dc34ce0ddaedd4e1c0fa8dd8"
],
"answer": [
{
"evidence": [
"Table TABREF6 shows the result of different models trained on either Chinese or English and tested on Chinese. In row (f), multi-BERT is fine-tuned on English but tested on Chinese, which achieves competitive performance compared with QANet trained on Chinese. We also find that multi-BERT trained on English has relatively lower EM compared with the model with comparable F1 scores. This shows that the model learned with zero-shot can roughly identify the answer spans in context but less accurate. In row (c), we fine-tuned a BERT model pre-trained on English monolingual corpus (English BERT) on Chinese RC training data directly by appending fastText-initialized Chinese word embeddings to the original word embeddings of English-BERT. Its F1 score is even lower than that of zero-shot transferring multi-BERT (rows (c) v.s. (e)). The result implies multi-BERT does acquire better cross-lingual capability through pre-training on multilingual corpus. Table TABREF8 shows the results of multi-BERT fine-tuned on different languages and then tested on English , Chinese and Korean. The top half of the table shows the results of training data without translation. It is not surprising that when the training and testing sets are in the same language, the best results are achieved, and multi-BERT shows transfer capability when training and testing sets are in different languages, especially between Chinese and Korean.",
"FLOAT SELECTED: Table 1: EM/F1 scores over Chinese testing set.",
"FLOAT SELECTED: Table 2: EM/F1 score of multi-BERTs fine-tuned on different training sets and tested on different languages (En: English, Fr: French, Zh: Chinese, Jp: Japanese, Kr: Korean, xx-yy: translated from xx to yy). The text in bold means training data language is the same as testing data language."
],
"extractive_spans": [
"Table TABREF6",
"Table TABREF8"
],
"free_form_answer": "",
"highlighted_evidence": [
"Table TABREF6 shows the result of different models trained on either Chinese or English and tested on Chinese. In row (f), multi-BERT is fine-tuned on English but tested on Chinese, which achieves competitive performance compared with QANet trained on Chinese. We also find that multi-BERT trained on English has relatively lower EM compared with the model with comparable F1 scores. ",
"Table TABREF8 shows the results of multi-BERT fine-tuned on different languages and then tested on English , Chinese and Korean. The top half of the table shows the results of training data without translation. It is not surprising that when the training and testing sets are in the same language, the best results are achieved, and multi-BERT shows transfer capability when training and testing sets are in different languages, especially between Chinese and Korean.",
"FLOAT SELECTED: Table 1: EM/F1 scores over Chinese testing set.",
"FLOAT SELECTED: Table 2: EM/F1 score of multi-BERTs fine-tuned on different training sets and tested on different languages (En: English, Fr: French, Zh: Chinese, Jp: Japanese, Kr: Korean, xx-yy: translated from xx to yy). The text in bold means training data language is the same as testing data language."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Table TABREF6 shows the result of different models trained on either Chinese or English and tested on Chinese. In row (f), multi-BERT is fine-tuned on English but tested on Chinese, which achieves competitive performance compared with QANet trained on Chinese. We also find that multi-BERT trained on English has relatively lower EM compared with the model with comparable F1 scores. This shows that the model learned with zero-shot can roughly identify the answer spans in context but less accurate. In row (c), we fine-tuned a BERT model pre-trained on English monolingual corpus (English BERT) on Chinese RC training data directly by appending fastText-initialized Chinese word embeddings to the original word embeddings of English-BERT. Its F1 score is even lower than that of zero-shot transferring multi-BERT (rows (c) v.s. (e)). The result implies multi-BERT does acquire better cross-lingual capability through pre-training on multilingual corpus. Table TABREF8 shows the results of multi-BERT fine-tuned on different languages and then tested on English , Chinese and Korean. The top half of the table shows the results of training data without translation. It is not surprising that when the training and testing sets are in the same language, the best results are achieved, and multi-BERT shows transfer capability when training and testing sets are in different languages, especially between Chinese and Korean.",
"In the lower half of Table TABREF8, the results are obtained by the translated training data. First, we found that when testing on English and Chinese, translation always degrades the performance (En v.s. En-XX, Zh v.s. Zh-XX). Even though we translate the training data into the same language as testing data, using the untranslated data still yield better results. For example, when testing on English, the F1 score of the model training on Chinese (Zh) is 53.8, while the F1 score is only 44.1 for the model training on Zh-En. This shows that translation degrades the quality of data. There are some exceptions when testing on Korean. Translating the English training data into Chinese, Japanese and Korean still improve the performance on Korean. We also found that when translated into the same language, the English training data is always better than the Chinese data (En-XX v.s. Zh-XX), with only one exception (En-Fr v.s. Zh-Fr when testing on KorQuAD). This may be because we have less Chinese training data than English. These results show that the quality and the size of dataset are much more important than whether the training and testing are in the same language or not.",
"FLOAT SELECTED: Table 2: EM/F1 score of multi-BERTs fine-tuned on different training sets and tested on different languages (En: English, Fr: French, Zh: Chinese, Jp: Japanese, Kr: Korean, xx-yy: translated from xx to yy). The text in bold means training data language is the same as testing data language."
],
"extractive_spans": [
"when testing on English, the F1 score of the model training on Chinese (Zh) is 53.8",
" F1 score is only 44.1 for the model training on Zh-En"
],
"free_form_answer": "",
"highlighted_evidence": [
"Table TABREF8 shows the results of multi-BERT fine-tuned on different languages and then tested on English , Chinese and Korean.",
"For example, when testing on English, the F1 score of the model training on Chinese (Zh) is 53.8, while the F1 score is only 44.1 for the model training on Zh-En.",
"FLOAT SELECTED: Table 2: EM/F1 score of multi-BERTs fine-tuned on different training sets and tested on different languages (En: English, Fr: French, Zh: Chinese, Jp: Japanese, Kr: Korean, xx-yy: translated from xx to yy). The text in bold means training data language is the same as testing data language."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"3b3e6d3fc449d9330399d6baa40df97b8a12bff1",
"84d513cea966549851e6cbab6645f9baf945c1ce",
"bac5971265af8bd80fadec91222358d19fb24d53"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: EM/F1 score of multi-BERTs fine-tuned on different training sets and tested on different languages (En: English, Fr: French, Zh: Chinese, Jp: Japanese, Kr: Korean, xx-yy: translated from xx to yy). The text in bold means training data language is the same as testing data language."
],
"extractive_spans": [],
"free_form_answer": "En-Fr, En-Zh, En-Jp, En-Kr, Zh-En, Zh-Fr, Zh-Jp, Zh-Kr to English, Chinese or Korean",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: EM/F1 score of multi-BERTs fine-tuned on different training sets and tested on different languages (En: English, Fr: French, Zh: Chinese, Jp: Japanese, Kr: Korean, xx-yy: translated from xx to yy). The text in bold means training data language is the same as testing data language."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In the lower half of Table TABREF8, the results are obtained by the translated training data. First, we found that when testing on English and Chinese, translation always degrades the performance (En v.s. En-XX, Zh v.s. Zh-XX). Even though we translate the training data into the same language as testing data, using the untranslated data still yield better results. For example, when testing on English, the F1 score of the model training on Chinese (Zh) is 53.8, while the F1 score is only 44.1 for the model training on Zh-En. This shows that translation degrades the quality of data. There are some exceptions when testing on Korean. Translating the English training data into Chinese, Japanese and Korean still improve the performance on Korean. We also found that when translated into the same language, the English training data is always better than the Chinese data (En-XX v.s. Zh-XX), with only one exception (En-Fr v.s. Zh-Fr when testing on KorQuAD). This may be because we have less Chinese training data than English. These results show that the quality and the size of dataset are much more important than whether the training and testing are in the same language or not.",
"FLOAT SELECTED: Table 2: EM/F1 score of multi-BERTs fine-tuned on different training sets and tested on different languages (En: English, Fr: French, Zh: Chinese, Jp: Japanese, Kr: Korean, xx-yy: translated from xx to yy). The text in bold means training data language is the same as testing data language."
],
"extractive_spans": [
"English ",
"Chinese"
],
"free_form_answer": "",
"highlighted_evidence": [
"In the lower half of Table TABREF8, the results are obtained by the translated training data. First, we found that when testing on English and Chinese, translation always degrades the performance (En v.s. En-XX, Zh v.s. Zh-XX). Even though we translate the training data into the same language as testing data, using the untranslated data still yield better results. ",
"FLOAT SELECTED: Table 2: EM/F1 score of multi-BERTs fine-tuned on different training sets and tested on different languages (En: English, Fr: French, Zh: Chinese, Jp: Japanese, Kr: Korean, xx-yy: translated from xx to yy). The text in bold means training data language is the same as testing data language."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We have training and testing sets in three different languages: English, Chinese and Korean. The English dataset is SQuAD BIBREF2. The Chinese dataset is DRCD BIBREF14, a Chinese RC dataset with 30,000+ examples in the training set and 10,000+ examples in the development set. The Korean dataset is KorQuAD BIBREF15, a Korean RC dataset with 60,000+ examples in the training set and 10,000+ examples in the development set, created in exactly the same procedure as SQuAD. We always use the development sets of SQuAD, DRCD and KorQuAD for testing since the testing sets of the corpora have not been released yet.",
"Next, to construct a diverse cross-lingual RC dataset with compromised quality, we translated the English and Chinese datasets into more languages, with Google Translate. An obvious issue with this method is that some examples might no longer have a recoverable span. To solve the problem, we use fuzzy matching to find the most possible answer, which calculates minimal edit distance between translated answer and all possible spans. If the minimal edit distance is larger than min(10, lengths of translated answer - 1), we drop the examples during training, and treat them as noise when testing. In this way, we can recover more than 95% of examples. The following generated datasets are recovered with same setting."
],
"extractive_spans": [
"English",
"Chinese",
"Korean",
"we translated the English and Chinese datasets into more languages, with Google Translate"
],
"free_form_answer": "",
"highlighted_evidence": [
"We have training and testing sets in three different languages: English, Chinese and Korean.",
"Next, to construct a diverse cross-lingual RC dataset with compromised quality, we translated the English and Chinese datasets into more languages, with Google Translate."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"5bf3e125dabc09d06b951726da5e62f5eeb9a355",
"d057faf33d735b03146bd8e88acb24144b450c2a"
],
"answer": [
{
"evidence": [
"Multi-BERT has showcased its ability to enable cross-lingual zero-shot learning on the natural language understanding tasks including XNLI BIBREF19, NER, POS, Dependency Parsing, and so on. We now seek to know if a pre-trained multi-BERT has ability to solve RC tasks in the zero-shot setting."
],
"extractive_spans": [
"pre-trained multi-BERT"
],
"free_form_answer": "",
"highlighted_evidence": [
"We now seek to know if a pre-trained multi-BERT has ability to solve RC tasks in the zero-shot setting."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Table TABREF6 shows the result of different models trained on either Chinese or English and tested on Chinese. In row (f), multi-BERT is fine-tuned on English but tested on Chinese, which achieves competitive performance compared with QANet trained on Chinese. We also find that multi-BERT trained on English has relatively lower EM compared with the model with comparable F1 scores. This shows that the model learned with zero-shot can roughly identify the answer spans in context but less accurate. In row (c), we fine-tuned a BERT model pre-trained on English monolingual corpus (English BERT) on Chinese RC training data directly by appending fastText-initialized Chinese word embeddings to the original word embeddings of English-BERT. Its F1 score is even lower than that of zero-shot transferring multi-BERT (rows (c) v.s. (e)). The result implies multi-BERT does acquire better cross-lingual capability through pre-training on multilingual corpus. Table TABREF8 shows the results of multi-BERT fine-tuned on different languages and then tested on English , Chinese and Korean. The top half of the table shows the results of training data without translation. It is not surprising that when the training and testing sets are in the same language, the best results are achieved, and multi-BERT shows transfer capability when training and testing sets are in different languages, especially between Chinese and Korean.",
"Reading Comprehension (RC) has become a central task in natural language processing, with great practical value in various industries. In recent years, many large-scale RC datasets in English BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6 have nourished the development of numerous powerful and diverse RC models BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11. The state-of-the-art model BIBREF12 on SQuAD, one of the most widely used RC benchmarks, even surpasses human-level performance. Nonetheless, RC on languages other than English has been limited due to the absence of sufficient training data. Although some efforts have been made to create RC datasets for Chinese BIBREF13, BIBREF14 and Korean BIBREF15, it is not feasible to collect RC datasets for every language since annotation efforts to collect a new RC dataset are often far from trivial. Therefore, the setup of transfer learning, especially zero-shot learning, is of extraordinary importance."
],
"extractive_spans": [
"QANet ",
"BIBREF14",
" fine-tuned a BERT model"
],
"free_form_answer": "",
"highlighted_evidence": [
"Table TABREF6 shows the result of different models trained on either Chinese or English and tested on Chinese. In row (f), multi-BERT is fine-tuned on English but tested on Chinese, which achieves competitive performance compared with QANet trained on Chinese.",
"BIBREF14",
" In row (c), we fine-tuned a BERT model pre-trained on English monolingual corpus (English BERT) on Chinese RC training data directly by appending fastText-initialized Chinese word embeddings to the original word embeddings of English-BERT. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"f78bf44a10f7a5af00e253c01cb277af0d1581bb"
],
"answer": [
{
"evidence": [
"We have training and testing sets in three different languages: English, Chinese and Korean. The English dataset is SQuAD BIBREF2. The Chinese dataset is DRCD BIBREF14, a Chinese RC dataset with 30,000+ examples in the training set and 10,000+ examples in the development set. The Korean dataset is KorQuAD BIBREF15, a Korean RC dataset with 60,000+ examples in the training set and 10,000+ examples in the development set, created in exactly the same procedure as SQuAD. We always use the development sets of SQuAD, DRCD and KorQuAD for testing since the testing sets of the corpora have not been released yet.",
"The pre-trained multi-BERT is the official released one. This multi-lingual version of BERT were pre-trained on corpus in 104 languages. Data in different languages were simply mixed in batches while pre-training, without additional effort to align between languages. When fine-tuning, we simply adopted the official training script of BERT, with default hyperparameters, to fine-tune each model until training loss converged."
],
"extractive_spans": [
"we simply adopted the official training script of BERT, with default hyperparameters, to fine-tune each model until training loss converged"
],
"free_form_answer": "",
"highlighted_evidence": [
"We have training and testing sets in three different languages: English, Chinese and Korean.",
"When fine-tuning, we simply adopted the official training script of BERT, with default hyperparameters, to fine-tune each model until training loss converged."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"What is the model performance on target language reading comprehension?",
"What source-target language pairs were used in this work? ",
"What model is used as a baseline? ",
"what does the model learn in zero-shot setting?"
],
"question_id": [
"37be0d479480211291e068d0d3823ad0c13321d3",
"a3d9b101765048f4b61cbd3eaa2439582ebb5c77",
"009ce6f2bea67e7df911b3f93443b23467c9f4a1",
"55569d0a4586d20c01268a80a7e31a17a18198e2"
],
"question_writer": [
"f7c76ad7ff9c8b54e8c397850358fa59258c6672",
"f7c76ad7ff9c8b54e8c397850358fa59258c6672",
"f7c76ad7ff9c8b54e8c397850358fa59258c6672",
"f7c76ad7ff9c8b54e8c397850358fa59258c6672"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: EM/F1 scores over Chinese testing set.",
"Table 2: EM/F1 score of multi-BERTs fine-tuned on different training sets and tested on different languages (En: English, Fr: French, Zh: Chinese, Jp: Japanese, Kr: Korean, xx-yy: translated from xx to yy). The text in bold means training data language is the same as testing data language.",
"Table 3: EM/F1 scores over artificially created unseen languages (English-permuted and Chinese-permuted).",
"Figure 1: PCA visualization of hidden representations from the 12-th transformer layer of multi-BERT before and after fine-tuning on English. The red points represent Chinese tokens, and the blue points are for English.",
"Table 4: EM/F1 scores on artificial code-switching datasets generated by replacing some of the words in English dataset with synonyms in another languages. (Sub. is the substitution ratio of the dataset)",
"Table 5: Answers inferenced on code-switching dataset. The predicted answers would be the same as the ground truths (gt) if we translate every word into English.",
"Table 6: EM/F1 scores over artificially created typology-manipulated dataset.",
"Figure 3: The relation of SVCCA similarity with EM/F1 scores in red and blue respectively. Each point represents a source-target language pair of datasets.",
"Figure 2: The relation of cosine similarity of answer words with EM/F1 scores in red and blue respectively. Each point represents a source-target language pair of datasets.",
"Table 7: EM/F1 scores on DRCD dev-set."
],
"file": [
"2-Table1-1.png",
"2-Table2-1.png",
"3-Table3-1.png",
"4-Figure1-1.png",
"4-Table4-1.png",
"4-Table5-1.png",
"5-Table6-1.png",
"7-Figure3-1.png",
"7-Figure2-1.png",
"8-Table7-1.png"
]
} | [
"What source-target language pairs were used in this work? "
] | [
[
"1909.09587-Zero-shot Transfer with Multi-BERT ::: Experimental Setup and Data-1",
"1909.09587-Zero-shot Transfer with Multi-BERT ::: Experimental Setup and Data-0",
"1909.09587-Zero-shot Transfer with Multi-BERT ::: Experimental Results-1",
"1909.09587-2-Table2-1.png"
]
] | [
"En-Fr, En-Zh, En-Jp, En-Kr, Zh-En, Zh-Fr, Zh-Jp, Zh-Kr to English, Chinese or Korean"
] | 39 |
1802.07862 | Multimodal Named Entity Recognition for Short Social Media Posts | We introduce a new task called Multimodal Named Entity Recognition (MNER) for noisy user-generated data such as tweets or Snapchat captions, which comprise short text with accompanying images. These social media posts often come in inconsistent or incomplete syntax and lexical notations with very limited surrounding textual contexts, bringing significant challenges for NER. To this end, we create a new dataset for MNER called SnapCaptions (Snapchat image-caption pairs submitted to public and crowd-sourced stories with fully annotated named entities). We then build upon the state-of-the-art Bi-LSTM word/character based NER models with 1) a deep image network which incorporates relevant visual context to augment textual information, and 2) a generic modality-attention module which learns to attenuate irrelevant modalities while amplifying the most informative ones to extract contexts from, adaptive to each sample and token. The proposed MNER model with modality attention significantly outperforms the state-of-the-art text-only NER models by successfully leveraging provided visual contexts, opening up potential applications of MNER on myriads of social media platforms. | {
"paragraphs": [
[
"Social media with abundant user-generated posts provide a rich platform for understanding events, opinions and preferences of groups and individuals. These insights are primarily hidden in unstructured forms of social media posts, such as in free-form text or images without tags. Named entity recognition (NER), the task of recognizing named entities from free-form text, is thus a critical step for building structural information, allowing for its use in personalized assistance, recommendations, advertisement, etc.",
"While many previous approaches BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 on NER have shown success for well-formed text in recognizing named entities via word context resolution (e.g. LSTM with word embeddings) combined with character-level features (e.g. CharLSTM/CNN), several additional challenges remain for recognizing named entities from extremely short and coarse text found in social media posts. For instance, short social media posts often do not provide enough textual contexts to resolve polysemous entities (e.g. “monopoly is da best \", where `monopoly' may refer to a board game (named entity) or a term in economics). In addition, noisy text includes a huge number of unknown tokens due to inconsistent lexical notations and frequent mentions of various newly trending entities (e.g. “xoxo Marshmelloooo \", where `Marshmelloooo' is a mis-spelling of a known entity `Marshmello', a music producer), making word embeddings based neural networks NER models vulnerable.",
"To address the challenges above for social media posts, we build upon the state-of-the-art neural architecture for NER with the following two novel approaches (Figure FIGREF1 ). First, we propose to leverage auxiliary modalities for additional context resolution of entities. For example, many popular social media platforms now provide ways to compose a post in multiple modalities - specifically image and text (e.g. Snapchat captions, Twitter posts with image URLs), from which we can obtain additional context for understanding posts. While “monopoly\" in the previous example is ambiguous in its textual form, an accompanying snap image of a board game can help disambiguate among polysemous entities, thereby correctly recognizing it as a named entity.",
"Second, we also propose a general modality attention module which chooses per decoding step the most informative modality among available ones (in our case, word embeddings, character embeddings, or visual features) to extract context from. For example, the modality attention module lets the decoder attenuate the word-level signals for unknown word tokens (“Marshmellooooo\" with trailing `o's) and amplifies character-level features intsead (capitalized first letter, lexical similarity to other known named entity token `Marshmello', etc.), thereby suppressing noise information (“UNK\" token embedding) in decoding steps. Note that most of the previous literature in NER or other NLP tasks combine word and character-level information with naive concatenation, which is vulnerable to noisy social media posts. When an auxiliary image is available, the modality attention module determines to amplify this visual context in disambiguating polysemous entities, or to attenuate visual contexts when they are irrelevant to target named entities, selfies, etc. Note that the proposed modality attention module is distinct from how attention is used in other sequence-to-sequence literature (e.g. attending to a specific token within an input sequence). Section SECREF2 provides the detailed literature review.",
"Our contributions are three-fold: we propose (1) an LSTM-CNN hybrid multimodal NER network that takes as input both image and text for recognition of a named entity in text input. To the best of our knowledge, our approach is the first work to incorporate visual contexts for named entity recognition tasks. (2) We propose a general modality attention module that selectively chooses modalities to extract primary context from, maximizing information gain and suppressing irrelevant contexts from each modality (we treat words, characters, and images as separate modalities). (3) We show that the proposed approaches outperform the state-of-the-art NER models (both with and without using additional visual contexts) on our new MNER dataset SnapCaptions, a large collection of informal and extremely short social media posts paired with unique images."
],
[
"Neural models for NER have been recently proposed, producing state-of-the-art performance on standard NER tasks. For example, some of the end-to-end NER systems BIBREF4 , BIBREF2 , BIBREF3 , BIBREF0 , BIBREF1 use a recurrent neural network usually with a CRF BIBREF5 , BIBREF6 for sequence labeling, accompanied with feature extractors for words and characters (CNN, LSTMs, etc.), and achieve the state-of-the-art performance mostly without any use of gazetteers information. Note that most of these work aggregate textual contexts via concatenation of word embeddings and character embeddings. Recently, several work have addressed the NER task specifically on noisy short text segments such as Tweets, etc. BIBREF7 , BIBREF8 . They report performance gains from leveraging external sources of information such as lexical information (POS tags, etc.) and/or from several preprocessing steps (token substitution, etc.). Our model builds upon these state-of-the-art neural models for NER tasks, and improves the model in two critical ways: (1) incorporation of visual contexts to provide auxiliary information for short media posts, and (2) addition of the modality attention module, which better incorporates word embeddings and character embeddings, especially when there are many missing tokens in the given word embedding matrix. Note that we do not explore the use of gazetteers information or other auxiliary information (POS tags, etc.) BIBREF9 as it is not the focus of our study.",
"Attention modules are widely applied in several deep learning tasks BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 . For example, they use an attention module to attend to a subset within a single input (a part/region of an image, a specific token in an input sequence of tokens, etc.) at each decoding step in an encoder-decoder framework for image captioning tasks, etc. BIBREF14 explore various attention mechanisms in NLP tasks, but do not incorporate visual components or investigate the impact of such models on noisy social media data. BIBREF15 propose to use attention for a subset of discrete source samples in transfer learning settings. Our modality attention differs from the previous approaches in that we attenuate or amplifies each modality input as a whole among multiple available modalities, and that we use the attention mechanism essentially to map heterogeneous modalities in a single joint embedding space. Our approach also allows for re-use of the same model for predicting labels even when some of the modalities are missing in input, as other modalities would still preserve the same semantics in the embeddings space.",
"Multimodal learning is studied in various domains and applications, aimed at building a joint model that extracts contextual information from multiple modalities (views) of parallel datasets.",
"The most relevant task to our multimodal NER system is the task of multimodal machine translation BIBREF16 , BIBREF17 , which aims at building a better machine translation system by taking as input a sentence in a source language as well as a corresponding image. Several standard sequence-to-sequence architectures are explored (a target-language LSTM decoder that takes as input an image first). Other previous literature include study of Canonical Correlation Analysis (CCA) BIBREF18 to learn feature correlations among multiple modalities, which is widely used in many applications. Other applications include image captioning BIBREF10 , audio-visual recognition BIBREF19 , visual question answering systems BIBREF20 , etc.",
"To the best of our knowledge, our approach is the first work to incorporate visual contexts for named entity recognition tasks."
],
[
"Figure FIGREF2 illustrates the proposed multimodal NER (MNER) model. First, we obtain word embeddings, character embeddings, and visual features (Section SECREF3 ). A Bi-LSTM-CRF model then takes as input a sequence of tokens, each of which comprises a word token, a character sequence, and an image, in their respective representation (Section SECREF4 ). At each decoding step, representations from each modality are combined via the modality attention module to produce an entity label for each token ( SECREF5 ). We formulate each component of the model in the following subsections.",
"Notations: Let INLINEFORM0 a sequence of input tokens with length INLINEFORM1 , with a corresponding label sequence INLINEFORM2 indicating named entities (e.g. in standard BIO formats). Each input token is composed of three modalities: INLINEFORM3 for word embeddings, character embeddings, and visual embeddings representations, respectively."
],
[
"Similar to the state-of-the-art NER approaches BIBREF0 , BIBREF1 , BIBREF8 , BIBREF4 , BIBREF2 , BIBREF3 , we use both word embeddings and character embeddings.",
"Word embeddings are obtained from an unsupervised learning model that learns co-occurrence statistics of words from a large external corpus, yielding word embeddings as distributional semantics BIBREF21 . Specifically, we use pre-trained embeddings from GloVE BIBREF22 .",
"Character embeddings are obtained from a Bi-LSTM which takes as input a sequence of characters of each token, similarly to BIBREF0 . An alternative approach for obtaining character embeddings is using a convolutional neural network as in BIBREF1 , but we find that Bi-LSTM representation of characters yields empirically better results in our experiments.",
"Visual embeddings: To extract features from an image, we take the final hidden layer representation of a modified version of the convolutional network model called Inception (GoogLeNet) BIBREF23 , BIBREF24 trained on the ImageNet dataset BIBREF25 to classify multiple objects in the scene. Our implementation of the Inception model has deep 22 layers, training of which is made possible via “network in network\" principles and several dimension reduction techniques to improve computing resource utilization. The final layer representation encodes discriminative information describing what objects are shown in an image, which provide auxiliary contexts for understanding textual tokens and entities in accompanying captions.",
"Incorporating this visual information onto the traditional NER system is an open challenge, and multiple approaches can be considered. For instance, one may provide visual contexts only as an initial input to decoder as in some encoder-decoder image captioning systems BIBREF26 . However, we empirically observe that an NER decoder which takes as input the visual embeddings at every decoding step (Section SECREF4 ), combined with the modality attention module (Section SECREF5 ), yields better results.",
"Lastly, we add a transform layer for each feature INLINEFORM0 before it is fed to the NER entity LSTM."
],
[
"Our MNER model is built on a Bi-LSTM and CRF hybrid model. We use the following implementation for the entity Bi-LSTM.",
" it = (Wxiht-1 + Wcict-1)",
"ct = (1-it) ct-1",
" + it tanh(Wxcxt + Whcht-1)",
"ot = (Wxoxt + Whoht-1 + Wcoct)",
"ht = LSTM(xt)",
"= ot tanh(ct)",
"where INLINEFORM0 is a weighted average of three modalities INLINEFORM1 via the modality attention module, which will be defined in Section SECREF5 . Bias terms for gates are omitted here for simplicity of notation.",
"We then obtain bi-directional entity token representations INLINEFORM0 by concatenating its left and right context representations. To enforce structural correlations between labels in sequence decoding, INLINEFORM1 is then passed to a conditional random field (CRF) to produce a label for each token maximizing the following objective. y* = y p(y|h; WCRF)",
"p(y|h; WCRF) = t t (yt-1,yt;h) y' t t (y't-1,y't;h)",
"where INLINEFORM0 is a potential function, INLINEFORM1 is a set of parameters that defines the potential functions and weight vectors for label pairs ( INLINEFORM2 ). Bias terms are omitted for brevity of formulation.",
"The model can be trained via log-likelihood maximization for the training set INLINEFORM0 :",
" L(WCRF) = i p(y|h; W)",
""
],
[
"The modality attention module learns a unified representation space for multiple available modalities (words, characters, images, etc.), and produces a single vector representation with aggregated knowledge among multiple modalities, based on their weighted importance. We motivate this module from the following observations.",
"A majority of the previous literature combine the word and character-level contexts by simply concatenating the word and character embeddings at each decoding step, e.g. INLINEFORM0 in Eq. SECREF4 . However, this naive concatenation of two modalities (word and characters) results in inaccurate decoding, specifically for unknown word token embeddings (an all-zero vector INLINEFORM1 or a random vector INLINEFORM2 is assigned for any unknown token INLINEFORM3 , thus INLINEFORM4 or INLINEFORM5 ). While this concatenation approach does not cause significant errors for well-formatted text, we observe that it induces performance degradation for our social media post datasets which contain a significant number of missing tokens.",
"Similarly, naive merging of textual and visual information ( INLINEFORM0 ) yields suboptimal results as each modality is treated equally informative, whereas in our datasets some of the images may contain irrelevant contexts to textual modalities. Hence, ideally there needs a mechanism in which the model can effectively turn the switch on and off the modalities adaptive to each sample.",
"To this end, we propose a general modality attention module, which adaptively attenuates or emphasizes each modality as a whole at each decoding step INLINEFORM0 , and produces a soft-attended context vector INLINEFORM1 as an input token for the entity LSTM. [at(w),at(c),at(v)] = (Wm[xt(w); xt(c); xt(v)] + bm )",
"t(m) = (at(m))m'{w,c,v}(at(m')) m {w,c,v}",
"xt = m{w,c,v} t(m)xt(m)",
"where INLINEFORM0 is an attention vector at each decoding step INLINEFORM1 , and INLINEFORM2 is a final context vector at INLINEFORM3 that maximizes information gain for INLINEFORM4 . Note that the optimization of the objective function (Eq. SECREF4 ) with modality attention (Eq. SECREF5 ) requires each modality to have the same dimension ( INLINEFORM5 ), and that the transformation via INLINEFORM6 essentially enforces each modality to be mapped into the same unified subspace, where the weighted average of which encodes discrimitive features for recognition of named entities.",
"When visual context is not provided with each token (as in the traditional NER task), we can define the modality attention for word and character embeddings only in a similar way: [at(w),at(c)] = (Wm[xt(w); xt(c)] + bm )",
"t(m) = (at(m))m'{w,c}(at(m')) m {w,c}",
"xt = m{w,c} t(m)xt(m)",
"Note that while we apply this modality attention module to the Bi-LSTM+CRF architecture (Section SECREF4 ) for its empirical superiority, the module itself is flexible and thus can work with other NER architectures or for other multimodal applications."
],
[
"The SnapCaptions dataset is composed of 10K user-generated image (snap) and textual caption pairs where named entities in captions are manually labeled by expert human annotators (entity types: PER, LOC, ORG, MISC). These captions are collected exclusively from snaps submitted to public and crowd-sourced stories (aka Snapchat Live Stories or Our Stories). Examples of such public crowd-sourced stories are “New York Story” or “Thanksgiving Story”, which comprise snaps that are aggregated for various public events, venues, etc. All snaps were posted between year 2016 and 2017, and do not contain raw images or other associated information (only textual captions and obfuscated visual descriptor features extracted from the pre-trained InceptionNet are available). We split the dataset into train (70%), validation (15%), and test sets (15%). The captions data have average length of 30.7 characters (5.81 words) with vocabulary size 15,733, where 6,612 are considered unknown tokens from Stanford GloVE embeddings BIBREF22 . Named entities annotated in the SnapCaptions dataset include many of new and emerging entities, and they are found in various surface forms (various nicknames, typos, etc.) To the best of our knowledge, SnapCaptions is the only dataset that contains natural image-caption pairs with expert-annotated named entities."
],
[
"Task: given a caption and a paired image (if used), the goal is to label every token in a caption in BIO scheme (B: beginning, I: inside, O: outside) BIBREF27 . We report the performance of the following state-of-the-art NER models as baselines, as well as several configurations of our proposed approach to examine contributions of each component (W: word, C: char, V: visual).",
"Bi-LSTM/CRF (W only): only takes word token embeddings (Stanford GloVE) as input. The rest of the architecture is kept the same.",
"Bi-LSTM/CRF + Bi-CharLSTM (C only): only takes a character sequence of each word token as input. (No word embeddings)",
"Bi-LSTM/CRF + Bi-CharLSTM (W+C) BIBREF0 : takes as input both word embeddings and character embeddings extracted from a Bi-CharLSTM. Entity LSTM takes concatenated vectors of word and character embeddings as input tokens.",
"Bi-LSTM/CRF + CharCNN (W+C) BIBREF1 : uses character embeddings extracted from a CNN instead.",
"Bi-LSTM/CRF + CharCNN (W+C) + Multi-task BIBREF8 : trains the model to perform both recognition (into multiple entity types) as well as segmentation (binary) tasks.",
"(proposed) Bi-LSTM/CRF + Bi-CharLSTM with modality attention (W+C): uses the modality attention to merge word and character embeddings.",
"(proposed) Bi-LSTM/CRF + Bi-CharLSTM + Inception (W+C+V): takes as input visual contexts extracted from InceptionNet as well, concatenated with word and char vectors.",
"(proposed) Bi-LSTM/CRF + Bi-CharLSTM + Inception with modality attention (W+C+V): uses the modality attention to merge word, character, and visual embeddings as input to entity LSTM."
],
[
"Table TABREF6 shows the NER performance on the Snap Captions dataset. We report both entity types recognition (PER, LOC, ORG, MISC) and named entity segmentation (named entity or not) results.",
"Parameters: We tune the parameters of each model with the following search space (bold indicate the choice for our final model): character embeddings dimension: {25, 50, 100, 150, 200, 300}, word embeddings size: {25, 50, 100, 150, 200, 300}, LSTM hidden states: {25, 50, 100, 150, 200, 300}, and INLINEFORM0 dimension: {25, 50, 100, 150, 200, 300}. We optimize the parameters with Adagrad BIBREF28 with batch size 10, learning rate 0.02, epsilon INLINEFORM1 , and decay 0.0.",
"Main Results: When visual context is available (W+C+V), we see that the model performance greatly improves over the textual models (W+C), showing that visual contexts are complimentary to textual information in named entity recognition tasks. In addition, it can be seen that the modality attention module further improves the entity type recognition performance for (W+C+V). This result indicates that the modality attention is able to focus on the most effective modality (visual, words, or characters) adaptive to each sample to maximize information gain. Note that our text-only model (W+C) with the modality attention module also significantly outperform the state-of-the-art baselines BIBREF8 , BIBREF1 , BIBREF0 that use the same textual modalities (W+C), showing the effectiveness of the modality attention module for textual models as well.",
"Error Analysis: Table TABREF17 shows example cases where incorporation of visual contexts affects prediction of named entities. For example, the token `curry' in the caption “The curry's \" is polysemous and may refer to either a type of food or a famous basketball player `Stephen Curry', and the surrounding textual contexts do not provide enough information to disambiguate it. On the other hand, visual contexts (visual tags: `parade', `urban area', ...) provide similarities to the token's distributional semantics from other training examples (snaps from “NBA Championship Parade Story\"), and thus the model successfully predicts the token as a named entity. Similarly, while the text-only model erroneously predicts `Apple' in the caption “Grandma w dat lit Apple Crisp\" as an organization (Apple Inc.), the visual contexts (describing objects related to food) help disambiguate the token, making the model predict it correctly as a non-named entity (a fruit). Trending entities (musicians or DJs such as `CID', `Duke Dumont', `Marshmello', etc.) are also recognized correctly with strengthened contexts from visual information (describing concert scenes) despite lack of surrounding textual contexts. A few cases where visual contexts harmed the performance mostly include visual tags that are unrelated to a token or its surrounding textual contexts.",
"Visualization of Modality Attention: Figure FIGREF19 visualizes the modality attention module at each decoding step (each column), where amplified modality is represented with darker color, and attenuated modality is represented with lighter color.",
"For the image-aided model (W+C+V; upper row in Figure FIGREF19 ), we confirm that the modality attention successfully attenuates irrelevant signals (selfies, etc.) and amplifies relevant modality-based contexts in prediction of a given token. In the example of “disney word essential = coffee\" with visual tags selfie, phone, person, the modality attention successfully attenuates distracting visual signals and focuses on textual modalities, consequently making correct predictions. The named entities in the examples of “Beautiful night atop The Space Needle\" and “Splash Mountain\" are challenging to predict because they are composed of common nouns (space, needle, splash, mountain), and thus they often need additional contexts to correctly predict. In the training data, visual contexts make stronger indicators for these named entities (space needle, splash mountain), and the modality attention module successfully attends more to stronger signals.",
"For text-only model (W+C), we observe that performance gains mostly come from the modality attention module better handling tokens unseen during training or unknown tokens from the pre-trained word embeddings matrix. For example, while WaRriOoOrs and Kooler Matic are missing tokens in the word embeddings matrix, it successfully amplifies character-based contexts (capitalized first letters, similarity to known entities `Golden State Warriors') and suppresses word-based contexts (word embeddings for unknown tokens `WaRriOoOrs'), leading to correct predictions. This result is significant because it shows performance of the model, with an almost identical architecture, can still improve without having to scale the word embeddings matrix indefinitely.",
"Figure FIGREF19 (b) shows the cases where the modality attention led to incorrect predictions. For example, the model predicts missing tokens HUUUGE and Shampooer incorrectly as named entities by amplifying misleading character-based contexts (capitalized first letters) or visual contexts (concert scenes, associated contexts of which often include named entities in the training dataset).",
"Sensitivity to Word Embeddings Vocabulary Size: In order to isolate the effectiveness of the modality attention module on textual models in handling missing tokens, we report the performance with varying word embeddings vocabulary sizes in Table TABREF20 . By increasing the number of missing tokens artificially by randomly removing words from the word embeddings matrix (original vocab size: 400K), we observe that while the overall performance degrades, the modality attention module is able to suppress the peformance degradation. Note also that the performance gap generally gets bigger as we decrease the vocabulary size of the word embeddings matrix. This result is significant in that the modality attention is able to improve the model more robust to missing tokens without having to train an indefinitely large word embeddings matrix for arbitrarily noisy social media text datasets."
],
[
"We proposed a new multimodal NER (MNER: image + text) task on short social media posts. We demonstrated for the first time an effective MNER system, where visual information is combined with textual information to outperform traditional text-based NER baselines. Our work can be applied to myriads of social media posts or other articles across multiple platforms which often include both text and accompanying images. In addition, we proposed the modality attention module, a new neural mechanism which learns optimal integration of different modes of correlated information. In essence, the modality attention learns to attenuate irrelevant or uninformative modal information while amplifying the primary modality to extract better overall representations. We showed that the modality attention based model outperforms other state-of-the-art baselines when text was the only modality available, by better combining word and character level information."
]
],
"section_name": [
"Introduction",
"Related Work",
"Proposed Methods",
"Features",
"Bi-LSTM + CRF for Multimodal NER",
"Modality Attention",
"SnapCaptions Dataset",
"Baselines",
"Results: SnapCaptions Dataset",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"7c576fb32125b57086ba9d3d5864dfa71c0588e5",
"baa3510b050f21f679a85a618a0e00a69a2dff1c"
],
"answer": [
{
"evidence": [
"For the image-aided model (W+C+V; upper row in Figure FIGREF19 ), we confirm that the modality attention successfully attenuates irrelevant signals (selfies, etc.) and amplifies relevant modality-based contexts in prediction of a given token. In the example of “disney word essential = coffee\" with visual tags selfie, phone, person, the modality attention successfully attenuates distracting visual signals and focuses on textual modalities, consequently making correct predictions. The named entities in the examples of “Beautiful night atop The Space Needle\" and “Splash Mountain\" are challenging to predict because they are composed of common nouns (space, needle, splash, mountain), and thus they often need additional contexts to correctly predict. In the training data, visual contexts make stronger indicators for these named entities (space needle, splash mountain), and the modality attention module successfully attends more to stronger signals."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"For the image-aided model (W+C+V; upper row in Figure FIGREF19 ), we confirm that the modality attention successfully attenuates irrelevant signals (selfies, etc.) and amplifies relevant modality-based contexts in prediction of a given token."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"Error Analysis: Table TABREF17 shows example cases where incorporation of visual contexts affects prediction of named entities. For example, the token `curry' in the caption “The curry's \" is polysemous and may refer to either a type of food or a famous basketball player `Stephen Curry', and the surrounding textual contexts do not provide enough information to disambiguate it. On the other hand, visual contexts (visual tags: `parade', `urban area', ...) provide similarities to the token's distributional semantics from other training examples (snaps from “NBA Championship Parade Story\"), and thus the model successfully predicts the token as a named entity. Similarly, while the text-only model erroneously predicts `Apple' in the caption “Grandma w dat lit Apple Crisp\" as an organization (Apple Inc.), the visual contexts (describing objects related to food) help disambiguate the token, making the model predict it correctly as a non-named entity (a fruit). Trending entities (musicians or DJs such as `CID', `Duke Dumont', `Marshmello', etc.) are also recognized correctly with strengthened contexts from visual information (describing concert scenes) despite lack of surrounding textual contexts. A few cases where visual contexts harmed the performance mostly include visual tags that are unrelated to a token or its surrounding textual contexts."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"On the other hand, visual contexts (visual tags: `parade', `urban area', ...) provide similarities to the token's distributional semantics from other training examples (snaps from “NBA Championship Parade Story\"), and thus the model successfully predicts the token as a named entity."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"3c4b3220606fb88d3a06bccd2b0db49f43e11bf8",
"8415c4e16fc2c88ee20f5c333cb3608afee37a50"
],
"answer": [
{
"evidence": [
"(proposed) Bi-LSTM/CRF + Bi-CharLSTM with modality attention (W+C): uses the modality attention to merge word and character embeddings.",
"(proposed) Bi-LSTM/CRF + Bi-CharLSTM + Inception (W+C+V): takes as input visual contexts extracted from InceptionNet as well, concatenated with word and char vectors.",
"(proposed) Bi-LSTM/CRF + Bi-CharLSTM + Inception with modality attention (W+C+V): uses the modality attention to merge word, character, and visual embeddings as input to entity LSTM."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"(proposed) Bi-LSTM/CRF + Bi-CharLSTM with modality attention (W+C): uses the modality attention to merge word and character embeddings.\n\n(proposed) Bi-LSTM/CRF + Bi-CharLSTM + Inception (W+C+V): takes as input visual contexts extracted from InceptionNet as well, concatenated with word and char vectors.\n\n(proposed) Bi-LSTM/CRF + Bi-CharLSTM + Inception with modality attention (W+C+V): uses the modality attention to merge word, character, and visual embeddings as input to entity LSTM."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"Our contributions are three-fold: we propose (1) an LSTM-CNN hybrid multimodal NER network that takes as input both image and text for recognition of a named entity in text input. To the best of our knowledge, our approach is the first work to incorporate visual contexts for named entity recognition tasks. (2) We propose a general modality attention module that selectively chooses modalities to extract primary context from, maximizing information gain and suppressing irrelevant contexts from each modality (we treat words, characters, and images as separate modalities). (3) We show that the proposed approaches outperform the state-of-the-art NER models (both with and without using additional visual contexts) on our new MNER dataset SnapCaptions, a large collection of informal and extremely short social media posts paired with unique images."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Our contributions are three-fold: we propose (1) an LSTM-CNN hybrid multimodal NER network that takes as input both image and text for recognition of a named entity in text input."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"cb6466b997427248e68f9235e7420a49a9d3d062",
"ea27ea55112361a6f73f11ec7a3b27242cfc4f3e"
],
"answer": [
{
"evidence": [
"The SnapCaptions dataset is composed of 10K user-generated image (snap) and textual caption pairs where named entities in captions are manually labeled by expert human annotators (entity types: PER, LOC, ORG, MISC). These captions are collected exclusively from snaps submitted to public and crowd-sourced stories (aka Snapchat Live Stories or Our Stories). Examples of such public crowd-sourced stories are “New York Story” or “Thanksgiving Story”, which comprise snaps that are aggregated for various public events, venues, etc. All snaps were posted between year 2016 and 2017, and do not contain raw images or other associated information (only textual captions and obfuscated visual descriptor features extracted from the pre-trained InceptionNet are available). We split the dataset into train (70%), validation (15%), and test sets (15%). The captions data have average length of 30.7 characters (5.81 words) with vocabulary size 15,733, where 6,612 are considered unknown tokens from Stanford GloVE embeddings BIBREF22 . Named entities annotated in the SnapCaptions dataset include many of new and emerging entities, and they are found in various surface forms (various nicknames, typos, etc.) To the best of our knowledge, SnapCaptions is the only dataset that contains natural image-caption pairs with expert-annotated named entities."
],
"extractive_spans": [
"PER, LOC, ORG, MISC"
],
"free_form_answer": "",
"highlighted_evidence": [
"The SnapCaptions dataset is composed of 10K user-generated image (snap) and textual caption pairs where named entities in captions are manually labeled by expert human annotators (entity types: PER, LOC, ORG, MISC)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The SnapCaptions dataset is composed of 10K user-generated image (snap) and textual caption pairs where named entities in captions are manually labeled by expert human annotators (entity types: PER, LOC, ORG, MISC). These captions are collected exclusively from snaps submitted to public and crowd-sourced stories (aka Snapchat Live Stories or Our Stories). Examples of such public crowd-sourced stories are “New York Story” or “Thanksgiving Story”, which comprise snaps that are aggregated for various public events, venues, etc. All snaps were posted between year 2016 and 2017, and do not contain raw images or other associated information (only textual captions and obfuscated visual descriptor features extracted from the pre-trained InceptionNet are available). We split the dataset into train (70%), validation (15%), and test sets (15%). The captions data have average length of 30.7 characters (5.81 words) with vocabulary size 15,733, where 6,612 are considered unknown tokens from Stanford GloVE embeddings BIBREF22 . Named entities annotated in the SnapCaptions dataset include many of new and emerging entities, and they are found in various surface forms (various nicknames, typos, etc.) To the best of our knowledge, SnapCaptions is the only dataset that contains natural image-caption pairs with expert-annotated named entities."
],
"extractive_spans": [
"PER",
"LOC",
"ORG",
"MISC"
],
"free_form_answer": "",
"highlighted_evidence": [
"The SnapCaptions dataset is composed of 10K user-generated image (snap) and textual caption pairs where named entities in captions are manually labeled by expert human annotators (entity types: PER, LOC, ORG, MISC). "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"420e0f6adcc90fd2c33a75adae1f863fadfca75f",
"e7fcf32b76e9b92489d3abecd598deb8ed2127cc"
],
"answer": [
{
"evidence": [
"Task: given a caption and a paired image (if used), the goal is to label every token in a caption in BIO scheme (B: beginning, I: inside, O: outside) BIBREF27 . We report the performance of the following state-of-the-art NER models as baselines, as well as several configurations of our proposed approach to examine contributions of each component (W: word, C: char, V: visual)."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Task: given a caption and a paired image (if used), the goal is to label every token in a caption in BIO scheme (B: beginning, I: inside, O: outside) BIBREF27 . "
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"6de25a17ed3727d60caf432a197aebf508fe45a0",
"dfbae90c192c6c6ee920598b0ebba94dd892748d"
],
"answer": [
{
"evidence": [
"The SnapCaptions dataset is composed of 10K user-generated image (snap) and textual caption pairs where named entities in captions are manually labeled by expert human annotators (entity types: PER, LOC, ORG, MISC). These captions are collected exclusively from snaps submitted to public and crowd-sourced stories (aka Snapchat Live Stories or Our Stories). Examples of such public crowd-sourced stories are “New York Story” or “Thanksgiving Story”, which comprise snaps that are aggregated for various public events, venues, etc. All snaps were posted between year 2016 and 2017, and do not contain raw images or other associated information (only textual captions and obfuscated visual descriptor features extracted from the pre-trained InceptionNet are available). We split the dataset into train (70%), validation (15%), and test sets (15%). The captions data have average length of 30.7 characters (5.81 words) with vocabulary size 15,733, where 6,612 are considered unknown tokens from Stanford GloVE embeddings BIBREF22 . Named entities annotated in the SnapCaptions dataset include many of new and emerging entities, and they are found in various surface forms (various nicknames, typos, etc.) To the best of our knowledge, SnapCaptions is the only dataset that contains natural image-caption pairs with expert-annotated named entities."
],
"extractive_spans": [
"10K user-generated image (snap) and textual caption pairs"
],
"free_form_answer": "",
"highlighted_evidence": [
"The SnapCaptions dataset is composed of 10K user-generated image (snap) and textual caption pairs where named entities in captions are manually labeled by expert human annotators (entity types: PER, LOC, ORG, MISC)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The SnapCaptions dataset is composed of 10K user-generated image (snap) and textual caption pairs where named entities in captions are manually labeled by expert human annotators (entity types: PER, LOC, ORG, MISC). These captions are collected exclusively from snaps submitted to public and crowd-sourced stories (aka Snapchat Live Stories or Our Stories). Examples of such public crowd-sourced stories are “New York Story” or “Thanksgiving Story”, which comprise snaps that are aggregated for various public events, venues, etc. All snaps were posted between year 2016 and 2017, and do not contain raw images or other associated information (only textual captions and obfuscated visual descriptor features extracted from the pre-trained InceptionNet are available). We split the dataset into train (70%), validation (15%), and test sets (15%). The captions data have average length of 30.7 characters (5.81 words) with vocabulary size 15,733, where 6,612 are considered unknown tokens from Stanford GloVE embeddings BIBREF22 . Named entities annotated in the SnapCaptions dataset include many of new and emerging entities, and they are found in various surface forms (various nicknames, typos, etc.) To the best of our knowledge, SnapCaptions is the only dataset that contains natural image-caption pairs with expert-annotated named entities."
],
"extractive_spans": [],
"free_form_answer": "10000",
"highlighted_evidence": [
"The SnapCaptions dataset is composed of 10K user-generated image (snap) and textual caption pairs where named entities in captions are manually labeled by expert human annotators (entity types: PER, LOC, ORG, MISC). "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"Do they inspect their model to see if their model learned to associate image parts with words related to entities?",
"Does their NER model learn NER from both text and images?",
"Which types of named entities do they recognize?",
"Can named entities in SnapCaptions be discontigious?",
"How large is their MNER SnapCaptions dataset?"
],
"question_id": [
"7cd22ca9e107d2b13a7cc94252aaa9007976b338",
"adbf33c6144b2f5c40d0c6a328a92687a476f371",
"f7a89b9cd2792f23f2cb43d50a01b8218a6fbb24",
"a0543b4afda15ea47c1e623c7f00d4aaca045be0",
"1591068b747c94f45b948e12edafe74b5e721047"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Multimodal NER + modality attention. (a) Visual contexts help recognizing polysemous entity names (‘Monopoly’ as in a board game versus an economics term). (b) Modality attention successfully suppresses word embeddings of a unknown token (‘Marshmelloooo’ with erroneously trailing ‘o’s), and focuses on character-based context (e.g. capitalized first letter, and lexical similarity to a known named entity (‘Marshmello’, a music producer)) for correct prediction.",
"Figure 2: The main architecture for our multimodal NER (MNER) network with modality attention. At each decoding step, word embeddings, character embeddings, and visual features are merged with modality attention. Bi-LSTM/CRF takes as input each token and produces an entity label.",
"Table 1: NER performance on the SnapCaptions dataset with varying modalities (W: word, C: char, V: visual). We report precision, recall, and F1 score for both entity types recognition (PER, LOC, ORG, MISC) and entity segmentation (untyped recognition - named entity or not) tasks.",
"Table 2: Error analysis: when do images help NER? Ground-truth labels (GT) and predictions of our model with vision input (W+C+V) and the one without (W+C) for the underlined named entities (or false positives) are shown. For interpretability, visual tags (label output of InceptionNet) are presented instead of actual feature vectors used.",
"Figure 3: Visualization of modality attention (a) successful cases and (b) unsuccessful ones from SnapCaptions test data. For each decoding step of a token (column), the modality attention module amplifies the most relevant modality (darker) while attenuating irrelevant modalities (lighter). The model makes final predictions based on weighted signals from all modalities. For interpretability, visual tags (label output of InceptionNet) are presented instead of actual feature vectors used. GT: ground-truth, Pred: prediction by our model. Modalities- W: words, C: characters, V: visual.",
"Table 3: NER performance (F1) on SnapCaptions with varying word embeddings vocabulary size. Models being compared: (W+C) Bi-LSTM/CRF + BiCharLSTM w/ and w/o modality attention (M.A.)"
],
"file": [
"1-Figure1-1.png",
"3-Figure2-1.png",
"6-Table1-1.png",
"7-Table2-1.png",
"8-Figure3-1.png",
"8-Table3-1.png"
]
} | [
"How large is their MNER SnapCaptions dataset?"
] | [
[
"1802.07862-SnapCaptions Dataset-0"
]
] | [
"10000"
] | 40 |
2004.01853 | STEP: Sequence-to-Sequence Transformer Pre-training for Document Summarization | Abstractive summarization aims to rewrite a long document to its shorter form, which is usually modeled as a sequence-to-sequence (Seq2Seq) learning problem. Seq2Seq Transformers are powerful models for this problem. Unfortunately, training large Seq2Seq Transformers on limited supervised summarization data is challenging. We, therefore, propose STEP (as shorthand for Sequence-to-Sequence Transformer Pre-training), which can be trained on large scale unlabeled documents. Specifically, STEP is pre-trained using three different tasks, namely sentence reordering, next sentence generation, and masked document generation. Experiments on two summarization datasets show that all three tasks can improve performance upon a heavily tuned large Seq2Seq Transformer which already includes a strong pre-trained encoder by a large margin. By using our best task to pre-train STEP, we outperform the best published abstractive model on CNN/DailyMail by 0.8 ROUGE-2 and New York Times by 2.4 ROUGE-2. | {
"paragraphs": [
[
"Large pre-trained language models BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4 improved the state-of-the-art of various natural language understanding (NLU) tasks such as question answering (e.g., SQuAD; BIBREF5), natural language inference (e.g., MNLI; BIBREF6) as well as text classification BIBREF7. These models (i.e., large LSTMs; BIBREF8 or Transformers; BIBREF9) are pre-trained on large scale unlabeled text with language modeling BIBREF0, BIBREF1, masked language modeling BIBREF2, BIBREF4 and permutation language modeling BIBREF3 objectives. In NLU tasks, pre-trained language models are mostly used as text encoders.",
"Abstractive document summarization aims to rewrite a long document to its shorter form while still retaining its important information. Different from extractive document summarization that extacts important sentences, abstractive document summarization may paraphrase original sentences or delete contents from them. For more details on differences between abstractive and extractive document summary, we refer the interested readers to Nenkova:McKeown:2011 and Section SECREF2. This task is usually framed as a sequence-to-sequence learning problem BIBREF10, BIBREF11. In this paper, we adopt the sequence-to-sequence (seq2seq) Transformer BIBREF9, which has been demonstrated to be the state-of-the-art for seq2seq modeling BIBREF9, BIBREF12. Unfortunately, training large seq2seq Transformers on limited supervised summarization data is challenging BIBREF12 (refer to Section SECREF5). The seq2seq Transformer has an encoder and a decoder Transformer. Abstractive summarization requires both encoding of an input document and generation of a summary usually containing multiple sentences. As mentioned earlier, we can take advantage of recent pre-trained Transformer encoders for the document encoding part as in liu2019text. However, liu2019text leave the decoder randomly initialized. In this paper, we aim to pre-train both the encoder (i.e., the encoding part) and decoder (i.e., the generation part) of a seq2seq Transformer , which is able to improve abstractive summarization performance.",
"Based on the above observations, we propose Step (as shorthand for Sequence-to-Sequence TransformEr Pre-training), which can be pre-trained on large scale unlabeled documents. Specifically, we design three tasks for seq2seq model pre-training, namely Sentence Reordering (SR), Next Sentence Generation (NSG), and Masked Document Generation (MDG). SR learns to recover a document with randomly shuffled sentences. NSG generates the next segment of a document based on its preceding segment. MDG recovers a masked document to its original form. After pre-trianing Step using the three tasks on unlabeled documents, we fine-tune it on supervised summarization datasets.",
"We evaluate our methods on two summarization datasets (i.e., the CNN/DailyMail and the New York Times datasets). Experiments show that all three tasks we propose can improve upon a heavily tuned large seq2seq Transformer which already includes a strong pre-trained encoder by a large margin. Compared to the best published abstractive models, Step improves the ROUGE-2 by 0.8 on the CNN/DailyMail dataset and by 2.4 on the New York Times dataset using our best performing task for pre-training. Human experiments also show that Step can produce significantly better summaries in comparison with recent strong abstractive models."
],
[
"This section introduces extractive and abstractive document summarization as well as pre-training methods for natural language processing tasks."
],
[
"Extractive summarization systems learn to find the informative sentences in a document as its summary. This task is usually viewed as a sentence ranking problem BIBREF13, BIBREF14 using scores from a binary (sequence) classification model, which predicts whether a sentence is in the summary or not. Extractive neural models employ hierarchical LSTMs/CNNs as the feature learning part of the binary (sequence) classifier BIBREF15, BIBREF16, BIBREF17, BIBREF18, which largely outperforms discrete feature based models BIBREF19, BIBREF20, BIBREF21. Very recently, the feature learning part was replaced again with pre-trained transformers BIBREF22, BIBREF23 that lead to another huge improvement of summarization performance. However, extractive models have their own limitations. For example, the extracted sentences might be too long and redundant. Besides, human written summaries in their nature are abstractive. Therefore, we focus on abstractive summarization in this paper."
],
[
"The goal of abstractive summarization is to generate summaries by rewriting a document, which is a sequence-to-sequence learning problem. seq2seq attentive LSTMs BIBREF8, BIBREF24 are employed in nallapati2016abstractive. Even these models are extended with copy mechanism BIBREF25, coverage model BIBREF11 and reinforcement learning BIBREF26, their results are still very close to that of Lead3 which selects the leading three sentences of a document as its summary. One possible reason is that LSTMs without pre-training are not powerful enough. liu2019text used a seq2seq Transformer model with its encoder initialized with a pre-trained Transformer (i.e., BERT; BIBREF2) and achieved the state-of-the-art performance. Our work goes one step further, we propose a method to pre-train the decoder together with the encoder and then initialize both the encoder and decoder of a summarization model with the pre-trained Transformers.",
"There is also a line of work that bridges extractive and abstractive models with reinforcement learning BIBREF27, attention fusion BIBREF28 and bottom-up attention BIBREF29, while our model is conceptually simpler."
],
[
"Pre-training methods draw a lot of attention recently. peters2018deep and radford:2019:arxiv pre-trained LSTM and Transformer encoders using language modeling objectives. To leverage the context in both directions, BIBREF2 proposed BERT, which is trained with the mask language modeling objective. XLNet BIBREF3 is trained with permutation language modeling objective, which removes the independence assumption of masked tokens in BERT. RoBERTa BIBREF4 extends BERT with more training data and better training strategies. All the methods above focus on pre-training an encoder, while we propose methods to pre-train both the encoder and decoder of a seq2seq model.",
"dong2019unified proposed a Transformer language model that can be used for both natural language understanding and generation tasks, which is pre-trained using masked, unidirectional and seq2seq language modeling objectives. Their method tries to pre-train a seq2seq Transformer with its encoder and decoder parameters shared. Differently, we pre-train a seq2seq Transformer with separate parameters for the encoder and decoder. song2019mass proposed a method to pre-train a seq2seq Transformer by masking a span of text and then predicting the original text with masked tokens at other positions. Their pre-training task is similar to our Masked Document Generation task, but we apply a different masking strategy and predict the original text without masked tokens. Besides, we propose another two tasks for seq2seq model pre-training. BIBREF30 tested their model on sentence-level tasks (e.g., machine translation and sentence compression), while we aim to solve document-level tasks (e.g., abstractive document summarization)."
],
[
"This section first introduces the backbone architecture of our abstractive summarization model Step. We then describe methods to pre-train Step and finally move on to the fine-tuning on summarization datasets."
],
[
"In this work, the task of abstractive document summarization is modeled as a sequence-to-sequence learning problem, where a document is viewed as a sequence of tokens and its corresponding summary as another sequence of tokens. We adopt the seq2seq Transformer architecture BIBREF9, which includes an encoder Transformer and a decoder Transformer. Both the encoder and decoder Transformers have multiple layers and each layer contains a multi-head attentive sub-layer followed by a fully connected sub-layer with residual connections BIBREF31 and layer normalization BIBREF32.",
"Let us use $X = (x_1, x_2, \\dots , x_{|X|})$ to denote a document and use $Y = (y_1, y_2, \\dots , y_{|Y|})$ to denote its summary. The encoder takes the document $X$ as input and transforms it to its contextual representations. The decoder learns to generate the summary $Y$ one token at a time based on the contextual representations and all preceding tokens that have been generated so far:",
"where $y_{<t}$ stands for all tokens before position $t$ (i.e., $y_{<t}=(y_1, y_2, \\dots , y_{t-1})$). This model can be trained by minimizing the negative log-likelihood of the training document-summary pairs."
],
[
"Training a seq2seq Transformer model on a summarization dataset from scratch is difficult due to the limited number of document-summary pairs. Pre-trained Transformer encoders such as BERT BIBREF2 and RoBERTa BIBREF4 have achieved great success in many natural language understanding tasks. Therefore, we first initialize the encoder of our seq2seq Transformer summarization model Step with an existing pre-trained Transformer encoder (i.e., RoBERTa) to enhance its language understanding capabilities. To help Step gain language generation capabilities and the abilities of associating generated text with encoder outputs, we continue to pre-train it on unlabeled text. In the following, we describe our pre-training tasks."
],
[
"A document is typically composed of multiple sentences separated by full stops. In this task, we first shuffle the document by sentences and then recover the original document. There are several reasons why we design this task. First, a summary of a document usually consists of multiple sentences. We expect that Step learns to generate long and coherent summaries (across sentences). The output of the task (i.e., the original document) also contains multiple sentences. Second, sentence reordering (or content reordering) is necessary for summarization. According to the statistics on training sets of our summarization datasets, contents of the original documents are reordered in their summaries for 40% of cases. We define content reordering as follows. For each document-summary pair, we first map each sentence in the summary to one sentence in its paired document by maximizing the ROUGE score. If the sequence of sentences in the summary is different from the sequence of their mapped sentences in the original document, we count this as one content reordering. Thirdly, abstractive summary requires reproducing factual details (e.g., named entities, figures) from source text. We also expect Step to learn to copy tokens. Here is a formal definition of this task. Let us change the notation of a document slightly in this paragraph. Let $X=(S_1, S_2, \\dots , S_m)$ denote a document, where $S_i = (w^i_1, w^i_2, \\dots , w^i_{|S_i|})$ is a sentence in it, $w^i_j$ is a word in $S_i$ and $m$ is the number of sentences. $X$ is still a sequence of tokens (by concatenating tokens in all sentences). Let $A=\\text{\\tt permutation}(m)=(a_1,a_2,\\dots , a_m)$ denote a permuted range of $(1, 2, \\dots , m)$ and therefore $\\hat{X}_S=(S_{a_1}, S_{a_2}, \\dots , S_{a_m})$ is the shuffled document. Note that $\\hat{X}_S$ is a sequence of tokens by concatenating all shuffled sentences. Step can be trained on $\\langle \\hat{X}_S, X \\rangle $ pairs constructed from unlabeled documents, as demonstrated in Figure FIGREF5.",
"Note that document rotation is a special case of sentence reordering with significant amount of partially ordered sentences, which we believe is a simpler task. In this work, we thus only consider the general case of sentence reordering."
],
[
"The second pre-training task leverages the natural order of text. Next Sentence Generation (NSG) uses one span of text in a document to predict its next span of text, as shown in Figure FIGREF5. Specifically, we split a document into two segments (i.e., $G_1$ and $G_2$). Note that each segment might contain multiple sentences, which fits the document summarization task very well, since either a document or its summary usually includes multiple sentences. Intuitively, in a document, sentences are highly correlated with their preceding sentences due to the context dependent nature of documents or language. We intend our model to learn to generate multiple sentences and also learn to focus on preceding context.",
"We have at least two options for the splitting position of the two segments. Option one: the position right after a full-stop symbol (such as period, question mark, etc.) is selected as the splitting point, which ensures full sentences for each segment. Option two: the splitting point can be at any position within the document. We choose the second option, which may lead to incomplete sentences in segments. We intend to force the encoder and decoder to understand input text without complete information, which we believe is more challenging compared to option one. Besides, as a common wisdom in abstractive summarization, documents are truncated to a fixed number of tokens, which may also contain incomplete sentences. We use option two to reduce the pre-training and fine-tuning input mismatch. In this task, we train the model Step on large amount of $\\langle G_1, G_2\\rangle $ pairs constructed following the option two splitting strategy.",
"Next sentence prediction has been used in skip-thought vectors BIBREF33. There are two differences. First, each segment in their model only has one sentence; second, they use this task to pre-train an encoder rather than an entire seq2seq model. BIBREF2 introduced a task named next sentence prediction (NSP), which is different from this task. NSP is a classification task, but NSG is a generation task, which intends to pre-train a generation model."
],
[
"The third task we consider is Masked Document Generation (MDG) that learns to recover a document with a masked span of tokens (see Figure FIGREF5). For simplicity, a document consisting of a sequence of tokens is denoted as $X=(x_1, x_2, \\cdots , x_{|X|})$. We randomly sample the length of the span $l$ from a discrete uniform distribution $\\mathcal {U}(a, b)$ and the span start position $k$ from another discrete uniform distribution $\\mathcal {U}(1, |X|-l+1)$ (see Section SECREF4 for more details). Thus, $\\mathcal {M}=(x_k, x_{k+1}, \\cdots , x_{k+l-1})$ is the text span to be masked.",
"One straightforward masking strategy is to replace each token residing in $\\mathcal {M}$ with a special [MASK] token. However, we refrain from doing so because of the following three reasons. Usually, [MASK] tokens will not appear in downstream tasks. Second, entirely masking a continuous sub-sequence of $X$ may make the whole document incomprehensible, which might be too challenging for our model to learn. Third, similar to SR, avoiding replacing every token with [MASK] also helps our model learn the ability of copying tokens from the input while preserving the ability of generating novel tokens.",
"In the sub-sequence $\\mathcal {M}$, each token is processed with one of the three strategies: 1) replaced with the [MASK] token; 2) replaced with a random token; 3) remain unchanged. Inspired by BERT BIBREF2, for 80% tokens, we follow strategy 1). In 10% of cases, we employ strategy 2) and we use strategy 3) for the remaining 10% of cases. Let $\\hat{X}_M$ denote the document after the application of our masking strategy. We could create infinite amount of $\\langle \\hat{X}_M,X\\rangle $ pairs to train Step.",
"During pre-training, we could also employ all the three tasks (i.e., SR, NSG, MDG) together. For each training batch, we randomly choose one task and each task is used for $1/3$ of the time."
],
[
"After pre-training Step with the three tasks introduced in Section SECREF9, we fine-tune the model on abstractive document summarization datasets. The fine-tuning process is straightforward. We simply continue to train Step on the supervised document-summary pairs. Similar to other seq2seq summarization models, we do beam search during the generation of summaries."
],
[
"In this section, we present the experimental setup for evaluating our summarization models. We first introduce the datasets used for our experiments. Then we describe training details of our models as well as our evaluation protocols."
],
[
"We assess the summarization performance of our models on two benchmark datasets: the CNN/DailyMail (CNNDM) dataset BIBREF34, BIBREF11 and the New York Times (NYT) dataset BIBREF35. We pre-train our models on the GIGA-CM dataset introduced in zhang-etal-2019-hibert."
],
[
"CNNDM contains news articles and their associated highlights (i.e., summaries) collected from the CNN and Daily Mail Online websites. Following previous work BIBREF11, BIBREF22, BIBREF23, we use the non-anonymized version of CNNDM. Specifically, we preprocess the dataset with the publicly available scripts provided by see2017get and obtain 287,226 document-summary pairs for training, 13,368 for validation and 11,490 for test."
],
[
"The NYT dataset is a collection of articles along with multi-sentence summaries written by library scientists. We closely follow the preprocessing procedures described in durrett2016learning and liu2019text. The test set is constructed by including all articles published on January 1, 2017 or later, which contains 9,076 articles. The remaining 100,834 articles are split into a training set of 96,834 examples and a validation set of 4,000 examples. As in BIBREF36, we also remove articles whose summaries contain less than 50 words from the test set, and the resulting test set contains 3,452 examples."
],
[
"To pre-train our model with the tasks introduced in Section SECREF9, following the procedures in BIBREF22, we created the GIGA-CM dataset, which contains only unlabeled documents. The training set of GIGA-CM is composed of 6,521,658 documents sampled from the English Gigaword dataset and the training documents in CNNDM. We used the 13,368 documents in the validation split of CNNDM as its validation set. Note that the Gigaword dataset overlaps with the NYT dataset and we therefore exclude the test set of NYT from the training set of GIGA-CM.",
"For CNNDM, NYT and GIGA-CM datasets, we segment and tokenize documents and/or summaries (GIGA-CM only contains documents) using the Stanford CoreNLP toolkit BIBREF37. To reduce the vocabulary size, we further apply the UTF8 based BPE BIBREF38 introduced in GPT-2 BIBREF39 to all datasets. As a common wisdom in abstractive summarization, documents and summaries in CNNDM and NYT are usually truncated to 512 and 256 tokens, respectively.",
"We leverage unlabeled documents differently for different pre-training tasks (see Section SECREF9). We first split each document into 512 token segments if it contains more than 512 tokens (segments or documents with less than 512 tokens are removed). In Sentence Reordering (SR) and Masked Document Generation (MDG), we use the segment after transformation to predict the original segment. We set the minimum masked length $a=100$ and the maximum masked length $b=256$ in MDG. In Next Sentence Generation (NSG), each segment is used to predict its next 256 tokens."
],
[
"As mentioned in Section SECREF3, our model is a Seq2Seq Transformer model BIBREF9. The encoder is initialized with the $\\text{RoBERTa}_{\\text{LARGE}}$ model BIBREF4, and therefore they share the same architecture. Specifically, the encoder is a 24-layer Transformer. Each layer has 16 attention heads and its hidden size and feed-forward filter size are 1,024 and 4,096, respectively. The decoder is shallower with 6 layers. The hidden size and number of attention head of the decoder are identical to these of the encoder, but the feed-forward filter size is 2,048. We use a smaller filter size in the decoder to reduce the computational and memory cost. The dropout rates of all layers in the encoder are set to 0.1 and all dropout rates in the decoder are set to 0.3. Our models are optimized using Adam BIBREF40 with $\\beta _1=0.9$, $\\beta _2=0.98$. The other optimization hyper-parameters for pre-training and fine-tuning are different. In the pre-training stage, the encoder is initialized with a pre-trained model while the decoder is randomly initialized. Therefore, we used two separate optimizers for the encoder and decoder with a smaller learning rate for the encoder optimizer. Learning rates of the encoder and decoder are set to $2e-5$ and $1e-4$ with 10,000 warmup steps, respectively. We also adopted the same learning rate schedule strategies as BIBREF9. We used smaller batch sizes for datasets with less examples (i.e., 1,024 for GIGA-CM, 256 for CNNDM and 128 for NYT) to ensure each epoch has sufficient number of model updates. We trained our models until their convergence of validation perplexities (around 30 epochs on GIGA-CM, 60 epochs on CNNDM and 40 epochs on NYT). One epoch on GIGA-CM takes around 24 hours with 8 Nvidia Tesla V100 GPUs. The time costs for different pre-training tasks are close.",
"Most of the hyper-parameters in the fine-tuning stage are the same as these in the pre-training stage. The differences are as follows. The learning rates for both the encoder and decoder are set to $2e-5$ with 4,000 warmup steps, since both the encoder and decoder are already pre-trained. We trained our models for 50 epochs (saved per epoch) and selected the best model w.r.t. ROUGE score on the validation set . During decoding, we applied beam search with beam size of 5. Following BIBREF26, we also blocked repeating trigrams during beam search and tuned the minimum summary length on the validation set. Similar to the pre-training process, the datasets with less instances were fine-tuned with smaller batch size (i.e., 768 for CNNDM and 64 for NYT)."
],
[
"We used ROUGE BIBREF41 to measure the quality of different summarization model outputs. We reported full-length F1 based ROUGE-1, ROUGE-2 and ROUGE-L scores on CNNDM, while we used the limited-length recall based ROUGE-1, ROUGE-2 and ROUGE-L on NYT following BIBREF36. The ROUGE scores are computed using the ROUGE-1.5.5.pl script.",
"Since summaries generated by abstractive models may produce disfluent or ungrammatical outputs, we also evaluated abstractive systems by eliciting human judgements. Following previous work BIBREF15, BIBREF17, 20 documents are randomly sampled from the test split of CNNDM. Participants are presented with a document and a list of outputs generated by different abstractive summarization systems. Then they are asked to rank the outputs according to informativeness (does the summary capture the informative part of the document?), fluency (is the summary grammatical?), and succinctness (does the summary express the document clearly in a few words?)"
],
[
"The results on the CNNDM are summarized in Table TABREF25. The first and second blocks show results of previous extractive and abstractive models, respectively. Results of Step are all listed in the third block. Lead3 is a baseline which simply takes the first three sentences of a document as its summary. BERTExt BIBREF23 is an extractive model fine-tuning on BERT BIBREF2 that outperforms other extractive systems. PTGen BIBREF11, DRM BIBREF26, and DCA BIBREF42 are sequence-to-sequence learning based models extended with copy and coverage mechanism, reinforcement learning, and deep communicating agents individually. BottomUp BIBREF29 assisted summary generation with a word prediction model. BERTAbs BIBREF23 and UniLM BIBREF43 are both pre-training based seq2seq summarization models. We also implemented three abstractive models as our baselines. Transformer-S2S is 6-layer seq2seq Transformer BIBREF9 with random initialization. When we replaced the encoder of Transformer-S2S with $\\text{RoBERTa}_\\text{BASE}$ BIBREF4, $\\text{RoBERTa}_\\text{BASE}$-S2S outperforms Transformer-S2S by nearly 2 ROUGE, which demonstrates the effectiveness of pre-trained models. With even larger pre-trained model $\\text{RoBERTa}_\\text{LARGE}$, $\\text{RoBERTa}$-S2S is comparable with the best published abstractive model UniLM BIBREF43.",
"Based on $\\text{RoBERTa}$-S2S (the sizes of Step and $\\text{RoBERTa}$-S2S are identical), we study the effect of different pre-training tasks (see Section SECREF9). We first pre-train Step on unlabeled documents of CNNDM training split to get quick feedback, denoted as Step (in-domain). From the top part of the third block in Table TABREF25, we can see that Sentence Reordering (SR), Next Sentence Generation (NSG) and Masked Document Generation (MDG) can all improve $\\text{RoBERTa}$-S2S significantly measured by the ROUGE script. Note that according to the ROUGE script, $\\pm 0.22$ ROUGE almost always means a significant difference with $p < 0.05$. Interesting, even Step is pre-trained on 230 million words, it outperforms UniLM that is pre-trained on 3,000 million words BIBREF43. When we pre-train Step on even larger dataset (i.e., GIGA-CM), the results are further improved and Step outperforms all models in comparison, as listed in the bottom part of Table TABREF25.",
"Table TABREF26 presents results on NYT dataset. Following the same evaluation protocol as BIBREF36, we adopted the limited-length recall based ROUGE, where we truncated the predicted summaries to the length of the gold ones. Again, the first and second blocks show results of previous extractive and abstractive models, respectively. Results of Step are listed in the third block. Similar to the trends in CNNDM, Step leads significant performance gains (with $p<0.05$) compared to all other models in Table TABREF26.",
"Among all three pre-training tasks, SR works slightly better than the other two tasks (i.e., NSG and MDG). We also tried to randomly use all the three tasks during training with 1/3 probability each (indicated as ALL). Interesting, we observed that, in general, All outperforms all three tasks when employing unlabeled documents of training splits of CNNDM or NYT, which might be due to limited number of unlabeled documents of the training splits. After adding more data (i.e., GIAG-CM) to pre-training, SR consistently achieves highest ROUGE-2 on both CNNDM and NYT. We conclude that SR is the most effective task for pre-training since sentence reordering task requires comprehensively understanding a document in a wide coverage, going beyond individual words and sentences, which is highly close to the essense of abstractive document summarization."
],
[
"We also conducted human evaluation with 20 documents randomly sampled from the test split of CNNDM. We compared the best preforming Step model (i.e., pre-training on the GIGA-CM dataset using SR task) with human references (denoted as Gold), $\\text{RoBERTa}$-S2S, and two pre-training based models, BERTAbs BIBREF23 and UniLM BIBREF43. Participants were asked to rank the outputs of these systems from best to worst. We report the proportions of system rankings and mean rank (lower is better) in Table TABREF29. The output of Step is selected as the best for the 25% of cases and we obtained lower mean rank than all systems except for Gold, which shows the participants' preference for our model. Then we converted ranking numbers into ratings (i.e., rank $i$ is converted into $6-i$) and applied the student $t$-test on the ratings. Step is significantly better than all other systems in comparison with $p<0.05$. But it still lags behind human. One possible reason is that Step (as well as other systems) only takes the first 512 tokens of a long document as input and thus may lose information residing in the following tokens."
],
[
"We proposed Step, a seq2seq transformer pre-training approach, for abstractive document summarization. Specifically, three pre-training tasks are designed, sentence reordering, next sentence generation, and masked document generation. When we only employ the unlabeled documents in the training splits of summarization datasets to pre-training Step with our proposed tasks, the summarization model based on the pre-trained Step outperforms the best published abstractive system. Involving large scale data to pre-training leads to larger performance gains. By using the best performing pre-training task, Step achieves 0.8 absolute ROUGE-2 improvements on CNN/DailyMail and 2.4 absolute ROUGE-2 improvements on New York Times. In the future, we would like to investigate other tasks to pre-train the seq2seq transformer model. Pre-training for unsupervised abstractive summarization is also an interesting direction and worth exploration."
]
],
"section_name": [
"Introduction",
"Related Work",
"Related Work ::: Extractive Summarization",
"Related Work ::: Abstractive Summarization",
"Related Work ::: Pre-training",
"Sequence-to-Sequence Transformer Pre-training",
"Sequence-to-Sequence Transformer Pre-training ::: Architecture",
"Sequence-to-Sequence Transformer Pre-training ::: Pre-training Tasks",
"Sequence-to-Sequence Transformer Pre-training ::: Pre-training Tasks ::: Sentence Reordering",
"Sequence-to-Sequence Transformer Pre-training ::: Pre-training Tasks ::: Next Sentence Generation",
"Sequence-to-Sequence Transformer Pre-training ::: Pre-training Tasks ::: Masked Document Generation",
"Sequence-to-Sequence Transformer Pre-training ::: Fine-tuning",
"Experimental Setup",
"Experimental Setup ::: Datasets",
"Experimental Setup ::: Datasets ::: CNNDM",
"Experimental Setup ::: Datasets ::: NYT",
"Experimental Setup ::: Datasets ::: GIGA-CM",
"Experimental Setup ::: Implementation Details",
"Experimental Setup ::: Evaluations",
"Results ::: Automatic Evaluation",
"Results ::: Human Evaluation",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"76a501b9515d4193ab7a8ffb1deb205281a48991",
"96d2a4126ea53aa76f8c2b0a144d1c2e01942abd"
],
"answer": [
{
"evidence": [
"Based on the above observations, we propose Step (as shorthand for Sequence-to-Sequence TransformEr Pre-training), which can be pre-trained on large scale unlabeled documents. Specifically, we design three tasks for seq2seq model pre-training, namely Sentence Reordering (SR), Next Sentence Generation (NSG), and Masked Document Generation (MDG). SR learns to recover a document with randomly shuffled sentences. NSG generates the next segment of a document based on its preceding segment. MDG recovers a masked document to its original form. After pre-trianing Step using the three tasks on unlabeled documents, we fine-tune it on supervised summarization datasets."
],
"extractive_spans": [],
"free_form_answer": "A task for seq2seq model pra-training that recovers a masked document to its original form.",
"highlighted_evidence": [
"Specifically, we design three tasks for seq2seq model pre-training, namely Sentence Reordering (SR), Next Sentence Generation (NSG), and Masked Document Generation (MDG). "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Based on the above observations, we propose Step (as shorthand for Sequence-to-Sequence TransformEr Pre-training), which can be pre-trained on large scale unlabeled documents. Specifically, we design three tasks for seq2seq model pre-training, namely Sentence Reordering (SR), Next Sentence Generation (NSG), and Masked Document Generation (MDG). SR learns to recover a document with randomly shuffled sentences. NSG generates the next segment of a document based on its preceding segment. MDG recovers a masked document to its original form. After pre-trianing Step using the three tasks on unlabeled documents, we fine-tune it on supervised summarization datasets."
],
"extractive_spans": [
"recovers a masked document to its original form"
],
"free_form_answer": "",
"highlighted_evidence": [
"MDG recovers a masked document to its original form. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"3d813e2f3187795e8ed3399472568a7c6347b6ee",
"d03ad5f7e925befe6ae0d20e98157bcf24988b16"
],
"answer": [
{
"evidence": [
"Among all three pre-training tasks, SR works slightly better than the other two tasks (i.e., NSG and MDG). We also tried to randomly use all the three tasks during training with 1/3 probability each (indicated as ALL). Interesting, we observed that, in general, All outperforms all three tasks when employing unlabeled documents of training splits of CNNDM or NYT, which might be due to limited number of unlabeled documents of the training splits. After adding more data (i.e., GIAG-CM) to pre-training, SR consistently achieves highest ROUGE-2 on both CNNDM and NYT. We conclude that SR is the most effective task for pre-training since sentence reordering task requires comprehensively understanding a document in a wide coverage, going beyond individual words and sentences, which is highly close to the essense of abstractive document summarization."
],
"extractive_spans": [
"SR"
],
"free_form_answer": "",
"highlighted_evidence": [
"Among all three pre-training tasks, SR works slightly better than the other two tasks (i.e., NSG and MDG)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Among all three pre-training tasks, SR works slightly better than the other two tasks (i.e., NSG and MDG). We also tried to randomly use all the three tasks during training with 1/3 probability each (indicated as ALL). Interesting, we observed that, in general, All outperforms all three tasks when employing unlabeled documents of training splits of CNNDM or NYT, which might be due to limited number of unlabeled documents of the training splits. After adding more data (i.e., GIAG-CM) to pre-training, SR consistently achieves highest ROUGE-2 on both CNNDM and NYT. We conclude that SR is the most effective task for pre-training since sentence reordering task requires comprehensively understanding a document in a wide coverage, going beyond individual words and sentences, which is highly close to the essense of abstractive document summarization."
],
"extractive_spans": [
"SR"
],
"free_form_answer": "",
"highlighted_evidence": [
"Among all three pre-training tasks, SR works slightly better than the other two tasks (i.e., NSG and MDG)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"infinity",
"infinity"
],
"paper_read": [
"no",
"no"
],
"question": [
"What is masked document generation?",
"Which of the three pretraining tasks is the most helpful?"
],
"question_id": [
"193ee49ae0f8827a6e67388a10da59e137e7769f",
"ed2eb4e54b641b7670ab5a7060c7b16c628699ab"
],
"question_writer": [
"798ee385d7c8105b83b032c7acc2347588e09d61",
"798ee385d7c8105b83b032c7acc2347588e09d61"
],
"search_query": [
"long document summarization",
"long document summarization"
],
"topic_background": [
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Pre-training of STEP. The document (x1, x2, · · · , x8) contains three sentences (i.e., SENT. 1, SENT. 2 and SENT. 3). STEP adopts the SEQ2SEQ Transformer architecture. It takes the transformed document (i.e., a shuffled document, the first segment of a document, or a masked document) as input and learns to recover the original document (or part of the original document) by generation. SR: Sentence Reordering; NSG: Next Sentence Generation; MDG: Masked Document Generation.",
"Table 1: Results on the test split of CNNDM using fulllength F1 based ROUGE-1 (R-1), ROUGE-2 (R-2) and ROUGE-L (R-L). ∗ indicates significant improvements (p < 0.05 measured with the ROUGE script) compared to models in the first two blocks.",
"Table 2: Results on the test set of NYT dataset using limited-length recall based ROUGE. ∗ indicates significant improvements (p < 0.05 measured with the ROUGE script) to models in the first two blocks.",
"Table 3: Human evaluation results: proportions of system rankings. MR: mean rank (the lower the better)."
],
"file": [
"4-Figure1-1.png",
"7-Table1-1.png",
"7-Table2-1.png",
"8-Table3-1.png"
]
} | [
"What is masked document generation?"
] | [
[
"2004.01853-Introduction-2"
]
] | [
"A task for seq2seq model pra-training that recovers a masked document to its original form."
] | 41 |
1710.03348 | What does Attention in Neural Machine Translation Pay Attention to? | Attention in neural machine translation provides the possibility to encode relevant parts of the source sentence at each translation step. As a result, attention is considered to be an alignment model as well. However, there is no work that specifically studies attention and provides analysis of what is being learned by attention models. Thus, the question remains how attention is similar to or different from traditional alignment. In this paper, we provide a detailed analysis of attention and compare it to traditional alignment. We answer the question of whether attention is only capable of modelling translational equivalence or whether it captures more information. We show that attention differs from alignment in some cases and captures useful information beyond alignment. | {
"paragraphs": [
[
"Neural machine translation (NMT) has gained a lot of attention recently due to its substantial improvements in machine translation quality achieving state-of-the-art performance for several languages BIBREF0 , BIBREF1 , BIBREF2 . The core architecture of neural machine translation models is based on the general encoder-decoder approach BIBREF3 . Neural machine translation is an end-to-end approach that learns to encode source sentences into distributed representations and decode these representations into sentences in the target language. Among the different neural MT models, attentional NMT BIBREF4 , BIBREF5 has become popular due to its capability to use the most relevant parts of the source sentence at each translation step. This capability also makes the attentional model superior in translating longer sentences BIBREF4 , BIBREF5 .",
"Figure FIGREF1 shows an example of how attention uses the most relevant source words to generate a target word at each step of the translation. In this paper we focus on studying the relevance of the attended parts, especially cases where attention is `smeared out' over multiple source words where their relevance is not entirely obvious, see, e.g., “would\" and “like\" in Figure FIGREF1 . Here, we ask whether these are due to errors of the attention mechanism or are a desired behavior of the model.",
"Since the introduction of attention models in neural machine translation BIBREF4 various modifications have been proposed BIBREF5 , BIBREF6 , BIBREF7 . However, to the best of our knowledge there is no study that provides an analysis of what kind of phenomena is being captured by attention. There are some works that have looked to attention as being similar to traditional word alignment BIBREF8 , BIBREF6 , BIBREF7 , BIBREF9 . Some of these approaches also experimented with training the attention model using traditional alignments BIBREF8 , BIBREF7 , BIBREF9 . liu-EtAl:2016:COLING have shown that attention could be seen as a reordering model as well as an alignment model.",
"In this paper, we focus on investigating the differences between attention and alignment and what is being captured by the attention mechanism in general. The questions that we are aiming to answer include: Is the attention model only capable of modelling alignment? And how similar is attention to alignment in different syntactic phenomena?",
"Our analysis shows that attention models traditional alignment in some cases more closely while it captures information beyond alignment in others. For instance, attention agrees with traditional alignments to a high degree in the case of nouns. However, it captures other information rather than only the translational equivalent in the case of verbs.",
"This paper makes the following contributions: 1) We provide a detailed comparison of attention in NMT and word alignment. 2) We show that while different attention mechanisms can lead to different degrees of compliance with respect to word alignments, global compliance is not always helpful for word prediction. 3) We show that attention follows different patterns depending on the type of the word being generated. 4) We demonstrate that attention does not always comply with alignment. We provide evidence showing that the difference between attention and alignment is due to attention model capability to attend the context words influencing the current word translation."
],
[
"liu-EtAl:2016:COLING investigate how training the attention model in a supervised manner can benefit machine translation quality. To this end they use traditional alignments obtained by running automatic alignment tools (GIZA++ BIBREF10 and fast_align BIBREF11 ) on the training data and feed it as ground truth to the attention network. They report some improvements in translation quality arguing that the attention model has learned to better align source and target words. The approach of training attention using traditional alignments has also been proposed by others BIBREF9 , BIBREF8 . chen2016guided show that guided attention with traditional alignment helps in the domain of e-commerce data which includes lots of out of vocabulary (OOV) product names and placeholders, but not much in the other domains. alkhouli-EtAl:2016:WMT have separated the alignment model and translation model, reasoning that this avoids propagation of errors from one model to the other as well as providing more flexibility in the model types and training of the models. They use a feed-forward neural network as their alignment model that learns to model jumps in the source side using HMM/IBM alignments obtained by using GIZA++.",
"shi-padhi-knight:2016:EMNLP2016 show that various kinds of syntactic information are being learned and encoded in the output hidden states of the encoder. The neural system for their experimental analysis is not an attentional model and they argue that attention does not have any impact for learning syntactic information. However, performing the same analysis for morphological information, belinkov2017neural show that attention has also some effect on the information that the encoder of neural machine translation system encodes in its output hidden states. As part of their analysis they show that a neural machine translation system that has an attention model can learn the POS tags of the source side more efficiently than a system without attention.",
"Recently, koehn2017six carried out a brief analysis of how much attention and alignment match in different languages by measuring the probability mass that attention gives to alignments obtained from an automatic alignment tool. They also report differences based on the most attended words.",
"The mixed results reported by chen2016guided, alkhouli-EtAl:2016:WMT, liu-EtAl:2016:COLING on optimizing attention with respect to alignments motivates a more thorough analysis of attention models in NMT."
],
[
"This section provides a short background on attention and discusses two most popular attention models which are also used in this paper. The first model is a non-recurrent attention model which is equivalent to the “global attention\" method proposed by DBLPjournalscorrLuongPM15. The second attention model that we use in our investigation is an input-feeding model similar to the attention model first proposed by bahdanau-EtAl:2015:ICLR and turned to a more general one and called input-feeding by DBLPjournalscorrLuongPM15. Below we describe the details of both models.",
"Both non-recurrent and input-feeding models compute a context vector INLINEFORM0 at each time step. Subsequently, they concatenate the context vector to the hidden state of decoder and pass it through a non-linearity before it is fed into the softmax output layer of the translation network. DISPLAYFORM0 ",
"The difference of the two models lays in the way they compute the context vector. In the non-recurrent model, the hidden state of the decoder is compared to each hidden state of the encoder. Often, this comparison is realized as the dot product of vectors. Then the comparison result is fed to a softmax layer to compute the attention weight. DISPLAYFORM0 DISPLAYFORM1 ",
"Here INLINEFORM0 is the hidden state of the decoder at time INLINEFORM1 , INLINEFORM2 is INLINEFORM3 th hidden state of the encoder and INLINEFORM4 is the length of the source sentence. Then the computed alignment weights are used to compute a weighted sum over the encoder hidden states which results in the context vector mentioned above: DISPLAYFORM0 ",
"The input-feeding model changes the context vector computation in a way that at each step INLINEFORM0 the context vector is aware of the previously computed context INLINEFORM1 . To this end, the input-feeding model feeds back its own INLINEFORM2 to the network and uses the resulting hidden state instead of the context-independent INLINEFORM3 , to compare to the hidden states of the encoder. This is defined in the following equations: DISPLAYFORM0 DISPLAYFORM1 ",
"Here, INLINEFORM0 is the function that the stacked LSTM applies to the input, INLINEFORM1 is the last generated target word, and INLINEFORM2 is the output of previous time step of the input-feeding network itself, meaning the output of Equation EQREF2 in the case that context vector has been computed using INLINEFORM3 from Equation EQREF7 ."
],
[
"As mentioned above, it is a commonly held assumption that attention corresponds to word alignments. To verify this, we investigate whether higher consistency between attention and alignment leads to better translations."
],
[
"In order to compare attentions of multiple systems as well as to measure the difference between attention and word alignment, we convert the hard word alignments into soft ones and use cross entropy between attention and soft alignment as a loss function. For this purpose, we use manual alignments provided by RWTH German-English dataset as the hard alignments. The statistics of the data are given in Table TABREF8 . We convert the hard alignments to soft alignments using Equation EQREF10 . For unaligned words, we first assume that they have been aligned to all the words in the source side and then do the conversion. DISPLAYFORM0 ",
"Here INLINEFORM0 is the set of source words aligned to target word INLINEFORM1 and INLINEFORM2 is the number of source words in the set.",
"After conversion of the hard alignments to soft ones, we compute the attention loss as follows: DISPLAYFORM0 ",
"Here INLINEFORM0 is the source sentence and INLINEFORM1 is the weight of the alignment link between source word INLINEFORM2 and the target word (see Equation EQREF10 ). INLINEFORM3 is the attention weight INLINEFORM4 (see Equation EQREF4 ) of the source word INLINEFORM5 , when generating the target word INLINEFORM6 .",
"In our analysis, we also look into the relation between translation quality and the quality of the attention with respect to the alignments. For measuring the quality of attention, we use the attention loss defined in Equation EQREF11 . As a measure of translation quality, we choose the loss between the output of our NMT system and the reference translation at each translation step, which we call word prediction loss. The word prediction loss for word INLINEFORM0 is logarithm of the probability given in Equation EQREF12 . DISPLAYFORM0 ",
"Here INLINEFORM0 is the source sentence, INLINEFORM1 is target word at time step INLINEFORM2 , INLINEFORM3 is the target history given by the reference translation and INLINEFORM4 is given by Equation EQREF2 for either non-recurrent or input-feeding attention models.",
"Spearman's rank correlation is used to compute the correlation between attention loss and word prediction loss: DISPLAYFORM0 ",
"where INLINEFORM0 and INLINEFORM1 are the ranks of the attention losses and word prediction losses, respectively, INLINEFORM2 is the covariance between two input variables, and INLINEFORM3 and INLINEFORM4 are the standard deviations of INLINEFORM5 and INLINEFORM6 .",
"If there is a close relationship between word prediction quality and consistency of attention versus alignment, then there should be high correlation between word prediction loss and attention loss. Figure FIGREF13 shows an example with different levels of consistency between attention and word alignments. For the target words “will\" and “come\" the attention is not focused on the manually aligned word but distributed between the aligned word and other words. The focus of this paper is examining cases where attention does not follow alignment, answering the questions whether those cases represent errors or desirable behavior of the attention model."
],
[
"As another informative variable in our analysis, we look into the attention concentration. While most word alignments only involve one or a few words, attention can be distributed more freely. We measure the concentration of attention by computing the entropy of the attention distribution: DISPLAYFORM0 "
],
[
"We conduct our analysis using the two different attention models described in Section SECREF3 . Our first attention model is the global model without input-feeding as introduced by DBLPjournalscorrLuongPM15. The second model is the input-feeding model BIBREF5 , which uses recurrent attention. Our NMT system is a unidirectional encoder-decoder system as described in BIBREF5 , using 4 recurrent layers.",
"We trained the systems with dimension size of 1,000 and batch size of 80 for 20 epochs. The vocabulary for both source and target side is set to be the 30K most common words. The learning rate is set to be 1 and a maximum gradient norm of 5 has been used. We also use a dropout rate of 0.3 to avoid overfitting."
],
[
"We train both of the systems on the WMT15 German-to-English training data, see Table TABREF18 for some statistics. Table TABREF17 shows the BLEU scores BIBREF12 for both systems on different test sets.",
"Since we use POS tags and dependency roles in our analysis, both of which are based on words, we chose not to use BPE BIBREF13 which operates at the sub-word level.",
"We report alignment error rate (AER) BIBREF14 , which is commonly used to measure alignment quality, in Table TABREF20 to show the difference between attentions and human alignments provided by RWTH German-English dataset. To compute AER over attentions, we follow DBLPjournalscorrLuongPM15 to produce hard alignments from attentions by choosing the most attended source word for each target word. We also use GIZA++ BIBREF10 to produce automatic alignments over the data set to allow for a comparison between automatically generated alignments and the attentions generated by our systems. GIZA++ is run in both directions and alignments are symmetrized using the grow-diag-final-and refined alignment heuristic.",
"As shown in Table TABREF20 , the input-feeding system not only achieves a higher BLEU score, but also uses attentions that are closer to the human alignments.",
"Table TABREF21 compares input-feeding and non-recurrent attention in terms of attention loss computed using Equation EQREF11 . Here the losses between the attention produced by each system and the human alignments is reported. As expected, the difference in attention losses are in line with AER.",
"The difference between these comparisons is that AER only takes the most attended word into account while attention loss considers the entire attention distribution."
],
[
"Based on the results in Section SECREF19 , one might be inclined to conclude that the closer the attention is to the word alignments the better the translation. However, chen2016guided, liu-EtAl:2016:COLING, alkhouli-EtAl:2016:WMT report mixed results by optimizing their NMT system with respect to word prediction and alignment quality. These findings warrant a more fine-grained analysis of attention. To this end, we include POS tags in our analysis and study the patterns of attention based on POS tags of the target words. We choose POS tags because they exhibit some simple syntactic characteristics. We use the coarse grained universal POS tags BIBREF15 given in Table TABREF25 .",
"To better understand how attention accuracy affects translation quality, we analyse the relationship between attention loss and word prediction loss for individual part-of-speech classes. Figure FIGREF22 shows how attention loss differs when generating different POS tags. One can see that attention loss varies substantially across different POS tags. In particular, we focus on the cases of NOUN and VERB which are the most frequent POS tags in the dataset. As shown, the attention of NOUN is the closest to alignments on average. But the average attention loss for VERB is almost two times larger than the loss for NOUN.",
"Considering this difference and the observations in Section SECREF19 , a natural follow-up would be to focus on getting the attention of verbs to be closer to alignments. However, Figure FIGREF22 shows that the average word prediction loss for verbs is actually smaller compared to the loss for nouns. In other words, although the attention for verbs is substantially more inconsistent with the word alignments than for nouns, the NMT system translates verbs more accurately than nouns on average.",
"To formalize this relationship we compute Spearman's rank correlation between word prediction loss and attention loss, based on the POS tags of the target side, for the input-feeding model, see Figure FIGREF27 .",
"The low correlation for verbs confirms that attention to other parts of source sentence rather than the aligned word is necessary for translating verbs and that attention does not necessarily have to follow alignments. However, the higher correlation for nouns means that consistency of attention with alignments is more desirable. This could, in a way, explain the mixed result reported for training attention using alignments BIBREF9 , BIBREF7 , BIBREF8 . Especially the results by chen2016guided in which large improvements are achieved for the e-commerce domain which contains many OOV product names and placeholders, but no or very weak improvements were achieved over common domains."
],
[
"In word alignment, most target words are aligned to one source word. The average number of source words aligned to nouns and verbs is 1.1 and 1.2 respectively. To investigate to what extent this also holds for attention we measure the attention concentration by computing the entropy of the attention distribution, see Equation EQREF16 .",
"Figure FIGREF28 shows the average entropy of attention based on POS tags. As shown, nouns have one of the lowest entropies meaning that on average the attention for nouns tends to be concentrated. This also explains the closeness of the attention to alignments for nouns. In addition, the correlation between attention entropy and attention loss in case of nouns is high as shown in Figure FIGREF28 . This means that attention entropy can be used as a measure of closeness of attention to alignment in the case of nouns.",
"The higher attention entropy for verbs, in Figure FIGREF28 , shows that the attention is more distributed compared to nouns. The low correlation between attention entropy and word prediction loss (see Figure FIGREF32 ) shows that attention concentration is not required when translating into verbs. This also confirms that the correct translation of verbs requires the systems to pay attention to different parts of the source sentence.",
"Another interesting observation here is the low correlation for pronouns (PRON) and particles (PRT), see Figure FIGREF32 . As can be seen in Figure FIGREF28 , these tags have more distributed attention comparing to nouns, for example. This could either mean that the attention model does not know where to focus or it deliberately pays attention to multiple, somehow relevant, places to be able to produce a better translation. The latter is supported by the relatively low word prediction losses, shown in the Figure FIGREF22 ."
],
[
"To further understand under which conditions attention is paid to words other than the aligned words, we study the distribution of attention over the source words. First, we measure how much attention is paid to the aligned words for each POS tag, on average. To this end, we compute the percentage of the probability mass that the attention model has assigned to aligned words for each POS tag, see Table TABREF35 .",
"One can notice that less than half of the attention is paid to alignment points for most of the POS tags. To examine how the rest of attention in each case has been distributed over the source sentence we measure the attention distribution over dependency roles in the source side. We first parse the source side of RWTH data using the ParZu parser BIBREF16 . Then we compute how the attention probability mass given to the words other than the alignment points, is distributed over dependency roles. Table TABREF33 gives the most attended roles for each POS tag. Here, we focus on POS tags discussed earlier. One can see that the most attended roles when translating to nouns include adjectives and determiners and in the case of translating to verbs, it includes auxiliary verbs, adverbs (including negation), subjects, and objects."
],
[
"In this paper, we have studied attention in neural machine translation and provided an analysis of the relation between attention and word alignment. We have shown that attention agrees with traditional alignment to a certain extent. However, this differs substantially by attention mechanism and the type of the word being generated. We have shown that attention has different patterns based on the POS tag of the target word. The concentrated pattern of attention and the relatively high correlations for nouns show that training the attention with explicit alignment labels is useful for generating nouns. However, this is not the case for verbs, since the large portion of attention being paid to words other than alignment points, is already capturing other relevant information. Training attention with alignments in this case will force the attention model to forget these useful information. This explains the mixed results reported when guiding attention to comply with alignments BIBREF9 , BIBREF7 , BIBREF8 ."
],
[
"This research was funded in part by the Netherlands Organization for Scientific Research (NWO) under project numbers 639.022.213 and 612.001.218."
]
],
"section_name": [
"Introduction",
"Related Work",
"Attention Models",
"Comparing Attention with Alignment",
"Measuring Attention-Alignment Accuracy",
"Measuring Attention Concentration",
"Empirical Analysis of Attention Behaviour",
"Impact of Attention Mechanism",
"Alignment Quality Impact on Translation",
"Attention Concentration",
"Attention Distribution",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"3d92228db538f65dd9270be4f1239025075556aa",
"ff36a022df95271312240a036ef6cbea768ffd13"
],
"answer": [
{
"evidence": [
"Our analysis shows that attention models traditional alignment in some cases more closely while it captures information beyond alignment in others. For instance, attention agrees with traditional alignments to a high degree in the case of nouns. However, it captures other information rather than only the translational equivalent in the case of verbs.",
"To better understand how attention accuracy affects translation quality, we analyse the relationship between attention loss and word prediction loss for individual part-of-speech classes. Figure FIGREF22 shows how attention loss differs when generating different POS tags. One can see that attention loss varies substantially across different POS tags. In particular, we focus on the cases of NOUN and VERB which are the most frequent POS tags in the dataset. As shown, the attention of NOUN is the closest to alignments on average. But the average attention loss for VERB is almost two times larger than the loss for NOUN."
],
"extractive_spans": [
"it captures other information rather than only the translational equivalent in the case of verbs"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our analysis shows that attention models traditional alignment in some cases more closely while it captures information beyond alignment in others. For instance, attention agrees with traditional alignments to a high degree in the case of nouns. However, it captures other information rather than only the translational equivalent in the case of verbs.",
"One can see that attention loss varies substantially across different POS tags. In particular, we focus on the cases of NOUN and VERB which are the most frequent POS tags in the dataset. As shown, the attention of NOUN is the closest to alignments on average. But the average attention loss for VERB is almost two times larger than the loss for NOUN."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"One can notice that less than half of the attention is paid to alignment points for most of the POS tags. To examine how the rest of attention in each case has been distributed over the source sentence we measure the attention distribution over dependency roles in the source side. We first parse the source side of RWTH data using the ParZu parser BIBREF16 . Then we compute how the attention probability mass given to the words other than the alignment points, is distributed over dependency roles. Table TABREF33 gives the most attended roles for each POS tag. Here, we focus on POS tags discussed earlier. One can see that the most attended roles when translating to nouns include adjectives and determiners and in the case of translating to verbs, it includes auxiliary verbs, adverbs (including negation), subjects, and objects."
],
"extractive_spans": [],
"free_form_answer": "Alignment points of the POS tags.",
"highlighted_evidence": [
"One can notice that less than half of the attention is paid to alignment points for most of the POS tags. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"81f159f32a88e7a86ada8f656c30a15a7fe24e6d",
"8717f0a48f7e4cae29d3115425a741f828a8136e"
],
"answer": [
{
"evidence": [
"We train both of the systems on the WMT15 German-to-English training data, see Table TABREF18 for some statistics. Table TABREF17 shows the BLEU scores BIBREF12 for both systems on different test sets.",
"In order to compare attentions of multiple systems as well as to measure the difference between attention and word alignment, we convert the hard word alignments into soft ones and use cross entropy between attention and soft alignment as a loss function. For this purpose, we use manual alignments provided by RWTH German-English dataset as the hard alignments. The statistics of the data are given in Table TABREF8 . We convert the hard alignments to soft alignments using Equation EQREF10 . For unaligned words, we first assume that they have been aligned to all the words in the source side and then do the conversion. DISPLAYFORM0"
],
"extractive_spans": [
"WMT15 German-to-English",
"RWTH German-English dataset"
],
"free_form_answer": "",
"highlighted_evidence": [
"We train both of the systems on the WMT15 German-to-English training data, see Table TABREF18 for some statistics.",
"For this purpose, we use manual alignments provided by RWTH German-English dataset as the hard alignments."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In order to compare attentions of multiple systems as well as to measure the difference between attention and word alignment, we convert the hard word alignments into soft ones and use cross entropy between attention and soft alignment as a loss function. For this purpose, we use manual alignments provided by RWTH German-English dataset as the hard alignments. The statistics of the data are given in Table TABREF8 . We convert the hard alignments to soft alignments using Equation EQREF10 . For unaligned words, we first assume that they have been aligned to all the words in the source side and then do the conversion. DISPLAYFORM0"
],
"extractive_spans": [
"RWTH German-English dataset"
],
"free_form_answer": "",
"highlighted_evidence": [
"For this purpose, we use manual alignments provided by RWTH German-English dataset as the hard alignments."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"43ffae0431febc50910579a1dabb1576abae8bd3",
"ee927b57c4713480623e0c083591282a329d0764"
],
"answer": [
{
"evidence": [
"To better understand how attention accuracy affects translation quality, we analyse the relationship between attention loss and word prediction loss for individual part-of-speech classes. Figure FIGREF22 shows how attention loss differs when generating different POS tags. One can see that attention loss varies substantially across different POS tags. In particular, we focus on the cases of NOUN and VERB which are the most frequent POS tags in the dataset. As shown, the attention of NOUN is the closest to alignments on average. But the average attention loss for VERB is almost two times larger than the loss for NOUN.",
"FLOAT SELECTED: Figure 6: Correlation of attention entropy and word prediction loss for the input-feeding system."
],
"extractive_spans": [],
"free_form_answer": "For certain POS tags, e.g. VERB, PRON.",
"highlighted_evidence": [
"As shown, the attention of NOUN is the closest to alignments on average. But the average attention loss for VERB is almost two times larger than the loss for NOUN.",
"FLOAT SELECTED: Figure 6: Correlation of attention entropy and word prediction loss for the input-feeding system."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"As another informative variable in our analysis, we look into the attention concentration. While most word alignments only involve one or a few words, attention can be distributed more freely. We measure the concentration of attention by computing the entropy of the attention distribution: DISPLAYFORM0"
],
"extractive_spans": [
"most word alignments only involve one or a few words, attention can be distributed more freely"
],
"free_form_answer": "",
"highlighted_evidence": [
"While most word alignments only involve one or a few words, attention can be distributed more freely."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"What useful information does attention capture?",
"What datasets are used?",
"In what cases is attention different from alignment?"
],
"question_id": [
"beac555c4aea76c88f19db7cc901fa638765c250",
"91e326fde8b0a538bc34d419541b5990d8aae14b",
"044f922604b4b3f42ae381419fd5cd5624fa0637"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Visualization of the attention paid to the relevant parts of the source sentence for each generated word of a translation example. See how the attention is ‘smeared out’ over multiple source words in the case of “would” and “like”.",
"Table 1: Statistics of manual alignments provided by RWTH German-English data.",
"Figure 2: An example of inconsistent attention and alignment. The outlined cells show the manual alignments from the RWTH dataset (see Table 1). See how attention is deviated from alignment points in the case of “will” and “come”.",
"Table 2: Performance of our experimental system in BLEU on different standard WMT test sets.",
"Table 5: Average loss between attention generated by input-feeding and non-recurrent systems and the manual alignment over RWTH GermanEnglish data.",
"Figure 3: Average attention losses and word prediction losses from the input-feeding system.",
"Table 6: List of the universal POS tags used in our analysis.",
"Figure 4: Correlation between word prediction loss and attention loss for the input-feeding model.",
"Figure 5: Attention entropy and its correlation with attention loss for the input-feeding system.",
"Figure 6: Correlation of attention entropy and word prediction loss for the input-feeding system.",
"Table 7: The most attended dependency roles with their received attention percentage from the attention probability mass paid to the words other than the alignment points. Here, we focus on the POS tags discussed earlier.",
"Table 8: Distribution of attention probability mass (in %) over alignment points and the rest of the words for each POS tag."
],
"file": [
"1-Figure1-1.png",
"3-Table1-1.png",
"4-Figure2-1.png",
"5-Table2-1.png",
"5-Table5-1.png",
"6-Figure3-1.png",
"6-Table6-1.png",
"6-Figure4-1.png",
"7-Figure5-1.png",
"7-Figure6-1.png",
"8-Table7-1.png",
"8-Table8-1.png"
]
} | [
"What useful information does attention capture?",
"In what cases is attention different from alignment?"
] | [
[
"1710.03348-Attention Distribution-1",
"1710.03348-Introduction-4",
"1710.03348-Alignment Quality Impact on Translation-1"
],
[
"1710.03348-7-Figure6-1.png",
"1710.03348-Alignment Quality Impact on Translation-1"
]
] | [
"Alignment points of the POS tags.",
"For certain POS tags, e.g. VERB, PRON."
] | 42 |
1809.02286 | Dynamic Compositionality in Recursive Neural Networks with Structure-aware Tag Representations | Most existing recursive neural network (RvNN) architectures utilize only the structure of parse trees, ignoring syntactic tags which are provided as by-products of parsing. We present a novel RvNN architecture that can provide dynamic compositionality by considering comprehensive syntactic information derived from both the structure and linguistic tags. Specifically, we introduce a structure-aware tag representation constructed by a separate tag-level tree-LSTM. With this, we can control the composition function of the existing word-level tree-LSTM by augmenting the representation as a supplementary input to the gate functions of the tree-LSTM. In extensive experiments, we show that models built upon the proposed architecture obtain superior or competitive performance on several sentence-level tasks such as sentiment analysis and natural language inference when compared against previous tree-structured models and other sophisticated neural models. | {
"paragraphs": [
[
"One of the most fundamental topics in natural language processing is how best to derive high-level representations from constituent parts, as natural language meanings are a function of their constituent parts. How best to construct a sentence representation from distributed word embeddings is an example domain of this larger issue. Even though sequential neural models such as recurrent neural networks (RNN) BIBREF0 and their variants including Long Short-Term Memory (LSTM) BIBREF1 and Gated Recurrent Unit (GRU) BIBREF2 have become the de-facto standard for condensing sentence-level information from a sequence of words into a fixed vector, there have been many lines of research towards better sentence representation using other neural architectures, e.g. convolutional neural networks (CNN) BIBREF3 or self-attention based models BIBREF4 .",
"From a linguistic point of view, the underlying tree structure—as expressed by its constituency and dependency trees—of a sentence is an integral part of its meaning. Inspired by this fact, some recursive neural network (RvNN) models are designed to reflect the syntactic tree structure, achieving impressive results on several sentence-level tasks such as sentiment analysis BIBREF5 , BIBREF6 , machine translation BIBREF7 , natural language inference BIBREF8 , and discourse relation classification BIBREF9 .",
"However, some recent works have BIBREF10 , BIBREF11 proposed latent tree models, which learn to construct task-specific tree structures without explicit supervision, bringing into question the value of linguistically-motivated recursive neural models. Witnessing the surprising performance of the latent tree models on some sentence-level tasks, there arises a natural question: Are linguistic tree structures the optimal way of composing sentence representations for NLP tasks?",
"In this paper, we demonstrate that linguistic priors are in fact useful for devising effective neural models for sentence representations, showing that our novel architecture based on constituency trees and their tag information obtains superior performance on several sentence-level tasks, including sentiment analysis and natural language inference.",
"A chief novelty of our approach is that we introduce a small separate tag-level tree-LSTM to control the composition function of the existing word-level tree-LSTM, which is in charge of extracting helpful syntactic signals for meaningful semantic composition of constituents by considering both the structures and linguistic tags of constituency trees simultaneously. In addition, we demonstrate that applying a typical LSTM to preprocess the leaf nodes of a tree-LSTM greatly improves the performance of the tree models. Moreover, we propose a clustered tag set to replace the existing tags on the assumption that the original syntactic tags are too fined-grained to be useful in neural models.",
"In short, our contributions in this work are as follows:"
],
[
"Recursive neural networks (RvNN) are a kind of neural architecture which model sentences by exploiting syntactic structure. While earlier RvNN models proposed utilizing diverse composition functions, including feed-forward neural networks BIBREF12 , matrix-vector multiplication BIBREF5 , and tensor computation BIBREF6 , tree-LSTMs BIBREF13 remain the standard for several sentence-level tasks.",
"Even though classic RvNNs have demonstrated superior performance on a variety of tasks, their inflexibility, i.e. their inability to handle dynamic compositionality for different syntactic configurations, is a considerable weakness. For instance, it would be desirable if our model could distinguish e.g. adjective-noun composition from that of verb-noun or preposition-noun composition, as models failing to make such a distinction ignore real-world syntactic considerations such as `-arity' of function words (i.e. types), and the adjunct/argument distinction.",
"To enable dynamic compositionality in recursive neural networks, many previous works BIBREF14 , BIBREF15 , BIBREF16 , BIBREF9 , BIBREF17 , BIBREF18 , BIBREF19 have proposed various methods.",
"One main direction of research leverages tag information, which is produced as a by-product of parsing. In detail, BIBREF16 ( BIBREF16 ) suggested TG-RNN, a model employing different composition functions according to POS tags, and TE-RNN/TE-RNTN, models which leverage tag embeddings as additional inputs for the existing tree-structured models. Despite the novelty of utilizing tag information, the explosion of the number of parameters (in case of the TG-RNN) and the limited performance of the original models (in case of the TE-RNN/TE-RNTN) have prevented these models from being widely adopted. Meanwhile, BIBREF9 ( BIBREF9 ) and BIBREF18 ( BIBREF18 ) proposed models based on a tree-LSTM which also uses the tag vectors to control the gate functions of the tree-LSTM. In spite of their impressive results, there is a limitation that the trained tag embeddings are too simple to reflect the rich information which tags provide in different syntactic structures. To alleviate this problem, we introduce structure-aware tag representations in the next section.",
"Another way of building dynamic compositionality into RvNNs is to take advantage of a meta-network (or hyper-network). Inspired by recent works on dynamic parameter prediction, DC-TreeLSTMs BIBREF17 dynamically create the parameters for compositional functions in a tree-LSTM. Specifically, the model has two separate tree-LSTM networks whose architectures are similar, but the smaller of the two is utilized to calculate the weights of the bigger one. A possible problem for this model is that it may be easy to be trained such that the role of each tree-LSTM is ambiguous, as they share the same input, i.e. word information. Therefore, we design two disentangled tree-LSTMs in our model so that one focuses on extracting useful features from only syntactic information while the other composes semantic units with the aid of the features. Furthermore, our model reduces the complexity of computation by utilizing typical tree-LSTM frameworks instead of computing the weights for each example.",
"Finally, some recent works BIBREF10 , BIBREF11 have proposed latent tree-structured models that learn how to formulate tree structures from only sequences of tokens, without the aid of syntactic trees or linguistic information. The latent tree models have the advantage of being able to find the optimized task-specific order of composition rather than a sequential or syntactic one. In experiments, we compare our model with not only syntactic tree-based models but also latent tree models, demonstrating that modeling with explicit linguistic knowledge can be an attractive option."
],
[
"In this section, we introduce a novel RvNN architecture, called SATA Tree-LSTM (Structure-Aware Tag Augmented Tree-LSTM). This model is similar to typical Tree-LSTMs, but provides dynamic compositionality by augmenting a separate tag-level tree-LSTM which produces structure-aware tag representations for each node in a tree. In other words, our model has two independent tree-structured modules based on the same constituency tree, one of which (word-level tree-LSTM) is responsible for constructing sentence representations given a sequence of words as usual, while the other (tag-level tree-LSTM) provides supplementary syntactic information to the former.",
"In section 3.1, we first review tree-LSTM architectures. Then in section 3.2, we introduce a tag-level tree-LSTM and structure-aware tag representations. In section 3.3, we discuss an additional technique to boost the performance of tree-structured models, and in section 3.4, we describe the entire architecture of our model in detail."
],
[
"The LSTM BIBREF1 architecture was first introduced as an extension of the RNN architecture to mitigate the vanishing and exploding gradient problems. In addition, several works have discovered that applying the LSTM cell into tree structures can be an effective means of modeling sentence representations.",
"To be formal, the composition function of the cell in a tree-LSTM can be formulated as follows: ",
"$$ \n\\begin{bmatrix}\n\\mathbf {i} \\\\\n\\mathbf {f}_l \\\\\n\\mathbf {f}_r \\\\\n\\mathbf {o} \\\\\n\\mathbf {g}\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n\\sigma \\\\\n\\sigma \\\\\n\\sigma \\\\\n\\sigma \\\\\n\\tanh \\end{bmatrix}\n\\Bigg ( \\mathbf {W}\n\\begin{bmatrix}\n\\mathbf {h}_l \\\\\n\\mathbf {h}_r \\\\\n\\end{bmatrix}\n+ \\mathbf {b} \\Bigg )$$ (Eq. 8) ",
"$$ \n\\mathbf {c} = \\mathbf {f}_l \\odot \\mathbf {c}_l + \\mathbf {f}_r \\odot \\mathbf {c}_r + \\mathbf {i} \\odot \\mathbf {g}\\\\$$ (Eq. 9) ",
"where $\\mathbf {h}, \\mathbf {c} \\in \\mathbb {R}^{d}$ indicate the hidden state and cell state of the LSTM cell, and $\\mathbf {h}_l, \\mathbf {h}_r, \\mathbf {c}_l, \\mathbf {c}_r \\in \\mathbb {R}^{d}$ the hidden states and cell states of a left and right child. $\\mathbf {g} \\in \\mathbb {R}^{d}$ is the newly composed input for the cell and $\\mathbf {i}, \\mathbf {f}_{l}, \\mathbf {f}_{r}, \\mathbf {o} \\in \\mathbb {R}^{d}$ represent an input gate, two forget gates (left, right), and an output gate respectively. $\\mathbf {W} \\in \\mathbb {R}^{5d\\times 2d}$ and $\\mathbf {b} \\in \\mathbb {R}^{5d}$ are trainable parameters. $\\sigma $ corresponds to the sigmoid function, $\\tanh $ to the hyperbolic tangent, and $\\odot $ to element-wise multiplication.",
"Note the equations assume that there are only two children for each node, i.e. binary or binarized trees, following the standard in the literature. While RvNN models can be constructed on any tree structure, in this work we only consider constituency trees as inputs.",
"In spite of the obvious upside that recursive models have in being so flexible, they are known for being difficult to fully utilize with batch computations as compared to other neural architectures because of the diversity of structure found across sentences. To alleviate this problem, BIBREF8 ( BIBREF8 ) proposed the SPINN model, which brings a shift-reduce algorithm to the tree-LSTM. As SPINN simplifies the process of constructing a tree into only two operations, i.e. shift and reduce, it can support more effective parallel computations while enjoying the advantages of tree structures. For efficiency, our model also starts from our own SPINN re-implementation, whose function is exactly the same as that of the tree-LSTM."
],
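To make the composition step above concrete, here is a minimal NumPy sketch of the binary tree-LSTM cell in Eq. 8-9; the output step h = o ⊙ tanh(c) follows the standard tree-LSTM convention (the Eq. 10 referenced later in the text), and all function and variable names are illustrative rather than taken from the authors' code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def binary_tree_lstm_cell(h_l, c_l, h_r, c_r, W, b):
    """One composition step of a binary tree-LSTM (Eq. 8-9).
    h_l, h_r, c_l, c_r: child hidden/cell states of shape (d,)
    W: (5d, 2d) weight matrix, b: (5d,) bias, matching the shapes stated above."""
    d = h_l.shape[0]
    z = W @ np.concatenate([h_l, h_r]) + b                  # one affine map, then split into gates
    i, f_l, f_r, o = (sigmoid(z[k * d:(k + 1) * d]) for k in range(4))
    g = np.tanh(z[4 * d:])                                  # candidate input
    c = f_l * c_l + f_r * c_r + i * g                       # Eq. 9: gated sum of children
    h = o * np.tanh(c)                                      # standard output step (the Eq. 10 referenced later)
    return h, c

# Toy usage with random parameters
d = 4
rng = np.random.default_rng(0)
W, b = 0.1 * rng.standard_normal((5 * d, 2 * d)), np.zeros(5 * d)
h_left, c_left = rng.standard_normal(d), rng.standard_normal(d)
h_right, c_right = rng.standard_normal(d), rng.standard_normal(d)
h, c = binary_tree_lstm_cell(h_left, c_left, h_right, c_right, W, b)
print(h.shape, c.shape)  # (4,) (4,)
```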
[
"In most previous works using linguistic tag information BIBREF16 , BIBREF9 , BIBREF18 , tags are usually represented as simple low-dimensional dense vectors, similar to word embeddings. This approach seems reasonable in the case of POS tags that are attached to the corresponding words, but phrase-level constituent tags (e.g. NP, VP, ADJP) vary greatly in size and shape, making them less amenable to uniform treatment. For instance, even the same phrase tags within different syntactic contexts can vary greatly in size and internal structure, as the case of NP tags in Figure 1 shows. Here, the NP consisting of DT[the]-NN[stories] has a different internal structure than the NP consisting of NP[the film 's]-NNS[shortcomings].",
"One way of deriving structure-aware tag representations from the original tag embeddings is to introduce a separate tag-level tree-LSTM which accepts the typical tag embeddings at each node of a tree and outputs the computed structure-aware tag representations for the nodes. Note that the module concentrates on extracting useful syntactic features by considering only the tags and structures of the trees, excluding word information.",
"Formally, we denote a tag embedding for the tag attached to each node in a tree as $\\textbf {e} \\in \\mathbb {R}^{d_\\text{T}}$ . Then, the function of each cell in the tag tree-LSTM is defined in the following way. Leaf nodes are defined by the following: ",
"$$ \n\\begin{bmatrix}\n\\hat{\\mathbf {c}} \\\\\n\\hat{\\mathbf {h}} \\\\\n\\end{bmatrix}\n= \\tanh {\\left(\\mathbf {U}_\\text{T} \\mathbf {e} + \\mathbf {a}_\\text{T}\\right)}$$ (Eq. 13) ",
"while non-leaf nodes are defined by the following: ",
"$$ \n\\begin{bmatrix}\n\\hat{\\mathbf {i}} \\\\\n\\hat{\\mathbf {f}}_l \\\\\n\\hat{\\mathbf {f}}_r \\\\\n\\hat{\\mathbf {o}} \\\\\n\\hat{\\mathbf {g}}\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n\\sigma \\\\\n\\sigma \\\\\n\\sigma \\\\\n\\sigma \\\\\n\\tanh \\end{bmatrix}\n\\Bigg ( \\mathbf {W_\\text{T}}\n\\begin{bmatrix}\n\\hat{\\mathbf {h}}_l \\\\\n\\hat{\\mathbf {h}}_r \\\\\n\\mathbf {e} \\\\\n\\end{bmatrix}\n+ \\mathbf {b}_\\text{T} \\Bigg )$$ (Eq. 14) ",
"$$ \n\\hat{\\mathbf {c}} = \\hat{\\mathbf {f}}_l \\odot \\hat{\\mathbf {c}}_l + \\hat{\\mathbf {f}}_r \\odot \\hat{\\mathbf {c}}_r + \\hat{\\mathbf {i}} \\odot \\hat{\\mathbf {g}}\\\\$$ (Eq. 15) ",
"where $\\hat{\\mathbf {h}}, \\hat{\\mathbf {c}} \\in \\mathbb {R}^{d_\\text{T}}$ represent the hidden state and cell state of each node in the tag tree-LSTM. We regard the hidden state ( $\\hat{\\mathbf {h}}$ ) as a structure-aware tag representation for the node. $ \\mathbf {U}_\\text{T} \\in \\mathbb {R}^{2d_\\text{T} \\times d_\\text{T}}, \\textbf {a}_\\text{T} \\in \\mathbb {R}^{2d_\\text{T}}, \\mathbf {W}_\\text{T} \\in \\mathbb {R}^{5d_\\text{T} \\times 3d_\\text{T}}$ , and $\\mathbf {b}_\\text{T} \\in \\mathbb {R}^{5d_\\text{T}}$ are trainable parameters. The rest of the notation follows equations 8 , 9 , and 10 . In case of leaf nodes, the states are computed by a simple non-linear transformation. Meanwhile, the composition function in a non-leaf node absorbs the tag embedding ( $\\mathbf {e}$ ) as an additional input as well as the hidden states of the two children nodes. The benefit of revising tag representations according to the internal structure is that the derived embedding is a function of the corresponding makeup of the node, rather than a monolithic, categorical tag.",
"With regard to the tags themselves, we conjecture that the taxonomy of the tags currently in use in many NLP systems is too complex to be utilized effectively in deep neural models, considering the specificity of many tag sets and the limited amount of data with which to train. Thus, we cluster POS (word-level) tags into 12 groups following the universal POS tagset BIBREF20 and phrase-level tags into 11 groups according to criteria analogous to the case of words, resulting in 23 tag categories in total. In this work, we use the revised coarse-grained tags instead of the original ones. For more details, we refer readers to the supplemental materials."
],
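The tag-level tree-LSTM of Eq. 13-15 can be sketched in the same style: the leaf case is a single tanh transform of the tag embedding, while the internal case additionally feeds the node's own tag embedding into the gates. This is an illustrative sketch with hypothetical names; the output step h = o ⊙ tanh(c) is assumed, as in a standard tree-LSTM.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tag_leaf_node(e, U_T, a_T):
    """Eq. 13: leaf states from a tag embedding e of shape (d_T,).
    U_T: (2*d_T, d_T), a_T: (2*d_T,); returns (c_hat, h_hat)."""
    d_T = e.shape[0]
    z = np.tanh(U_T @ e + a_T)
    return z[:d_T], z[d_T:]

def tag_internal_node(h_l, c_l, h_r, c_r, e, W_T, b_T):
    """Eq. 14-15: children states plus the node's own tag embedding e.
    W_T: (5*d_T, 3*d_T), b_T: (5*d_T,); h_hat is the structure-aware tag vector."""
    d_T = e.shape[0]
    z = W_T @ np.concatenate([h_l, h_r, e]) + b_T
    i, f_l, f_r, o = (sigmoid(z[k * d_T:(k + 1) * d_T]) for k in range(4))
    g = np.tanh(z[4 * d_T:])
    c = f_l * c_l + f_r * c_r + i * g
    h = o * np.tanh(c)          # assumed output step, as in the word-level cell
    return c, h
```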
[
"An inherent shortcoming of RvNNs relative to sequential models is that each intermediate representation in a tree is unaware of its external context until all the information is gathered together at the root node. In other words, each composition process is prone to be locally optimized rather than globally optimized.",
"To mitigate this problem, we propose using a leaf-LSTM following the convention of some previous works BIBREF21 , BIBREF7 , BIBREF11 , which is a typical LSTM that accepts a sequence of words in order. Instead of leveraging word embeddings directly, we can use each hidden state and cell state of the leaf-LSTM as input tokens for leaf nodes in a tree-LSTM, anticipating the proper contextualization of the input sequence.",
"Formally, we denote a sequence of words in an input sentence as $w_{1:n}$ ( $n$ : the length of the sentence), and the corresponding word embeddings as $\\mathbf {x}_{1:n}$ . Then, the operation of the leaf-LSTM at time $t$ can be formulated as, ",
"$$ \n\\begin{bmatrix}\n\\tilde{\\mathbf {i}} \\\\\n\\tilde{\\mathbf {f}} \\\\\n\\tilde{\\mathbf {o}} \\\\\n\\tilde{\\mathbf {g}}\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n\\sigma \\\\\n\\sigma \\\\\n\\sigma \\\\\n\\tanh \\end{bmatrix}\n\\Bigg ( \\mathbf {W}_\\text{L}\n\\begin{bmatrix}\n\\tilde{\\mathbf {h}}_{t-1} \\\\\n\\mathbf {x}_t \\\\\n\\end{bmatrix}\n+ \\mathbf {b}_\\text{L} \\Bigg )$$ (Eq. 18) ",
"$$ \n\\tilde{\\mathbf {c}}_t = \\tilde{\\mathbf {f}} \\odot \\tilde{\\mathbf {c}}_{t-1} + \\tilde{\\mathbf {i}} \\odot \\tilde{\\mathbf {g}}\\\\$$ (Eq. 19) ",
"where $\\mathbf {x}_t \\in \\mathbb {R}^{d_w}$ indicates an input word vector and $\\tilde{\\mathbf {h}}_t$ , $\\tilde{\\mathbf {c}}_t \\in \\mathbb {R}^{d_h}$ represent the hidden and cell state of the LSTM at time $t$ ( $\\tilde{\\mathbf {h}}_{t-1}$ corresponds to the hidden state at time $t$ -1). $\\mathbf {W}_\\text{L}$ and $\\mathbf {b}_\\text{L} $ are learnable parameters. The remaining notation follows that of the tree-LSTM above.",
"In experiments, we demonstrate that introducing a leaf-LSTM fares better at processing the input words of a tree-LSTM compared to using a feed-forward neural network. We also explore the possibility of its bidirectional setting in ablation study."
],
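As a rough PyTorch sketch of the leaf-LSTM idea (Eq. 18-19): the per-token hidden and cell states of a unidirectional LSTM replace raw word embeddings as inputs to the tree's leaves. Sizes and names here are illustrative, and an nn.LSTMCell is unrolled because both h_t and c_t are needed at every step.

```python
import torch
import torch.nn as nn

d_w, d_h, n = 300, 150, 7                 # word dim, hidden dim, sentence length (illustrative)
cell = nn.LSTMCell(input_size=d_w, hidden_size=d_h)

x = torch.randn(1, n, d_w)                # word embeddings x_1..x_n for one sentence
h_t = torch.zeros(1, d_h)
c_t = torch.zeros(1, d_h)
leaf_states = []
for t in range(n):
    h_t, c_t = cell(x[:, t, :], (h_t, c_t))
    leaf_states.append((h_t, c_t))        # (h_t, c_t) initializes the t-th leaf of the word tree-LSTM
```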
[
"In this section, we define SATA Tree-LSTM (Structure-Aware Tag Augmented Tree-LSTM, see Figure 2 ) which joins a tag-level tree-LSTM (section 3.2), a leaf-LSTM (section 3.3), and the original word tree-LSTM together.",
"As above we denote a sequence of words in an input sentence as $w_{1:n}$ and the corresponding word embeddings as $\\mathbf {x}_{1:n}$ . In addition, a tag embedding for the tag attached to each node in a tree is denoted by $\\textbf {e} \\in \\mathbb {R}^{d_\\text{T}}$ . Then, we derive the final sentence representation for the input sentence with our model in two steps.",
"First, we compute structure-aware tag representations ( $\\hat{\\mathbf {h}}$ ) for each node of a tree using the tag tree-LSTM (the right side of Figure 2 ) as follows: ",
"$$ \n\\begin{bmatrix}\n\\hat{\\mathbf {c}} \\\\\n\\hat{\\mathbf {h}} \\\\\n\\end{bmatrix}\n=\n{\\left\\lbrace \\begin{array}{ll}\n\\text{Tag-Tree-LSTM}(\\mathbf {e}) & \\text{if a leaf node} \\\\\n\\text{Tag-Tree-LSTM}(\\hat{\\mathbf {h}}_l, \\hat{\\mathbf {h}}_r, \\mathbf {e}) & \\text{otherwise}\n\\end{array}\\right.}$$ (Eq. 23) ",
"where Tag-Tree-LSTM indicates the module we described in section 3.2.",
"Second, we combine semantic units recursively on the word tree-LSTM in a bottom-up fashion. For leaf nodes, we leverage the Leaf-LSTM (the bottom-left of Figure 2 , explained in section 3.3) to compute $\\tilde{\\mathbf {c}}_{t}$ and $\\tilde{\\mathbf {h}}_{t}$ in sequential order, with the corresponding input $\\mathbf {x}_t$ . ",
"$$ \n\\begin{bmatrix}\n\\tilde{\\mathbf {c}}_{t} \\\\\n\\tilde{\\mathbf {h}}_{t} \\\\\n\\end{bmatrix}\n= \\text{Leaf-LSTM}(\\tilde{\\textbf {h}}_{t-1}, \\textbf {x}_t)$$ (Eq. 24) ",
"Then, the $\\tilde{\\mathbf {c}}_{t}$ and $\\tilde{\\mathbf {h}}_{t}$ can be utilized as input tokens to the word tree-LSTM, with the left (right) child of the target node corresponding to the $t$ th word in the input sentence. ",
"$$ \n\\begin{bmatrix}\n\\check{\\textbf {c}}_{\\lbrace l, r\\rbrace } \\\\\n\\check{\\textbf {h}}_{\\lbrace l, r\\rbrace }\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n\\tilde{\\textbf {c}}_{t} \\\\\n\\tilde{\\textbf {h}}_{t}\n\\end{bmatrix}$$ (Eq. 25) ",
"In the non-leaf node case, we calculate phrase representations for each node in the word tree-LSTM (the upper-left of Figure 2 ) recursively as follows: ",
"$$ \n\\check{\\mathbf {g}} = \\tanh {\\left( \\mathbf {U}_\\text{w}\n\\begin{bmatrix}\n\\check{\\mathbf {h}}_l \\\\\n\\check{\\mathbf {h}}_r \\\\\n\\end{bmatrix}\n+ \\mathbf {a}_\\text{w} \\right)}$$ (Eq. 26) ",
"$$ \n\\begin{bmatrix}\n\\check{\\mathbf {i}} \\\\\n\\check{\\mathbf {f}}_l \\\\\n\\check{\\mathbf {f}}_r \\\\\n\\check{\\mathbf {o}}\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n\\sigma \\\\\n\\sigma \\\\\n\\sigma \\\\\n\\sigma \\end{bmatrix}\n\\Bigg ( \\mathbf {W_\\text{w}}\n\\begin{bmatrix}\n\\check{\\mathbf {h}}_l \\\\\n\\check{\\mathbf {h}}_r \\\\\n\\hat{\\mathbf {h}} \\\\\n\\end{bmatrix}\n+ \\mathbf {b}_\\text{w} \\Bigg )$$ (Eq. 27) ",
"where $\\check{\\mathbf {h}}$ , $\\check{\\mathbf {c}} \\in \\mathbb {R}^{d_h}$ represent the hidden and cell state of each node in the word tree-LSTM. $\\mathbf {U}_\\text{w} \\in \\mathbb {R}^{d_h \\times 2d_h}$ , $\\mathbf {W}_\\text{w} \\in \\mathbb {R}^{4d_h \\times \\left(2d_h+d_\\text{T}\\right)}$ , $\\mathbf {a}_\\text{w} \\in \\mathbb {R}^{d_h}$ , $\\mathbf {b}_\\text{w} \\in \\mathbb {R}^{4d_h}$ are learned parameters. The remaining notation follows those of the previous sections. Note that the structure-aware tag representations ( $\\hat{\\mathbf {h}}$ ) are only utilized to control the gate functions of the word tree-LSTM in the form of additional inputs, and are not involved in the semantic composition ( $\\check{\\mathbf {g}}$ ) directly.",
"Finally, the hidden state of the root node ( $\\check{\\mathbf {h}}_\\text{root}$ ) in the word-level tree-LSTM becomes the final sentence representation of the input sentence."
],
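Putting Eq. 26-27 together, here is a sketch of the word tree-LSTM's non-leaf composition, in which the structure-aware tag vector only modulates the gates and does not enter the semantic candidate g. The cell and output updates are assumed to follow the same pattern as Eq. 9/15; names are illustrative, not the authors' code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sata_word_node(h_l, c_l, h_r, c_r, h_tag, U_w, a_w, W_w, b_w):
    """Non-leaf step of the word tree-LSTM (Eq. 26-27).
    h_l, h_r, c_l, c_r: children states, shape (d_h,); h_tag: structure-aware
    tag vector, shape (d_T,). U_w: (d_h, 2*d_h), W_w: (4*d_h, 2*d_h + d_T)."""
    d_h = h_l.shape[0]
    g = np.tanh(U_w @ np.concatenate([h_l, h_r]) + a_w)      # semantic composition: word states only
    z = W_w @ np.concatenate([h_l, h_r, h_tag]) + b_w        # gates additionally see the tag signal
    i, f_l, f_r, o = (sigmoid(z[k * d_h:(k + 1) * d_h]) for k in range(4))
    c = f_l * c_l + f_r * c_r + i * g                        # assumed cell update, as in Eq. 9/15
    h = o * np.tanh(c)                                       # assumed output step
    return h, c
```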
[
"One of the most basic approaches to evaluate a sentence encoder is to measure the classification performance with the sentence representations made by the encoder. Thus, we conduct experiments on the following five datasets. (Summary statistics for the datasets are reported in the supplemental materials.)",
"MR: A group of movie reviews with binary (positive / negative) classes. BIBREF22 ",
"SST-2: Stanford Sentiment Treebank BIBREF6 . Similar to MR, but each review is provided in the form of a binary parse tree whose nodes are annotated with numeric sentiment values. For SST-2, we only consider binary (positive / negative) classes.",
"SST-5: Identical to SST-2, but the reviews are grouped into fine-grained (very negative, negative, neutral, positive, very positive) classes.",
"SUBJ: Sentences grouped as being either subjective or objective (binary classes). BIBREF23 ",
"TREC: A dataset which groups questions into six different question types (classes). BIBREF24 ",
"As a preprocessing step, we construct parse trees for the sentences in the datasets using the Stanford PCFG parser BIBREF25 . Because syntactic tags are by-products of constituency parsing, we do not need further preprocessing.",
"To classify the sentence given our sentence representation ( $\\check{\\mathbf {h}}_\\text{root}$ ), we use one fully-connected layer with a ReLU activation, followed by a softmax classifier. The final predicted probability distribution of the class $y$ given the sentence $w_{1:n}$ is defined as follows, ",
"$$\\mathbf {s} = \\text{ReLU}(\\mathbf {W}_\\text{s} \\check{\\mathbf {h}}_\\text{root}+ \\mathbf {b}_\\text{s})$$ (Eq. 37) ",
"$$p(y|w_{1:n}) = \\text{softmax}(\\mathbf {W}_\\text{c}\\mathbf {s} + \\mathbf {b}_\\text{c})$$ (Eq. 38) ",
"where $\\textbf {s} \\in \\mathbb {R}^{d_\\text{s}}$ is the computed task-specific sentence representation for the classifier, and $\\textbf {W}_\\text{s} \\in \\mathbb {R}^{d_\\text{s} \\times d_h}$ , $\\textbf {W}_\\text{c} \\in \\mathbb {R}^{d_\\text{c} \\times d_s}$ , $\\textbf {b}_\\text{s} \\in \\mathbb {R}^{d_s}$ , $\\textbf {b}_\\text{c} \\in \\mathbb {R}^{d_c}$ are trainable parameters. As an objective function, we use the cross entropy of the predicted and true class distributions.",
"The results of the experiments on the five datasets are shown in table 1 . In this table, we report the test accuracy of our model and various other models on each dataset in terms of percentage. To consider the effects of random initialization, we report the best numbers obtained from each several runs with hyper-parameters fixed.",
"Compared with the previous syntactic tree-based models as well as other neural models, our SATA Tree-LSTM shows superior or competitive performance on all tasks. Specifically, our model achieves new state-of-the-art results within the tree-structured model class on 4 out of 5 sentence classification tasks—SST-2, SST-5, MR, and TREC. The model shows its strength, in particular, when the datasets provide phrase-level supervision to facilitate tree structure learning (i.e. SST-2, SST-5). Moreover, the numbers we report for SST-5 and TREC are competitive to the existing state-of-the-art results including ones from structurally pre-trained models such as ELMo BIBREF26 , proving our model's superiority. Note that the SATA Tree-LSTM also outperforms the recent latent tree-based model, indicating that modeling a neural model with explicit linguistic knowledge can be an attractive option.",
"On the other hand, a remaining concern is that our SATA Tree-LSTM is not robust to random seeds when the size of a dataset is relatively small, as tag embeddings are randomly initialized rather than relying on pre-trained ones in contrast with the case of words. From this observation, we could find out there needs a direction of research towards pre-trained tag embeddings.",
"To estimate the performance of our model beyond the tasks requiring only one sentence at a time, we conduct an experiment on the Stanford Natural Language Inference BIBREF34 dataset, each example of which consists of two sentences, the premise and the hypothesis. Our objective given the data is to predict the correct relationship between the two sentences among three options— contradiction, neutral, or entailment.",
"We use the siamese architecture to encode both the premise ( $p_{1:m}$ ) and hypothesis ( $h_{1:n}$ ) following the standard of sentence-encoding models in the literature. (Specifically, $p_{1:m}$ is encoded as $\\check{\\mathbf {h}}_\\text{root}^p \\in \\mathbb {R}^{d_h}$ and $h_{1:n}$ is encoded as $\\check{\\mathbf {h}}_\\text{root}^h \\in \\mathbb {R}^{d_h}$ with the same encoder.) Then, we leverage some heuristics BIBREF35 , followed by one fully-connected layer with a ReLU activation and a softmax classifier. Specifically, ",
"$$\\mathbf {z} = \\left[ \\check{\\mathbf {h}}_\\text{root}^p; \\check{\\mathbf {h}}_\\text{root}^h; | \\check{\\mathbf {h}}_\\text{root}^p - \\check{\\mathbf {h}}_\\text{root}^h |; \\check{\\mathbf {h}}_\\text{root}^p \\odot \\check{\\mathbf {h}}_\\text{root}^h \\right]$$ (Eq. 41) ",
"$$\\mathbf {s} = \\text{ReLU}(\\mathbf {W}_\\text{s} \\mathbf {z} + \\mathbf {b}_\\text{s})$$ (Eq. 42) ",
"where $\\textbf {z} \\in \\mathbb {R}^{4d_h}$ , $\\textbf {s} \\in \\mathbb {R}^{d_s}$ are intermediate features for the classifier and $\\textbf {W}_\\text{s} \\in \\mathbb {R}^{d_\\text{s} \\times 4d_h}$ , $\\textbf {W}_\\text{c} \\in \\mathbb {R}^{d_\\text{c} \\times d_s}$ , $\\textbf {b}_\\text{s} \\in \\mathbb {R}^{d_s}$ , $\\textbf {b}_\\text{c} \\in \\mathbb {R}^{d_c}$ are again trainable parameters.",
"Our experimental results on the SNLI dataset are shown in table 2 . In this table, we report the test accuracy and number of trainable parameters for each model. Our SATA-LSTM again demonstrates its decent performance compared against the neural models built on both syntactic trees and latent trees, as well as the non-tree models. (Latent Syntax Tree-LSTM: BIBREF10 ( BIBREF10 ), Tree-based CNN: BIBREF35 ( BIBREF35 ), Gumbel Tree-LSTM: BIBREF11 ( BIBREF11 ), NSE: BIBREF36 ( BIBREF36 ), Reinforced Self-Attention Network: BIBREF4 ( BIBREF4 ), Residual stacked encoders: BIBREF37 ( BIBREF37 ), BiLSTM with generalized pooling: BIBREF38 ( BIBREF38 ).) Note that the number of learned parameters in our model is also comparable to other sophisticated models, showing the efficiency of our model.",
"Even though our model has proven its mettle, the effect of tag information seems relatively weak in the case of SNLI, which contains a large amount of data compared to the others. One possible explanation is that neural models may learn some syntactic rules from large amounts of text when the text size is large enough, reducing the necessity of external linguistic knowledge. We leave the exploration of the effectiveness of tags relative to data size for future work.",
"Here we go over the settings common across our models during experimentation. For more task-specific details, refer to the supplemental materials.",
"For our input embeddings, we used 300 dimensional 840B GloVe BIBREF39 as pre-trained word embeddings, and tag representations were randomly sampled from the uniform distribution [-0.005, 0.005]. Tag vectors are revised during training while the fine-tuning of the word embedding depends on the task. Our models were trained using the Adam BIBREF40 or Adadelta BIBREF41 optimizer, depending on task. For regularization, weight decay is added to the loss function except for SNLI following BIBREF42 ( BIBREF42 ) and Dropout BIBREF43 is also applied for the word embeddings and task-specific classifiers. Moreover, batch normalization BIBREF44 is adopted for the classifiers. As a default, all the weights in the model are initialized following BIBREF45 ( BIBREF45 ) and the biases are set to 0. The total norm of the gradients of the parameters is clipped not to be over 5 during training.",
"Our best models for each dataset were chosen by validation accuracy in cases where a validation set was provided as a part of the dataset. Otherwise, we perform a grid search on probable hyper-parameter settings, or run 10-fold cross-validation in cases where even a test set does not exist."
],
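The classification heads in Eq. 37-38 and the matching heuristics of Eq. 41-42 are straightforward to sketch in PyTorch; the dimensions below are placeholders rather than the values used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d_h, d_s, n_cls = 300, 1024, 3                 # illustrative sizes

# Single-sentence head (Eq. 37-38): ReLU projection then softmax classifier
fc_single = nn.Linear(d_h, d_s)
clf = nn.Linear(d_s, n_cls)
h_root = torch.randn(1, d_h)                   # sentence representation from the tree-LSTM root
p_single = F.softmax(clf(F.relu(fc_single(h_root))), dim=-1)

# SNLI head (Eq. 41-42): concatenation, absolute difference, element-wise product
fc_pair = nn.Linear(4 * d_h, d_s)
h_p, h_q = torch.randn(1, d_h), torch.randn(1, d_h)   # encoded premise / hypothesis
z = torch.cat([h_p, h_q, (h_p - h_q).abs(), h_p * h_q], dim=-1)
p_pair = F.softmax(clf(F.relu(fc_pair(z))), dim=-1)   # {entailment, neutral, contradiction}
```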
[
"In this section, we design an ablation study on the core modules of our model to explore their effectiveness. The dataset used in this experiment is SST-2. To conduct the experiment, we only replace the target module with other candidates while maintaining the other settings. To be specific, we focus on two modules, the leaf-LSTM and structure-aware tag embeddings (tag-level tree-LSTM). In the first case, the leaf-LSTM is replaced with a fully-connected layer with a $\\tanh $ activation or Bi-LSTM. In the second case, we replace the structure-aware tag embeddings with naive tag embeddings or do not employ them at all.",
"The experimental results are depicted in Figure 3 . As the chart shows, our model outperforms all the other options we have considered. In detail, the left part of the chart shows that the leaf-LSTM is the most effective option compared to its competitors. Note that the sequential leaf-LSTM is somewhat superior or competitive than the bidirectional leaf-LSTM when both have a comparable number of parameters. We conjecture this may because a backward LSTM does not add additional useful knowledge when the structure of a sentence is already known. In conclusion, we use the uni-directional LSTM as a leaf module because of its simplicity and remarkable performance.",
"Meanwhile, the right part of the figure demonstrates that our newly introduced structure-aware embeddings have a real impact on improving the model performance. Interestingly, employing the naive tag embeddings made no difference in terms of the test accuracy, even though the absolute validation accuracy increased (not reported in the figure). This result supports our assumption that tag information should be considered in the structure."
],
[
"In previous sections, we have numerically demonstrated that our model is effective in encouraging useful composition of semantic units. Here, we directly investigate the computed representations for each node of a tree, showing that the remarkable performance of our model is mainly due to the gradual and recursive composition of the intermediate representations on the syntactic structure.",
"To observe the phrase-level embeddings at a glance, we draw a scatter plot in which a point represents the corresponding intermediate representation. We utilize PCA (Principal Component Analysis) to project the representations into a two-dimensional vector space. As a target parse tree, we reuse the one seen in Figure 1 . The result is shown in Figure 4 .",
"From this figure, we confirm that the intermediate representations have a hierarchy in the semantic space, which is very similar to that of the parse tree. In other words, as many tree-structured models pursue, we can see the tendency of constructing the representations from the low-level (the bottom of the figure) to the high-level (the top-left and top-right of the figure), integrating the meaning of the constituents recursively. An interesting thing to note is that the final sentence representation is near that of the phrase `, the stories are quietly moving.' rather than that of `Despite the film's shortcomings', catching the main meaning of the sentence."
],
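The PCA projection used for this kind of figure can be reproduced in a few lines with scikit-learn; the array of node representations below is a random placeholder standing in for the actual intermediate hidden states.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

node_vecs = np.random.randn(12, 300)            # stand-in for the intermediate node representations
labels = [f"node_{i}" for i in range(len(node_vecs))]

xy = PCA(n_components=2).fit_transform(node_vecs)
plt.scatter(xy[:, 0], xy[:, 1])
for (x, y), label in zip(xy, labels):
    plt.annotate(label, (x, y))                  # in the paper, each point corresponds to a phrase of the tree
plt.show()
```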
[
"We have proposed a novel RvNN architecture to fully utilize linguistic priors. A newly introduced tag-level tree-LSTM demonstrates that it can effectively control the composition function of the corresponding word-level tree-LSTM. In addition, the proper contextualization of the input word vectors results in significant performance improvements on several sentence-level tasks. For future work, we plan to explore a new way of exploiting dependency trees effectively, similar to the case of constituency trees."
],
[
"We thank anonymous reviewers for their constructive and fruitful comments. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF2016M3C4A7952587)."
]
],
"section_name": [
"Introduction",
"Related Work",
"Model",
"Tree-LSTM",
"Structure-aware Tag Representation",
"Leaf-LSTM",
"SATA Tree-LSTM",
"Quantitative Analysis",
"Ablation Study",
"Qualitative Analysis",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"3e06bc9ee9a23dec44ccdcf9d7e30caf8b3d6472",
"6accd64ef5476c637eaf6bc92d82e47c70306c64"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: The comparison of various models on different sentence classification tasks. We report the test accuracy of each model in percentage. Our SATA Tree-LSTM shows superior or competitive performance on all tasks, compared to previous treestructured models as well as other sophisticated models. ?: Latent tree-structured models. †: Models which are pre-trained with large external corpora.",
"Our experimental results on the SNLI dataset are shown in table 2 . In this table, we report the test accuracy and number of trainable parameters for each model. Our SATA-LSTM again demonstrates its decent performance compared against the neural models built on both syntactic trees and latent trees, as well as the non-tree models. (Latent Syntax Tree-LSTM: BIBREF10 ( BIBREF10 ), Tree-based CNN: BIBREF35 ( BIBREF35 ), Gumbel Tree-LSTM: BIBREF11 ( BIBREF11 ), NSE: BIBREF36 ( BIBREF36 ), Reinforced Self-Attention Network: BIBREF4 ( BIBREF4 ), Residual stacked encoders: BIBREF37 ( BIBREF37 ), BiLSTM with generalized pooling: BIBREF38 ( BIBREF38 ).) Note that the number of learned parameters in our model is also comparable to other sophisticated models, showing the efficiency of our model."
],
"extractive_spans": [],
"free_form_answer": "Various tree structured neural networks including variants of Tree-LSTM, Tree-based CNN, RNTN, and non-tree models including variants of LSTMs, CNNs, residual, and self-attention based networks",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: The comparison of various models on different sentence classification tasks. We report the test accuracy of each model in percentage. Our SATA Tree-LSTM shows superior or competitive performance on all tasks, compared to previous treestructured models as well as other sophisticated models. ?: Latent tree-structured models. †: Models which are pre-trained with large external corpora.",
"Our experimental results on the SNLI dataset are shown in table 2 . In this table, we report the test accuracy and number of trainable parameters for each model. Our SATA-LSTM again demonstrates its decent performance compared against the neural models built on both syntactic trees and latent trees, as well as the non-tree models. (Latent Syntax Tree-LSTM: BIBREF10 ( BIBREF10 ), Tree-based CNN: BIBREF35 ( BIBREF35 ), Gumbel Tree-LSTM: BIBREF11 ( BIBREF11 ), NSE: BIBREF36 ( BIBREF36 ), Reinforced Self-Attention Network: BIBREF4 ( BIBREF4 ), Residual stacked encoders: BIBREF37 ( BIBREF37 ), BiLSTM with generalized pooling: BIBREF38 ( BIBREF38 ).)"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 1: The comparison of various models on different sentence classification tasks. We report the test accuracy of each model in percentage. Our SATA Tree-LSTM shows superior or competitive performance on all tasks, compared to previous treestructured models as well as other sophisticated models. ?: Latent tree-structured models. †: Models which are pre-trained with large external corpora.",
"Our experimental results on the SNLI dataset are shown in table 2 . In this table, we report the test accuracy and number of trainable parameters for each model. Our SATA-LSTM again demonstrates its decent performance compared against the neural models built on both syntactic trees and latent trees, as well as the non-tree models. (Latent Syntax Tree-LSTM: BIBREF10 ( BIBREF10 ), Tree-based CNN: BIBREF35 ( BIBREF35 ), Gumbel Tree-LSTM: BIBREF11 ( BIBREF11 ), NSE: BIBREF36 ( BIBREF36 ), Reinforced Self-Attention Network: BIBREF4 ( BIBREF4 ), Residual stacked encoders: BIBREF37 ( BIBREF37 ), BiLSTM with generalized pooling: BIBREF38 ( BIBREF38 ).) Note that the number of learned parameters in our model is also comparable to other sophisticated models, showing the efficiency of our model."
],
"extractive_spans": [],
"free_form_answer": "Sentence classification baselines: RNTN (Socher et al. 2013), AdaMC-RNTN (Dong et al. 2014), TE-RNTN (Qian et al. 2015), TBCNN (Mou et al. 2015), Tree-LSTM (Tai, Socher, and Manning 2015), AdaHT-LSTM-CM (Liu, Qiu, and Huang 2017), DC-TreeLSTM (Liu, Qiu, and Huang 2017), TE-LSTM (Huang, Qian, and Zhu 2017), BiConTree (Teng and Zhang 2017), Gumbel Tree-LSTM (Choi, Yoo, and Lee 2018), TreeNet (Cheng et al. 2018), CNN (Kim 2014), AdaSent (Zhao, Lu, and Poupart 2015), LSTM-CNN (Zhou et al. 2016), byte-mLSTM (Radford, Jozefowicz, and Sutskever 2017), BCN + Char + CoVe (McCann et al. 2017), BCN + Char + ELMo (Peters et al. 2018). \nStanford Natural Language Inference baselines: Latent Syntax Tree-LSTM (Yogatama et al. 2017), Tree-based CNN (Mou et al. 2016), Gumbel Tree-LSTM (Choi, Yoo, and Lee 2018), NSE (Munkhdalai and Yu 2017), Reinforced Self- Attention Network (Shen et al. 2018), Residual stacked encoders: (Nie and Bansal 2017), BiLSTM with generalized pooling (Chen, Ling, and Zhu 2018).",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: The comparison of various models on different sentence classification tasks. We report the test accuracy of each model in percentage. Our SATA Tree-LSTM shows superior or competitive performance on all tasks, compared to previous treestructured models as well as other sophisticated models. ?: Latent tree-structured models. †: Models which are pre-trained with large external corpora.",
"Our SATA-LSTM again demonstrates its decent performance compared against the neural models built on both syntactic trees and latent trees, as well as the non-tree models. (Latent Syntax Tree-LSTM: BIBREF10 ( BIBREF10 ), Tree-based CNN: BIBREF35 ( BIBREF35 ), Gumbel Tree-LSTM: BIBREF11 ( BIBREF11 ), NSE: BIBREF36 ( BIBREF36 ), Reinforced Self-Attention Network: BIBREF4 ( BIBREF4 ), Residual stacked encoders: BIBREF37 ( BIBREF37 ), BiLSTM with generalized pooling: BIBREF38 ( BIBREF38 ).)"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"197290cb509b9a046b311719c6ce1ce408f3be8a",
"043654eefd60242ac8da08ddc1d4b8d73f86f653"
]
}
],
"nlp_background": [
"five"
],
"paper_read": [
"no"
],
"question": [
"Which baselines did they compare against?"
],
"question_id": [
"0ad4359e3e7e5e5f261c2668fe84c12bc762b3b8"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
""
],
"topic_background": [
"unfamiliar"
]
} | {
"caption": [
"Figure 1: A constituency tree example from Stanford Sentiment Treebank.",
"Figure 2: A diagram of SATA Tree-LSTM. The model has two separate tree-LSTM modules, the right of which (tag tree-LSTM) extracts a structure-aware tag representation to control the composition function of the remaining tree-LSTM (word tree-LSTM). Fully-connected: onelayered non-linear transformation.",
"Table 1: The comparison of various models on different sentence classification tasks. We report the test accuracy of each model in percentage. Our SATA Tree-LSTM shows superior or competitive performance on all tasks, compared to previous treestructured models as well as other sophisticated models. ?: Latent tree-structured models. †: Models which are pre-trained with large external corpora.",
"Table 2: The accuracy of diverse models on Stanford Natural Language Inference. For fair comparison, we only consider sentence-encoding based models. Our model achieves a comparable result with a moderate number of parameters. ?: Latent tree models.",
"Figure 4: A scatter plot whose points represent the intermediate representations for each node of the tree in Figure 1. From this figure, we can see the tendency of constructing the representations recursively from the low to the high level.",
"Figure 3: An ablation study on the core modules of our model. The test accuracy of each model on SST-2 is reported. The results demonstrate that the modules play an important role for achieving the superior performance of our model. FC: A fully connected-layer with a tanh function. w/o tags: Tag embeddings are not used. w/ tags: The naive tag embeddings are directly inserted into each node of a tree."
],
"file": [
"3-Figure1-1.png",
"4-Figure2-1.png",
"6-Table1-1.png",
"6-Table2-1.png",
"7-Figure4-1.png",
"7-Figure3-1.png"
]
} | [
"Which baselines did they compare against?"
] | [
[
"1809.02286-6-Table1-1.png",
"1809.02286-Quantitative Analysis-19"
]
] | [
"Sentence classification baselines: RNTN (Socher et al. 2013), AdaMC-RNTN (Dong et al. 2014), TE-RNTN (Qian et al. 2015), TBCNN (Mou et al. 2015), Tree-LSTM (Tai, Socher, and Manning 2015), AdaHT-LSTM-CM (Liu, Qiu, and Huang 2017), DC-TreeLSTM (Liu, Qiu, and Huang 2017), TE-LSTM (Huang, Qian, and Zhu 2017), BiConTree (Teng and Zhang 2017), Gumbel Tree-LSTM (Choi, Yoo, and Lee 2018), TreeNet (Cheng et al. 2018), CNN (Kim 2014), AdaSent (Zhao, Lu, and Poupart 2015), LSTM-CNN (Zhou et al. 2016), byte-mLSTM (Radford, Jozefowicz, and Sutskever 2017), BCN + Char + CoVe (McCann et al. 2017), BCN + Char + ELMo (Peters et al. 2018). \nStanford Natural Language Inference baselines: Latent Syntax Tree-LSTM (Yogatama et al. 2017), Tree-based CNN (Mou et al. 2016), Gumbel Tree-LSTM (Choi, Yoo, and Lee 2018), NSE (Munkhdalai and Yu 2017), Reinforced Self- Attention Network (Shen et al. 2018), Residual stacked encoders: (Nie and Bansal 2017), BiLSTM with generalized pooling (Chen, Ling, and Zhu 2018)."
] | 44 |
1809.01202 | Causal Explanation Analysis on Social Media | Understanding causal explanations - reasons given for happenings in one's life - has been found to be an important psychological factor linked to physical and mental health. Causal explanations are often studied through manual identification of phrases over limited samples of personal writing. Automatic identification of causal explanations in social media, while challenging in relying on contextual and sequential cues, offers a larger-scale alternative to expensive manual ratings and opens the door for new applications (e.g. studying prevailing beliefs about causes, such as climate change). Here, we explore automating causal explanation analysis, building on discourse parsing, and presenting two novel subtasks: causality detection (determining whether a causal explanation exists at all) and causal explanation identification (identifying the specific phrase that is the explanation). We achieve strong accuracies for both tasks but find different approaches best: an SVM for causality prediction (F1 = 0.791) and a hierarchy of Bidirectional LSTMs for causal explanation identification (F1 = 0.853). Finally, we explore applications of our complete pipeline (F1 = 0.868), showing demographic differences in mentions of causal explanation and that the association between a word and sentiment can change when it is used within a causal explanation. | {
"paragraphs": [
[
"Explanations of happenings in one's life, causal explanations, are an important topic of study in social, psychological, economic, and behavioral sciences. For example, psychologists have analyzed people's causal explanatory style BIBREF0 and found strong negative relationships with depression, passivity, and hostility, as well as positive relationships with life satisfaction, quality of life, and length of life BIBREF1 , BIBREF2 , BIBREF0 .",
"To help understand the significance of causal explanations, consider how they are applied to measuring optimism (and its converse, pessimism) BIBREF0 . For example, in “My parser failed because I always have bugs.”, the emphasized text span is considered a causal explanation which indicates pessimistic personality – a negative event where the author believes the cause is pervasive. However, in “My parser failed because I barely worked on the code.”, the explanation would be considered a signal of optimistic personality – a negative event for which the cause is believed to be short-lived.",
"Language-based models which can detect causal explanations from everyday social media language can be used for more than automating optimism detection. Language-based assessments would enable other large-scale downstream tasks: tracking prevailing causal beliefs (e.g., about climate change or autism), better extracting process knowledge from non-fiction (e.g., gravity causes objects to move toward one another), or detecting attribution of blame or praise in product or service reviews (“I loved this restaurant because the fish was cooked to perfection”).",
"In this paper, we introduce causal explanation analysis and its subtasks of detecting the presence of causality (causality prediction) and identifying explanatory phrases (causal explanation identification). There are many challenges to achieving these task. First, the ungrammatical texts in social media incur poor syntactic parsing results which drastically affect the performance of discourse relation parsing pipelines . Many causal relations are implicit and do not contain any discourse markers (e.g., `because'). Further, Explicit causal relations are also more difficult in social media due to the abundance of abbreviations and variations of discourse connectives (e.g., `cuz' and `bcuz').",
"Prevailing approaches for social media analyses, utilizing traditional linear models or bag of words models (e.g., SVM trained with n-gram, part-of-speech (POS) tags, or lexicon-based features) alone do not seem appropriate for this task since they simply cannot segment the text into meaningful discourse units or discourse arguments such as clauses or sentences rather than random consecutive token sequences or specific word tokens. Even when the discourse units are clear, parsers may still fail to accurately identify discourse relations since the content of social media is quite different than that of newswire which is typically used for discourse parsing.",
"In order to overcome these difficulties of discourse relation parsing in social media, we simplify and minimize the use of syntactic parsing results and capture relations between discourse arguments, and investigate the use of a recursive neural network model (RNN). Recent work has shown that RNNs are effective for utilizing discourse structures for their downstream tasks BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , but they have yet to be directly used for discourse relation prediction in social media. We evaluated our model by comparing it to off-the-shelf end-to-end discourse relation parsers and traditional models. We found that the SVM and random forest classifiers work better than the LSTM classifier for the causality detection, while the LSTM classifier outperforms other models for identifying causal explanation.",
"The contributions of this work include: (1) the proposal of models for both (a) causality prediction and (b) causal explanation identification, (2) the extensive evaluation of a variety of models from social media classification models and discourse relation parsers to RNN-based application models, demonstrating that feature-based models work best for causality prediction while RNNs are superior for the more difficult task of causal explanation identification, (3) performance analysis on architectural differences of the pipeline and the classifier structures, (4) exploration of the applications of causal explanation to downstream tasks, and (5) release of a novel, anonymized causality Facebook dataset along with our causality prediction and causal explanation identification models."
],
[
"Identifying causal explanations in documents can be viewed as discourse relation parsing. The Penn Discourse Treebank (PDTB) BIBREF7 has a `Cause' and `Pragmatic Cause' discourse type under a general `Contingency' class and Rhetorical Structure Theory (RST) BIBREF8 has a `Relations of Cause'. In most cases, the development of discourse parsers has taken place in-domain, where researchers have used the existing annotations of discourse arguments in newswire text (e.g. Wall Street Journal) from the discourse treebank and focused on exploring different features and optimizing various types of models for predicting relations BIBREF9 , BIBREF10 , BIBREF11 . In order to further develop automated systems, researchers have proposed end-to-end discourse relation parsers, building models which are trained and evaluated on the annotated PDTB and RST Discourse Treebank (RST DT). These corpora consist of documents from Wall Street Journal (WSJ) which are much more well-organized and grammatical than social media texts BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 .",
"Only a few works have attempted to parse discourse relations for out-of-domain problems such as text categorizations on social media texts; Ji and Bhatia used models which are pretrained with RST DT for building discourse structures from movie reviews, and Son adapted the PDTB discourse relation parsing approach for capturing counterfactual conditionals from tweets BIBREF4 , BIBREF3 , BIBREF16 . These works had substantial differences to what propose in this paper. First, Ji and Bhatia used a pretrained model (not fully optimal for some parts of the given task) in their pipeline; Ji's model performed worse than the baseline on the categorization of legislative bills, which is thought to be due to legislative discourse structures differing from those of the training set (WSJ corpus). Bhatia also used a pretrained model finding that utilizing discourse relation features did not boost accuracy BIBREF4 , BIBREF3 . Both Bhatia and Son used manual schemes which may limit the coverage of certain types of positive samples– Bhatia used a hand-crafted schema for weighting discourse structures for the neural network model and Son manually developed seven surface forms of counterfactual thinking for the rule-based system BIBREF4 , BIBREF16 . We use social-media-specific features from pretrained models which are directly trained on tweets and we avoid any hand-crafted rules except for those included in the existing discourse argument extraction techniques.",
"The automated systems for discourse relation parsing involve multiple subtasks from segmenting the whole text into discourse arguments to classifying discourse relations between the arguments. Past research has found that different types of models and features yield varying performance for each subtask. Some have optimized models for discourse relation classification (i.e. given a document indicating if the relation existing) without discourse argument parsing using models such as Naive-Bayes or SVMs, achieve relatively stronger accuracies but a simpler task than that associated with discourse arguments BIBREF10 , BIBREF11 , BIBREF9 . Researchers who, instead, tried to build the end-to-end parsing pipelines considered a wider range of approaches including sequence models and RNNs BIBREF12 , BIBREF15 , BIBREF14 , BIBREF17 . Particularly, when they tried to utilize the discourse structures for out-domain applications, they used RNN-based models and found that those models are advantageous for their downstream tasks BIBREF4 , BIBREF3 .",
"In our case, for identifying causal explanations from social media using discourse structure, we build an RNN-based model for its structural effectiveness in this task (see details in section UID13 ). However, we also note that simpler models such as SVMs and logistic regression obtained the state-of-the-art performances for text categorization tasks in social media BIBREF18 , BIBREF19 , so we build relatively simple models with different properties for each stage of the full pipeline of our parser."
],
[
"We build our model based on PDTB-style discourse relation parsing since PDTB has a relatively simpler text segmentation method; for explicit discourse relations, it finds the presence of discourse connectives within a document and extracts discourse arguments which parametrize the connective while for implicit relations, it considers all adjacent sentences as candidate discourse arguments."
],
[
"We created our own causal explanation dataset by collecting 3,268 random Facebook status update messages. Three well-trained annotators manually labeled whether or not each message contains the causal explanation and obtained 1,598 causality messages with substantial agreement ( $\\kappa =0.61$ ). We used the majority vote for our gold standard. Then, on each causality message, annotators identified which text spans are causal explanations.",
"For each task, we used 80% of the dataset for training our model and 10% for tuning the hyperparameters of our models. Finally, we evaluated all of our models on the remaining 10% (Table 1 and Table 2 ). For causal explanation detection task, we extracted discourse arguments using our parser and selected discourse arguments which most cover the annotated causal explanation text span as our gold standard."
],
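One simple way to map an annotated causal-explanation span onto a parsed discourse argument, as described above, is to pick the argument with maximal character overlap; this is only an illustrative heuristic with hypothetical names, not the authors' released code.

```python
def best_covering_argument(discourse_args, explanation_span):
    """discourse_args: list of (start, end) character offsets from the parser;
    explanation_span: (start, end) offsets of the annotated explanation.
    Returns the argument that covers the largest part of the annotation."""
    exp_start, exp_end = explanation_span

    def overlap(arg):
        start, end = arg
        return max(0, min(end, exp_end) - max(start, exp_start))

    return max(discourse_args, key=overlap)

# e.g. best_covering_argument([(0, 15), (16, 42)], (20, 40)) -> (16, 42)
```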
[
"We build two types of models. First, we develop feature-based models which utilize features of the successful models in social media analysis and causal relation discourse parsing. Then, we build a recursive neural network model which uses distributed representation of discourse arguments as this approach can even capture latent properties of causal relations which may exist between distant discourse arguments. We specifically selected bidirectional LSTM since the model with the discourse distributional structure built in this form outperformed the traditional models in similar NLP downstream tasks BIBREF3 .",
"As the first step of our pipeline, we use Tweebo parser BIBREF20 to extract syntactic features from messages. Then, we demarcate sentences using punctuation (`,') tag and periods. Among those sentences, we find discourse connectives defined in PDTB annotation along with a Tweet POS tag for conjunction words which can also be a discourse marker. In order to decide whether these connectives are really discourse connectives (e.g., I went home, but he stayed) as opposed to simple connections of two words (I like apple and banana) we see if verb phrases exist before and after the connective by using dependency parsing results. Although discourse connective disambiguation is a complicated task which can be much improved by syntactic features BIBREF21 , we try to minimize effects of syntactic parsing and simplify it since it is highly error-prone in social media. Finally, according to visual inspection, emojis (`E' tag) are crucial for discourse relation in social media so we take them as separate discourse arguments (e.g.,in “My test result... :(” the sad feeling is caused by the test result, but it cannot be captured by plain word tokens).",
"We trained a linear SVM, an rbf SVM, and a random forest with N-gram, charater N-gram, and tweet POS tags, sentiment tags, average word lengths and word counts from each message as they have a pivotal role in the models for many NLP downstream tasks in social media BIBREF19 , BIBREF18 . In addition to these features, we also extracted First-Last, First3 features and Word Pairs from every adjacent pair of discourse arguments since these features were most helpful for causal relation prediction BIBREF9 . First-Last, First3 features are first and last word and first three words of two discourse arguments of the relation, and Word Pairs are the cross product of words of those discourse arguments. These two features enable our model to capture interaction between two discourse arguments. BIBREF9 reported that these two features along with verbs, modality, context, and polarity (which can be captured by N-grams, sentiment tags and POS tags in our previous features) obtained the best performance for predicting Contingency class to which causality belongs.",
"We load the GLOVE word embedding BIBREF22 trained in Twitter for each token of extracted discourse arguments from messages. For the distributional representation of discourse arguments, we run a Word-level LSTM on the words' embeddings within each discourse argument and concatenate last hidden state vectors of forward LSTM ( $\\overrightarrow{h}$ ) and backward LSTM ( $\\overleftarrow{h}$ ) which is suggested by BIBREF3 ( $DA = [\\overrightarrow{h};\\overleftarrow{h}]$ ). Then, we feed the sequence of the vector representation of discourse arguments to the Discourse-argument-level LSTM (DA-level LSTM) to make a final prediction with log softmax function. With this structure, the model can learn the representation of interaction of tokens inside each discourse argument, then capture discourse relations across all of the discourse arguments in each message (Figure 2 ). In order to prevent the overfitting, we added a dropout layer between the Word-level LSTM and the DA-level LSTM layer.",
"We also explore subsets of the full RNN architecture, specifically with one of the two LSTM layers removed. In the first model variant, we directly input all word embeddings of a whole message to a BiLSTM layer and make prediction (Word LSTM) without the help of the distributional vector representations of discourse arguments. In the second model variant, we take the average of all word embeddings of each discourse argument ( $DA_k=\\frac{1}{N_k} \\sum _{i=1}^{N_k}W_{i}$ ), and use them as inputs to a BiLSTM layer (DA AVG LSTM) as the average vector of embeddings were quite effective for representing the whole sequence BIBREF3 , BIBREF5 . As with the full architectures, for CP both of these variants ends with a many-to-one classification per message, while the CEI model ends with a sequence of classifications."
],
[
"We explored three types of models (RBF SVM, Linear SVM, and Random Forest Classifier) which have previously been shown empirically useful for the language analysis in social media. We filtered out low frequency Word Pairs features as they tend to be noisy and sparse BIBREF9 . Then, we conducted univariate feature selection to restrict all remaining features to those showing at least a small relationship with the outcome. Specifically, we keep all features passing a family-wise error rate of $\\alpha = 60$ with the given outcome. After comparing the performance of the optimized version of each model, we also conducted a feature ablation test on the best model in order to see how much each feature contributes to the causality prediction.",
"We used bidirectional LSTMs for causality classification and causal explanation identification since the discourse arguments for causal explanation can show up either before and after the effected events or results and we want our model to be optimized for both cases. However, there is a risk of overfitting due to the dataset which is relatively small for the high complexity of the model, so we added a dropout layer (p=0.3) between the Word-level LSTM and the DA-level LSTM.",
"For tuning our model, we explore the dimensionality of word vector and LSTM hidden state vectors of discourse arguments of 25, 50, 100, and 200 as pretrained GLOVE vectors were trained in this setting. For optimization, we used Stochastic Gradient Descent (SGD) and Adam BIBREF23 with learning rates 0.01 and 0.001.",
"We ignore missing word embeddings because our dataset is quite small for retraining new word embeddings. However, if embeddings are extracted as separate discourse arguments, we used the average of all vectors of all discourse arguments in that message. Average embeddings have performed well for representing text sequences in other tasks BIBREF5 .",
"We first use state-of-the-art PDTB taggers for our baseline BIBREF13 , BIBREF12 for the evaluation of the causality prediction of our models ( BIBREF12 requires sentences extracted from the text as its input, so we used our parser to extract sentences from the message). Then, we compare how models work for each task and disassembled them to inspect how each part of the models can affect their final prediction performances. We conducted McNemar's test to determine whether the performance differences are statistically significant at $p < .05$ ."
],
[
"We investigated various models for both causality detection and explanation identification. Based on their performances on the task, we analyzed the relationships between the types of models and the tasks, and scrutinized further for the best performing models. For performance analysis, we reported weighted F1 of classes."
],
[
"In order to classify whether a message contains causal relation, we compared off-the-shelf PDTB parsers, linear SVM, RBF SVM, Random forest and LSTM classifiers. The off-the-shelf parsers achieved the lowest accuracies ( BIBREF12 and BIBREF13 in Table 3 ). This result can be expected since 1) these models were trained with news articles and 2) they are trained for all possible discourse relations in addition to causal relations (e.g., contrast, condition, etc). Among our suggested models, SVM and random forest classifier performed better than LSTM and, in the general trend, the more complex the models were, the worse they performed. This suggests that the models with more direct and simpler learning methods with features might classify the causality messages better than the ones more optimized for capturing distributional information or non-linear relationships of features.",
"Table 4 shows the results of a feature ablation test to see how each feature contributes to causality classification performance of the linear SVM classifier. POS tags caused the largest drop in F1. We suspect POS tags played a unique role because discourse connectives can have various surface forms (e.g., because, cuz, bcuz, etc) but still the same POS tag `P'. Also POS tags can capture the occurrences of modal verbs, a feature previously found to be very useful for detecting similar discourse relations BIBREF9 . N-gram features caused 0.022 F1 drop while sentiment tags did not affect the model when removed. Unlike the previous work where First-Last, First3 and Word pairs tended to gain a large F1 increase for multiclass discourse relation prediction, in our case, they did not affect the prediction performance compared to other feature types such as POS tags or N-grams."
],
[
"In this task, the model identifies causal explanations given the discourse arguments of the causality message. We explored over the same models as those we used for causality (sans the output layer), and found the almost opposite trend of performances (see Table 5 ). The Linear SVM obtained lowest F1 while the LSTM model made the best identification performance. As opposed to the simple binary classification of the causality messages, in order to detect causal explanation, it is more beneficial to consider the relation across discourse arguments of the whole message and implicit distributional representation due to the implicit causal relations between two distant arguments."
],
[
"For causality prediction, we experimented with only word tokens in the whole message without help of Word-level LSTM layer (Word LSTM), and F1 dropped by 0.064 (CP in Table 6 ). Also, when we used the average of the sequence of word embeddings of each discourse argument as an input to the DA-level LSTM and it caused F1 drop of 0.073. This suggests that the information gained from both the interaction of words in and in between discourse arguments help when the model utilizes the distributional representation of the texts.",
"For causal explanation identification, in order to test how the LSTM classifier works without its capability of capturing the relations between discourse arguments, we removed DA-level LSTM layer and ran the LSTM directly on the word embedding sequence for each discourse argument for classifying whether the argument is causal explanation, and the model had 0.061 F1 drop (Word LSTM in CEI in Table 6 ). Also, when we ran DA-level LSTM on the average vectors of the word sequences of each discourse argument of messages, F1 decreased to 0.818. This follows the similar pattern observed from other types of models performances (i.e., SVMs and Random Forest classifiers) that the models with higher complexity for capturing the interaction of discourse arguments tend to identify causal explanation with the higher accuracies.",
"For CEI task, we found that when the model ran on the sequence representation of discourse argument (DA AVG LSTM), its performance was higher than the plain sequence of word embeddings (Word LSTM). Finally, in both subtasks, when the models ran on both Word-level and DA-Level (Full LSTM), they obtained the highest performance."
],
[
"Evaluations thus far zeroed-in on each subtask of causal explanation analysis (i.e. CEI only focused on data already identified to contain causal explanations). Here, we seek to evaluate the complete pipeline of CP and CEI, starting from all of test data (those or without causality) and evaluating the final accuracy of CEI predictions. This is intended to evaluate CEI performance under an applied setting where one does not already know whether a document has a causal explanation.",
"There are several approaches we could take to perform CEI starting from unannotated data. We could simply run CEI prediction by itself (CEI Only) or the pipeline of CP first and then only run CEI on documents predicted as causal (CP + CEI). Further, the CEI model could be trained only on those documents annotated causal (as was done in the previous experiments) or on all training documents including many that are not causal.",
"Table 7 show results varying the pipeline and how CEI was trained. Though all setups performed decent ( $F1 > 0.81$ ) we see that the pipelined approach, first predicting causality (with the linear SVM) and then predicting causal explanations only for those with marked causal (CP + CEI $_{causal}$ ) yielded the strongest results. This also utilized the CEI model only trained on those annotated causal. Besides performance, an added benefit from this two step approach is that the CP step is less computational intensive of the CEI step and approximately 2/3 of documents will never need the CEI step applied.",
"We had an inevitable limitation on the size of our dataset, since there is no other causality dataset over social media and the annotation required an intensive iterative process. This might have limited performances of more complex models, but considering the processing time and the computation load, the combination of the linear model and the RNN-based model of our pipeline obtained both the high performance and efficiency for the practical applications to downstream tasks. In other words, it's possible the linear model will not perform as well if the training size is increased substantially. However, a linear model could still be used to do a first-pass, computationally efficient labeling, in order to shortlist social media posts for further labeling from an LSTM or more complex model."
],
[
"Here, we explore the use of causal explanation analysis for downstream tasks. First we look at the relationship between use of causal explanation and one's demographics: age and gender. Then, we consider their use in sentiment analysis for extracting the causes of polarity ratings. Research involving human subjects was approved by the University of Pennsylvania Institutional Review Board."
],
[
"We developed a pipeline for causal explanation analysis over social media text, including both causality prediction and causal explanation identification. We examined a variety of model types and RNN architectures for each part of the pipeline, finding an SVM best for causality prediction and a hierarchy of BiLSTMs for causal explanation identification, suggesting the later task relies more heavily on sequential information. In fact, we found replacing either layer of the hierarchical LSTM architecture (the word-level or the DA-level) with a an equivalent “bag of features” approach resulted in reduced accuracy. Results of our whole pipeline of causal explanation analysis were found quite strong, achieving an $F1=0.868$ at identifying discourse arguments that are causal explanations.",
"Finally, we demonstrated use of our models in applications, finding associations between demographics and rate of mentioning causal explanations, as well as showing differences in the top words predictive of negative ratings in Yelp reviews. Utilization of discourse structure in social media analysis has been a largely untapped area of exploration, perhaps due to its perceived difficulty. We hope the strong results of causal explanation identification here leads to the integration of more syntax and deeper semantics into social media analyses and ultimately enables new applications beyond the current state of the art."
],
[
"This work was supported, in part, by a grant from the Templeton Religion Trust (ID #TRT0048). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. We also thank Laura Smith, Yiyi Chen, Greta Jawel and Vanessa Hernandez for their work in identifying causal explanations."
]
],
"section_name": [
"Introduction",
"Related Work",
"Methods",
"Dataset",
"Model",
"Experiment",
"Results",
"Causality Prediction",
"Causal Explanation Identification",
"Architectural Variants",
"Complete Pipeline",
"Exploration",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"3e7f7c8a5572cbd61176a48ad9a4d1006c6afa0d",
"0a32ef84af8862efb96994fa0d87a3728068800d"
],
"answer": [
{
"evidence": [
"We first use state-of-the-art PDTB taggers for our baseline BIBREF13 , BIBREF12 for the evaluation of the causality prediction of our models ( BIBREF12 requires sentences extracted from the text as its input, so we used our parser to extract sentences from the message). Then, we compare how models work for each task and disassembled them to inspect how each part of the models can affect their final prediction performances. We conducted McNemar's test to determine whether the performance differences are statistically significant at $p < .05$ ."
],
"extractive_spans": [
"state-of-the-art PDTB taggers"
],
"free_form_answer": "",
"highlighted_evidence": [
"We first use state-of-the-art PDTB taggers for our baseline BIBREF13 , BIBREF12 for the evaluation of the causality prediction of our models ( BIBREF12 requires sentences extracted from the text as its input, so we used our parser to extract sentences from the message)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 5: Causal explanation identification performance. Bold indicates significant imrpovement over next best model (p < .05)"
],
"extractive_spans": [],
"free_form_answer": "Linear SVM, RBF SVM, and Random Forest",
"highlighted_evidence": [
"FLOAT SELECTED: Table 5: Causal explanation identification performance. Bold indicates significant imrpovement over next best model (p < .05)"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"2b413669fd1e681656c8d43a27df86e649065edf",
"c7d4a630661cd719ea504dba56393f78278b296b"
]
},
{
"annotation_id": [
"71bc2caa46a859720ebdd335eb2f8b03a746cd0c",
"ccbec5bde08058f25d1a44710d295c1e2104ab54"
],
"answer": [
{
"evidence": [
"We created our own causal explanation dataset by collecting 3,268 random Facebook status update messages. Three well-trained annotators manually labeled whether or not each message contains the causal explanation and obtained 1,598 causality messages with substantial agreement ( $\\kappa =0.61$ ). We used the majority vote for our gold standard. Then, on each causality message, annotators identified which text spans are causal explanations."
],
"extractive_spans": [
"Facebook status update messages"
],
"free_form_answer": "",
"highlighted_evidence": [
"We created our own causal explanation dataset by collecting 3,268 random Facebook status update messages."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We created our own causal explanation dataset by collecting 3,268 random Facebook status update messages. Three well-trained annotators manually labeled whether or not each message contains the causal explanation and obtained 1,598 causality messages with substantial agreement ( $\\kappa =0.61$ ). We used the majority vote for our gold standard. Then, on each causality message, annotators identified which text spans are causal explanations."
],
"extractive_spans": [
"Facebook status update messages"
],
"free_form_answer": "",
"highlighted_evidence": [
"We created our own causal explanation dataset by collecting 3,268 random Facebook status update messages."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"2b413669fd1e681656c8d43a27df86e649065edf",
"c7d4a630661cd719ea504dba56393f78278b296b"
]
}
],
"nlp_background": [
"five",
"five"
],
"paper_read": [
"no",
"no"
],
"question": [
"What baselines did they consider?",
"What types of social media did they consider?"
],
"question_id": [
"4cbe5a36b492b99f9f9fea8081fe4ba10a7a0e94",
"a4d115220438c0ded06a91ad62337061389a6747"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: A casual relation characterizes the connection between two discourse arguments, one of which is the causal explanation.",
"Table 1: Number of messages containing causality or not in our dataset.",
"Table 2: The number of discourse arguments in causality messages. Across 1,598 total causality messages, we found 7,015 discourse arguments (Total DA) and the one which covers annotated causal explanation are used as causal explanation discourse arguments (CE DA)",
"Figure 2: LSTM classifier for causality detection and explanation identification",
"Table 5: Causal explanation identification performance. Bold indicates significant imrpovement over next best model (p < .05)",
"Table 3: Causality prediction performance across different predictive models. Bold indicates significant improvement over the LSTM",
"Table 4: Feature ablation test of Linear SVM for causality prediction",
"Table 6: The effect of Word-level LSTM (Word LSTM) and discourse argument LSTM (DA AVG LSTM) for causality prediction (CP) and causal explanation identification (CEI). Note that, as described in methods, there are architectural differences for CP and CEI models with the same names, most notably that the output layer is always a single classification for CP and a sequence of classifications for CEI.",
"Table 7: The effect of Linear SVM Cauality model (CP) within our pipeline. CEIall: LSTM CEI models trained on all messages; CEIcausal: LSTM CEI models trained only on causality messages (CEIcausal); CP + CEIall|causal: the combination of Linear SVM and each LSTM model. Bold: significant (p < .05) increase in F1 over the next best model, suggesting the two-step approach worked best.",
"Table 8: Top words most associated with negative reviews from within causal explanations (CE) and outside of causal explanation (Non-CE)."
],
"file": [
"1-Figure1-1.png",
"4-Table1-1.png",
"4-Table2-1.png",
"5-Figure2-1.png",
"6-Table5-1.png",
"6-Table3-1.png",
"6-Table4-1.png",
"7-Table6-1.png",
"7-Table7-1.png",
"8-Table8-1.png"
]
} | [
"What baselines did they consider?"
] | [
[
"1809.01202-6-Table5-1.png",
"1809.01202-Experiment-4"
]
] | [
"Linear SVM, RBF SVM, and Random Forest"
] | 45 |
1909.02027 | An Evaluation Dataset for Intent Classification and Out-of-Scope Prediction | Task-oriented dialog systems need to know when a query falls outside their range of supported intents, but current text classification corpora only define label sets that cover every example. We introduce a new dataset that includes queries that are out-of-scope---i.e., queries that do not fall into any of the system's supported intents. This poses a new challenge because models cannot assume that every query at inference time belongs to a system-supported intent class. Our dataset also covers 150 intent classes over 10 domains, capturing the breadth that a production task-oriented agent must handle. We evaluate a range of benchmark classifiers on our dataset along with several different out-of-scope identification schemes. We find that while the classifiers perform well on in-scope intent classification, they struggle to identify out-of-scope queries. Our dataset and evaluation fill an important gap in the field, offering a way of more rigorously and realistically benchmarking text classification in task-driven dialog systems. | {
"paragraphs": [
[
"Task-oriented dialog systems have become ubiquitous, providing a means for billions of people to interact with computers using natural language. Moreover, the recent influx of platforms and tools such as Google's DialogFlow or Amazon's Lex for building and deploying such systems makes them even more accessible to various industries and demographics across the globe.",
"Tools for developing such systems start by guiding developers to collect training data for intent classification: the task of identifying which of a fixed set of actions the user wishes to take based on their query. Relatively few public datasets exist for evaluating performance on this task, and those that do exist typically cover only a very small number of intents (e.g. BIBREF0, which has 7 intents). Furthermore, such resources do not facilitate analysis of out-of-scope queries: queries that users may reasonably make, but fall outside of the scope of the system-supported intents.",
"Figure FIGREF1 shows example query-response exchanges between a user and a task-driven dialog system for personal finance. In the first user-system exchange, the system correctly identifies the user's intent as an in-scope balance query. In the second and third exchanges, the user queries with out-of-scope inputs. In the second exchange, the system incorrectly identifies the query as in-scope and yields an unrelated response. In the third exchange, the system correctly classifies the user's query as out-of-scope, and yields a fallback response.",
"Out-of-scope queries are inevitable for a task-oriented dialog system, as most users will not be fully cognizant of the system's capabilities, which are limited by the fixed number of intent classes. Correctly identifying out-of-scope cases is thus crucial in deployed systems—both to avoid performing the wrong action and also to identify potential future directions for development. However, this problem has seen little attention in analyses and evaluations of intent classification systems.",
"This paper fills this gap by analyzing intent classification performance with a focus on out-of-scope handling. To do so, we constructed a new dataset with 23,700 queries that are short and unstructured, in the same style made by real users of task-oriented systems. The queries cover 150 intents, plus out-of-scope queries that do not fall within any of the 150 in-scope intents.",
"We evaluate a range of benchmark classifiers and out-of-scope handling methods on our dataset. BERT BIBREF1 yields the best in-scope accuracy, scoring 96% or above even when we limit the training data or introduce class imbalance. However, all methods struggle with identifying out-of-scope queries. Even when a large number of out-of-scope examples are provided for training, there is a major performance gap, with the best system scoring 66% out-of-scope recall. Our results show that while current models work on known classes, they have difficulty on out-of-scope queries, particularly when data is not plentiful. This dataset will enable future work to address this key gap in the research and development of dialog systems. All data introduced in this paper can be found at https://github.com/clinc/oos-eval."
],
[
"We introduce a new crowdsourced dataset of 23,700 queries, including 22,500 in-scope queries covering 150 intents, which can be grouped into 10 general domains. The dataset also includes 1,200 out-of-scope queries. Table TABREF2 shows examples of the data."
],
[
"We defined the intents with guidance from queries collected using a scoping crowdsourcing task, which prompted crowd workers to provide questions and commands related to topic domains in the manner they would interact with an artificially intelligent assistant. We manually grouped data generated by scoping tasks into intents. To collect additional data for each intent, we used the rephrase and scenario crowdsourcing tasks proposed by BIBREF2. For each intent, there are 100 training queries, which is representative of what a team with a limited budget could gather while developing a task-driven dialog system. Along with the 100 training queries, there are 20 validation and 30 testing queries per intent."
],
[
"Out-of-scope queries were collected in two ways. First, using worker mistakes: queries written for one of the 150 intents that did not actually match any of the intents. Second, using scoping and scenario tasks with prompts based on topic areas found on Quora, Wikipedia, and elsewhere. To help ensure the richness of this additional out-of-scope data, each of these task prompts contributed to at most four queries. Since we use the same crowdsourcing method for collecting out-of-scope data, these queries are similar in style to their in-scope counterparts.",
"The out-of-scope data is difficult to collect, requiring expert knowledge of the in-scope intents to painstakingly ensure that no out-of-scope query sample is mistakenly labeled as in-scope (and vice versa). Indeed, roughly only 69% of queries collected with prompts targeting out-of-scope yielded out-of-scope queries. Of the 1,200 out-of-scope queries collected, 100 are used for validation and 100 are used for training, leaving 1,000 for testing."
],
[
"For all queries collected, all tokens were down-cased, and all end-of-sentence punctuation was removed. Additionally, all duplicate queries were removed and replaced.",
"In an effort to reduce bias in the in-scope data, we placed all queries from a given crowd worker in a single split (train, validation, or test). This avoids the potential issue of similar queries from a crowd worker ending up in both the train and test sets, for instance, which would make the train and test distributions unrealistically similar. We note that this is a recommendation from concurrent work by BIBREF3. We also used this procedure for the out-of-scope set, except that we split the data into train/validation/test based on task prompt instead of worker."
],
[
"In addition to the full dataset, we consider three variations. First, Small, in which there are only 50 training queries per each in-scope intent, rather than 100. Second, Imbalanced, in which intents have either 25, 50, 75, or 100 training queries. Third, OOS+, in which there are 250 out-of-scope training examples, rather than 100. These are intended to represent production scenarios where data may be in limited or uneven supply."
],
[
"To quantify the challenges that our new dataset presents, we evaluated the performance of a range of classifier models and out-of-scope prediction schemes."
],
[
"SVM: A linear support vector machine with bag-of-words sentence representations.",
"MLP: A multi-layer perceptron with USE embeddings BIBREF4 as input.",
"FastText: A shallow neural network that averages embeddings of n-grams BIBREF5.",
"CNN: A convolutional neural network with non-static word embeddings initialized with GloVe BIBREF6.",
"BERT: A neural network that is trained to predict elided words in text and then fine-tuned on our data BIBREF1.",
"Platforms: Several platforms exist for the development of task-oriented agents. We consider Google's DialogFlow and Rasa NLU with spacy-sklearn."
],
[
"We use three baseline approaches for the task of predicting whether a query is out-of-scope: (1) oos-train, where we train an additional (i.e. 151st) intent on out-of-scope training data; (2) oos-threshold, where we use a threshold on the classifier's probability estimate; and (3) oos-binary, a two-stage process where we first classify a query as in- or out-of-scope, then classify it into one of the 150 intents if classified as in-scope.",
"To reduce the severity of the class imbalance between in-scope versus out-of-scope query samples (i.e., 15,000 versus 250 queries for OOS+), we investigate two strategies when using oos-binary: one where we undersample the in-scope data and train using 1,000 in-scope queries sampled evenly across all intents (versus 250 out-of-scope), and another where we augment the 250 OOS+ out-of-scope training queries with 14,750 sentences sampled from Wikipedia.",
"From a development point of view, the oos-train and oos-binary methods both require careful curation of an out-of-scope training set, and this set can be tailored to individual systems. The oos-threshold method is a more general decision rule that can be applied to any model that produces a probability. In our evaluation, the out-of-scope threshold was chosen to be the value which yielded the highest validation score across all intents, treating out-of-scope as its own intent."
],
[
"We consider two performance metrics for all scenarios: (1) accuracy over the 150 intents, and (2) recall on out-of-scope queries. We use recall to evaluate out-of-scope since we are more interested in cases where such queries are predicted as in-scope, as this would mean a system gives the user a response that is completely wrong. Precision errors are less problematic as the fallback response will prompt the user to try again, or inform the user of the system's scope of supported domains."
],
[
"Table TABREF14 presents results for all models across the four variations of the dataset. First, BERT is consistently the best approach for in-scope, followed by MLP. Second, out-of-scope query performance is much lower than in-scope across all methods. Training on less data (Small and Imbalanced) yields models that perform slightly worse on in-scope queries. The trend is mostly the opposite when evaluating out-of-scope, where recall increases under the Small and Imbalanced training conditions. Under these two conditions, the size of the in-scope training set was decreased, while the number of out-of-scope training queries remained constant. This indicates that out-of-scope performance can be increased by increasing the relative number of out-of-scope training queries. We do just that in the OOS+ setting—where the models were trained on the full training set as well as 150 additional out-of-scope queries—and see that performance on out-of-scope increases substantially, yet still remains low relative to in-scope accuracy."
],
[
"In-scope accuracy using the oos-threshold approach is largely comparable to oos-train. Out-of-scope recall tends to be much higher on Full, but several models suffer greatly on the limited datasets. BERT and MLP are the top oos-threshold performers, and for several models the threshold approach provided erratic results, particularly FastText and Rasa."
],
[
"Table TABREF19 compares classifier performance using the oos-binary scheme. In-scope accuracy suffers for all models using the undersampling scheme when compared to training on the full dataset using the oos-train and oos-threshold approaches shown in Table TABREF14. However, out-of-scope recall improves compared to oos-train on Full but not OOS+. Augmenting the out-of-scope training set appears to help improve both in-scope and out-of-scope performance compared to undersampling, but out-of-scope performance remains weak."
],
[
"In most other analyses and datasets, the idea of out-of-scope data is not considered, and instead the output classes are intended to cover all possible queries (e.g., TREC BIBREF7). Recent work by BIBREF8 considers a similar problem they call out-of-distribution detection. They use other datasets or classes excluded during training to form the out-of-distribution samples. This means that the out-of-scope samples are from a small set of coherent classes that differ substantially from the in-distribution samples. Similar experiments were conducted for evaluating unknown intent discovery models in BIBREF9. In contrast, our out-of-scope queries cover a broad range of phenomena and are similar in style and often similar in topic to in-scope queries, representing things a user might say given partial knowledge of the capabilities of a system.",
"Table TABREF20 compares our dataset with other short-query intent classification datasets. The Snips BIBREF0 dataset and the dataset presented in BIBREF10 are the most similar to the in-scope part of our work, with the same type of conversational agent requests. Like our work, both of these datasets were bootstrapped using crowdsourcing. However, the Snips dataset has only a small number of intents and an enormous number of examples of each. Snips does present a low-data variation, with 70 training queries per intent, in which performance drops slightly. The dataset presented in BIBREF10 has a large number of intent classes, yet also contains a wide range of samples per intent class (ranging from 24 to 5,981 queries per intent, and so is not constrained in all cases).",
"BIBREF11 created datasets with constrained training data, but with very few intents, presenting a very different type of challenge. We also include the TREC query classification datasets BIBREF7, which have a large set of labels, but they describe the desired response type (e.g., distance, city, abbreviation) rather than the action intents we consider. Moreover, TREC contains only questions and no commands. Crucially, none of the other datasets summarized in Table TABREF20 offer a feasible way to evaluate out-of-scope performance.",
"The Dialog State Tracking Challenge (DSTC) datasets are another related resource. Specifically, DSTC 1 BIBREF12, DSTC 2 BIBREF13, and DSTC 3 BIBREF14 contain “chatbot style\" queries, but the datasets are focused on state tracking. Moreover, most if not all queries in these datasets are in-scope. In contrast, the focus of our analysis is on both in- and out-of-scope queries that challenge a virtual assistant to determine whether it can provide an acceptable response."
],
[
"This paper analyzed intent classification and out-of-scope prediction methods with a new dataset consisting of carefully collected out-of-scope data. Our findings indicate that certain models like BERT perform better on in-scope classification, but all methods investigated struggle with identifying out-of-scope queries. Models that incorporate more out-of-scope training data tend to improve on out-of-scope performance, yet such data is expensive and difficult to generate. We believe our analysis and dataset will lead to developing better, more robust dialog systems.",
"All datasets introduced in this paper can be found at https://github.com/clinc/oos-eval."
]
],
"section_name": [
"Introduction",
"Dataset",
"Dataset ::: In-Scope Data Collection",
"Dataset ::: Out-of-Scope Data Collection",
"Dataset ::: Data Preprocessing and Partitioning",
"Dataset ::: Dataset Variants",
"Benchmark Evaluation",
"Benchmark Evaluation ::: Classifier Models",
"Benchmark Evaluation ::: Out-of-Scope Prediction",
"Benchmark Evaluation ::: Metrics",
"Results ::: Results with oos-train",
"Results ::: Results with oos-threshold",
"Results ::: Results with oos-binary",
"Prior Work",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"8ae507d5fd72264c5e26b804f802e7fe35a3acb3",
"ec4e8ed6bad6662d60e71d3c72c741cd3c33e6a9"
],
"answer": [
{
"evidence": [
"We defined the intents with guidance from queries collected using a scoping crowdsourcing task, which prompted crowd workers to provide questions and commands related to topic domains in the manner they would interact with an artificially intelligent assistant. We manually grouped data generated by scoping tasks into intents. To collect additional data for each intent, we used the rephrase and scenario crowdsourcing tasks proposed by BIBREF2. For each intent, there are 100 training queries, which is representative of what a team with a limited budget could gather while developing a task-driven dialog system. Along with the 100 training queries, there are 20 validation and 30 testing queries per intent."
],
"extractive_spans": [],
"free_form_answer": "intents are annotated manually with guidance from queries collected using a scoping crowdsourcing task",
"highlighted_evidence": [
"We defined the intents with guidance from queries collected using a scoping crowdsourcing task, which prompted crowd workers to provide questions and commands related to topic domains in the manner they would interact with an artificially intelligent assistant. We manually grouped data generated by scoping tasks into intents. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We defined the intents with guidance from queries collected using a scoping crowdsourcing task, which prompted crowd workers to provide questions and commands related to topic domains in the manner they would interact with an artificially intelligent assistant. We manually grouped data generated by scoping tasks into intents. To collect additional data for each intent, we used the rephrase and scenario crowdsourcing tasks proposed by BIBREF2. For each intent, there are 100 training queries, which is representative of what a team with a limited budget could gather while developing a task-driven dialog system. Along with the 100 training queries, there are 20 validation and 30 testing queries per intent."
],
"extractive_spans": [
"manually "
],
"free_form_answer": "",
"highlighted_evidence": [
" We manually grouped data generated by scoping tasks into intents. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"526826a61db025e9f3a6105ef5e37cf8344c118c",
"aef1c79bd4559e975c94d2d1148cf12ac98e2957"
],
"answer": [
{
"evidence": [
"Benchmark Evaluation ::: Classifier Models",
"SVM: A linear support vector machine with bag-of-words sentence representations.",
"MLP: A multi-layer perceptron with USE embeddings BIBREF4 as input.",
"FastText: A shallow neural network that averages embeddings of n-grams BIBREF5.",
"CNN: A convolutional neural network with non-static word embeddings initialized with GloVe BIBREF6.",
"BERT: A neural network that is trained to predict elided words in text and then fine-tuned on our data BIBREF1.",
"Platforms: Several platforms exist for the development of task-oriented agents. We consider Google's DialogFlow and Rasa NLU with spacy-sklearn."
],
"extractive_spans": [
"SVM",
"MLP",
"FastText",
"CNN",
"BERT",
"Google's DialogFlow",
"Rasa NLU"
],
"free_form_answer": "",
"highlighted_evidence": [
"Benchmark Evaluation ::: Classifier Models\nSVM: A linear support vector machine with bag-of-words sentence representations.\n\nMLP: A multi-layer perceptron with USE embeddings BIBREF4 as input.\n\nFastText: A shallow neural network that averages embeddings of n-grams BIBREF5.\n\nCNN: A convolutional neural network with non-static word embeddings initialized with GloVe BIBREF6.\n\nBERT: A neural network that is trained to predict elided words in text and then fine-tuned on our data BIBREF1.\n\nPlatforms: Several platforms exist for the development of task-oriented agents. We consider Google's DialogFlow and Rasa NLU with spacy-sklearn."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Benchmark Evaluation ::: Classifier Models",
"SVM: A linear support vector machine with bag-of-words sentence representations.",
"MLP: A multi-layer perceptron with USE embeddings BIBREF4 as input.",
"FastText: A shallow neural network that averages embeddings of n-grams BIBREF5.",
"CNN: A convolutional neural network with non-static word embeddings initialized with GloVe BIBREF6.",
"BERT: A neural network that is trained to predict elided words in text and then fine-tuned on our data BIBREF1.",
"Platforms: Several platforms exist for the development of task-oriented agents. We consider Google's DialogFlow and Rasa NLU with spacy-sklearn."
],
"extractive_spans": [
"SVM",
"MLP",
"FastText",
"CNN",
"BERT",
"DialogFlow",
"Rasa NLU"
],
"free_form_answer": "",
"highlighted_evidence": [
" Classifier Models\nSVM: A linear support vector machine with bag-of-words sentence representations.\n\nMLP: A multi-layer perceptron with USE embeddings BIBREF4 as input.\n\nFastText: A shallow neural network that averages embeddings of n-grams BIBREF5.\n\nCNN: A convolutional neural network with non-static word embeddings initialized with GloVe BIBREF6.\n\nBERT: A neural network that is trained to predict elided words in text and then fine-tuned on our data BIBREF1.\n\nPlatforms: Several platforms exist for the development of task-oriented agents. We consider Google's DialogFlow and Rasa NLU with spacy-sklearn."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"c025600373d051bea24fa1f71740d4641b7c9710",
"dad95fcb7657008e9b83668821d2ad1f35643f94"
],
"answer": [
{
"evidence": [
"This paper fills this gap by analyzing intent classification performance with a focus on out-of-scope handling. To do so, we constructed a new dataset with 23,700 queries that are short and unstructured, in the same style made by real users of task-oriented systems. The queries cover 150 intents, plus out-of-scope queries that do not fall within any of the 150 in-scope intents."
],
"extractive_spans": [
"23,700 "
],
"free_form_answer": "",
"highlighted_evidence": [
"To do so, we constructed a new dataset with 23,700 queries that are short and unstructured, in the same style made by real users of task-oriented systems. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We introduce a new crowdsourced dataset of 23,700 queries, including 22,500 in-scope queries covering 150 intents, which can be grouped into 10 general domains. The dataset also includes 1,200 out-of-scope queries. Table TABREF2 shows examples of the data."
],
"extractive_spans": [],
"free_form_answer": " 23,700 queries, including 22,500 in-scope queries covering 150 intents, which can be grouped into 10 general domains and 1,200 out-of-scope queries.",
"highlighted_evidence": [
"We introduce a new crowdsourced dataset of 23,700 queries, including 22,500 in-scope queries covering 150 intents, which can be grouped into 10 general domains. The dataset also includes 1,200 out-of-scope queries."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"4075ef68df83642b6877dc6126db3ec137945dc4",
"6fe2637337fd769f69f84d1d6a9caf3c9344cd3b"
],
"answer": [
{
"evidence": [
"We introduce a new crowdsourced dataset of 23,700 queries, including 22,500 in-scope queries covering 150 intents, which can be grouped into 10 general domains. The dataset also includes 1,200 out-of-scope queries. Table TABREF2 shows examples of the data."
],
"extractive_spans": [],
"free_form_answer": "crowsourcing platform",
"highlighted_evidence": [
"We introduce a new crowdsourced dataset of 23,700 queries, including 22,500 in-scope queries covering 150 intents, which can be grouped into 10 general domains. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We defined the intents with guidance from queries collected using a scoping crowdsourcing task, which prompted crowd workers to provide questions and commands related to topic domains in the manner they would interact with an artificially intelligent assistant. We manually grouped data generated by scoping tasks into intents. To collect additional data for each intent, we used the rephrase and scenario crowdsourcing tasks proposed by BIBREF2. For each intent, there are 100 training queries, which is representative of what a team with a limited budget could gather while developing a task-driven dialog system. Along with the 100 training queries, there are 20 validation and 30 testing queries per intent.",
"Out-of-scope queries were collected in two ways. First, using worker mistakes: queries written for one of the 150 intents that did not actually match any of the intents. Second, using scoping and scenario tasks with prompts based on topic areas found on Quora, Wikipedia, and elsewhere. To help ensure the richness of this additional out-of-scope data, each of these task prompts contributed to at most four queries. Since we use the same crowdsourcing method for collecting out-of-scope data, these queries are similar in style to their in-scope counterparts."
],
"extractive_spans": [],
"free_form_answer": "For ins scope data collection:crowd workers which provide questions and commands related to topic domains and additional data the rephrase and scenario crowdsourcing tasks proposed by BIBREF2 is used. \nFor out of scope data collection: from workers mistakes-queries written for one of the 150 intents that did not actually match any of the intents and using scoping and scenario tasks with prompts based on topic areas found on Quora, Wikipedia, and elsewhere.",
"highlighted_evidence": [
"We defined the intents with guidance from queries collected using a scoping crowdsourcing task, which prompted crowd workers to provide questions and commands related to topic domains in the manner they would interact with an artificially intelligent assistant. We manually grouped data generated by scoping tasks into intents. To collect additional data for each intent, we used the rephrase and scenario crowdsourcing tasks proposed by BIBREF2. ",
"Out-of-scope queries were collected in two ways. First, using worker mistakes: queries written for one of the 150 intents that did not actually match any of the intents. Second, using scoping and scenario tasks with prompts based on topic areas found on Quora, Wikipedia, and elsewhere."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"How was the dataset annotated?",
"Which classifiers are evaluated?",
"What is the size of this dataset?",
"Where does the data come from?"
],
"question_id": [
"2c7e94a65f5f532aa31d3e538dcab0468a43b264",
"149da739b1c19a157880d9d4827f0b692006aa2c",
"27de1d499348e17fec324d0ef00361a490659988",
"cfcdd73e712caf552ba44d0aa264d8dace65a589"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"dataset",
"dataset",
"dataset",
"dataset"
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: Example exchanges between a user (blue, right side) and a task-driven dialog system for personal finance (grey, left side). The system correctly identi-",
"Table 1: Sample queries from our dataset. The out-of-scope queries are similar in style to the in-scope queries.",
"Table 2: Benchmark classifier results under each data condition using the oos-train (top half) and oos-threshold (bottom half) prediction methods.",
"Table 3: Results of oos-binary experiments on OOS+, where we compare performance of undersampling (under) and augmentation using sentences from Wikipedia (wiki aug). The wiki aug approach was too large for the DialogFlow and Rasa classifiers.",
"Table 4: Classification dataset properties. Ours has the broadest range of intents and specially collected out-ofscope queries. We consider “chatbot style” queries to be short, possibly unstructured questions and commands."
],
"file": [
"1-Figure1-1.png",
"3-Table1-1.png",
"4-Table2-1.png",
"4-Table3-1.png",
"5-Table4-1.png"
]
} | [
"How was the dataset annotated?",
"What is the size of this dataset?",
"Where does the data come from?"
] | [
[
"1909.02027-Dataset ::: In-Scope Data Collection-0"
],
[
"1909.02027-Dataset-0",
"1909.02027-Introduction-4"
],
[
"1909.02027-Dataset ::: In-Scope Data Collection-0",
"1909.02027-Dataset-0",
"1909.02027-Dataset ::: Out-of-Scope Data Collection-0"
]
] | [
"intents are annotated manually with guidance from queries collected using a scoping crowdsourcing task",
" 23,700 queries, including 22,500 in-scope queries covering 150 intents, which can be grouped into 10 general domains and 1,200 out-of-scope queries.",
"For ins scope data collection:crowd workers which provide questions and commands related to topic domains and additional data the rephrase and scenario crowdsourcing tasks proposed by BIBREF2 is used. \nFor out of scope data collection: from workers mistakes-queries written for one of the 150 intents that did not actually match any of the intents and using scoping and scenario tasks with prompts based on topic areas found on Quora, Wikipedia, and elsewhere."
] | 46 |
1906.01081 | Handling Divergent Reference Texts when Evaluating Table-to-Text Generation | Automatically constructed datasets for generating text from semi-structured data (tables), such as WikiBio, often contain reference texts that diverge from the information in the corresponding semi-structured data. We show that metrics which rely solely on the reference texts, such as BLEU and ROUGE, show poor correlation with human judgments when those references diverge. We propose a new metric, PARENT, which aligns n-grams from the reference and generated texts to the semi-structured data before computing their precision and recall. Through a large scale human evaluation study of table-to-text models for WikiBio, we show that PARENT correlates with human judgments better than existing text generation metrics. We also adapt and evaluate the information extraction based evaluation proposed by Wiseman et al (2017), and show that PARENT has comparable correlation to it, while being easier to use. We show that PARENT is also applicable when the reference texts are elicited from humans using the data from the WebNLG challenge. | {
"paragraphs": [
[
"The task of generating natural language descriptions of structured data (such as tables) BIBREF2 , BIBREF3 , BIBREF4 has seen a growth in interest with the rise of sequence to sequence models that provide an easy way of encoding tables and generating text from them BIBREF0 , BIBREF1 , BIBREF5 , BIBREF6 .",
"For text generation tasks, the only gold standard metric is to show the output to humans for judging its quality, but this is too expensive to apply repeatedly anytime small modifications are made to a system. Hence, automatic metrics that compare the generated text to one or more reference texts are routinely used to compare models BIBREF7 . For table-to-text generation, automatic evaluation has largely relied on BLEU BIBREF8 and ROUGE BIBREF9 . The underlying assumption behind these metrics is that the reference text is gold-standard, i.e., it is the ideal target text that a system should generate. In practice, however, when datasets are collected automatically and heuristically, the reference texts are often not ideal. Figure FIGREF2 shows an example from the WikiBio dataset BIBREF0 . Here the reference contains extra information which no system can be expected to produce given only the associated table. We call such reference texts divergent from the table.",
"We show that existing automatic metrics, including BLEU, correlate poorly with human judgments when the evaluation sets contain divergent references (§ SECREF36 ). For many table-to-text generation tasks, the tables themselves are in a pseudo-natural language format (e.g., WikiBio, WebNLG BIBREF6 , and E2E-NLG BIBREF10 ). In such cases we propose to compare the generated text to the underlying table as well to improve evaluation. We develop a new metric, PARENT (Precision And Recall of Entailed N-grams from the Table) (§ SECREF3 ). When computing precision, PARENT effectively uses a union of the reference and the table, to reward correct information missing from the reference. When computing recall, it uses an intersection of the reference and the table, to ignore extra incorrect information in the reference. The union and intersection are computed with the help of an entailment model to decide if a text n-gram is entailed by the table. We show that this method is more effective than using the table as an additional reference. Our main contributions are:"
],
[
"We briefly review the task of generating natural language descriptions of semi-structured data, which we refer to as tables henceforth BIBREF11 , BIBREF12 . Tables can be expressed as set of records INLINEFORM0 , where each record is a tuple (entity, attribute, value). When all the records are about the same entity, we can truncate the records to (attribute, value) pairs. For example, for the table in Figure FIGREF2 , the records are {(Birth Name, Michael Dahlquist), (Born, December 22 1965), ...}. The task is to generate a text INLINEFORM1 which summarizes the records in a fluent and grammatical manner. For training and evaluation we further assume that we have a reference description INLINEFORM2 available for each table. We let INLINEFORM3 denote an evaluation set of tables, references and texts generated from a model INLINEFORM4 , and INLINEFORM5 , INLINEFORM6 denote the collection of n-grams of order INLINEFORM7 in INLINEFORM8 and INLINEFORM9 , respectively. We use INLINEFORM10 to denote the count of n-gram INLINEFORM11 in INLINEFORM12 , and INLINEFORM13 to denote the minimum of its counts in INLINEFORM14 and INLINEFORM15 . Our goal is to assign a score to the model, which correlates highly with human judgments of the quality of that model."
],
[
"PARENT evaluates each instance INLINEFORM0 separately, by computing the precision and recall of INLINEFORM1 against both INLINEFORM2 and INLINEFORM3 ."
],
[
" BIBREF1 proposed to use an auxiliary model, trained to extract structured records from text, for evaluation. However, the extraction model presented in that work is limited to the closed-domain setting of basketball game tables and summaries. In particular, they assume that each table has exactly the same set of attributes for each entity, and that the entities can be identified in the text via string matching. These assumptions are not valid for the open-domain WikiBio dataset, and hence we train our own extraction model to replicate their evaluation scheme.",
"Our extraction system is a pointer-generator network BIBREF19 , which learns to produce a linearized version of the table from the text. The network learns which attributes need to be populated in the output table, along with their values. It is trained on the training set of WikiBio. At test time we parsed the output strings into a set of (attribute, value) tuples and compare it to the ground truth table. The F-score of this text-to-table system was INLINEFORM0 , which is comparable to other challenging open-domain settings BIBREF20 . More details are included in the Appendix SECREF52 .",
"Given this information extraction system, we consider the following metrics for evaluation, along the lines of BIBREF1 . Content Selection (CS): F-score for the (attribute, value) pairs extracted from the generated text compared to those extracted from the reference. Relation Generation (RG): Precision for the (attribute, value) pairs extracted from the generated text compared to those in the ground truth table. RG-F: Since our task emphasizes the recall of information from the table as well, we consider another variant which computes the F-score of the extracted pairs to those in the table. We omit the content ordering metric, since our extraction system does not align records to the input text."
],
[
"In this section we compare several automatic evaluation metrics by checking their correlation with the scores assigned by humans to table-to-text models. Specifically, given INLINEFORM0 models INLINEFORM1 , and their outputs on an evaluation set, we show these generated texts to humans to judge their quality, and obtain aggregated human evaluation scores for all the models, INLINEFORM2 (§ SECREF33 ). Next, to evaluate an automatic metric, we compute the scores it assigns to each model, INLINEFORM3 , and check the Pearson correlation between INLINEFORM4 and INLINEFORM5 BIBREF21 ."
],
[
"Our main experiments are on the WikiBio dataset BIBREF0 , which is automatically constructed and contains many divergent references. In § SECREF47 we also present results on the data released as part of the WebNLG challenge.",
"We developed several models of varying quality for generating text from the tables in WikiBio. This gives us a diverse set of outputs to evaluate the automatic metrics on. Table TABREF32 lists the models along with their hyperparameter settings and their scores from the human evaluation (§ SECREF33 ). Our focus is primarily on neural sequence-to-sequence methods since these are most widely used, but we also include a template-based baseline. All neural models were trained on the WikiBio training set. Training details and sample outputs are included in Appendices SECREF56 & SECREF57 .",
"We divide these models into two categories and measure correlation separately for both the categories. The first category, WikiBio-Systems, includes one model each from the four families listed in Table TABREF32 . This category tests whether a metric can be used to compare different model families with a large variation in the quality of their outputs. The second category, WikiBio-Hyperparams, includes 13 different hyperparameter settings of PG-Net BIBREF19 , which was the best performing system overall. 9 of these were obtained by varying the beam size and length normalization penalty of the decoder network BIBREF23 , and the remaining 4 were obtained by re-scoring beams of size 8 with the information extraction model described in § SECREF4 . All the models in this category produce high quality fluent texts, and differ primarily on the quantity and accuracy of the information they express. Here we are testing whether a metric can be used to compare similar systems with a small variation in performance. This is an important use-case as metrics are often used to tune hyperparameters of a model."
],
[
"We collected human judgments on the quality of the 16 models trained for WikiBio, plus the reference texts. Workers on a crowd-sourcing platform, proficient in English, were shown a table with pairs of generated texts, or a generated text and the reference, and asked to select the one they prefer. Figure FIGREF34 shows the instructions they were given. Paired comparisons have been shown to be superior to rating scales for comparing generated texts BIBREF24 . However, for measuring correlation the comparisons need to be aggregated into real-valued scores, INLINEFORM0 , for each of the INLINEFORM1 models. For this, we use Thurstone's method BIBREF22 , which assigns a score to each model based on how many times it was preferred over an alternative.",
"The data collection was performed separately for models in the WikiBio-Systems and WikiBio-Hyperparams categories. 1100 tables were sampled from the development set, and for each table we got 8 different sentence pairs annotated across the two categories, resulting in a total of 8800 pairwise comparisons. Each pair was judged by one worker only which means there may be noise at the instance-level, but the aggregated system-level scores had low variance (cf. Table TABREF32 ). In total around 500 different workers were involved in the annotation. References were also included in the evaluation, and they received a lower score than PG-Net, highlighting the divergence in WikiBio."
],
[
"Text only: We compare BLEU BIBREF8 , ROUGE BIBREF9 , METEOR BIBREF18 , CIDEr and CIDEr-D BIBREF25 using their publicly available implementations.",
"Information Extraction based: We compare the CS, RG and RG-F metrics discussed in § SECREF4 .",
"Text & Table: We compare a variant of BLEU, denoted as BLEU-T, where the values from the table are used as additional references. BLEU-T draws inspiration from iBLEU BIBREF26 but instead rewards n-grams which match the table rather than penalizing them. For PARENT, we compare both the word-overlap model (PARENT-W) and the co-occurrence model (PARENT-C) for determining entailment. We also compare versions where a single INLINEFORM0 is tuned on the entire dataset to maximize correlation with human judgments, denoted as PARENT*-W/C."
],
[
"We use bootstrap sampling (500 iterations) over the 1100 tables for which we collected human annotations to get an idea of how the correlation of each metric varies with the underlying data. In each iteration, we sample with replacement, tables along with their references and all the generated texts for that table. Then we compute aggregated human evaluation and metric scores for each of the models and compute the correlation between the two. We report the average correlation across all bootstrap samples for each metric in Table TABREF37 . The distribution of correlations for the best performing metrics are shown in Figure FIGREF38 .",
"Table TABREF37 also indicates whether PARENT is significantly better than a baseline metric. BIBREF21 suggest using the William's test for this purpose, but since we are computing correlations between only 4/13 systems at a time, this test has very weak power in our case. Hence, we use the bootstrap samples to obtain a INLINEFORM0 confidence interval of the difference in correlation between PARENT and any other metric and check whether this is above 0 BIBREF27 .",
"Correlations are higher for the systems category than the hyperparams category. The latter is a more difficult setting since very similar models are compared, and hence the variance of the correlations is also high. Commonly used metrics which only rely on the reference (BLEU, ROUGE, METEOR, CIDEr) have only weak correlations with human judgments. In the hyperparams category, these are often negative, implying that tuning models based on these may lead to selecting worse models. BLEU performs the best among these, and adding n-grams from the table as references improves this further (BLEU-T).",
"Among the extractive evaluation metrics, CS, which also only relies on the reference, has poor correlation in the hyperparams category. RG-F, and both variants of the PARENT metric achieve the highest correlation for both settings. There is no significant difference among these for the hyperparams category, but for systems, PARENT-W is significantly better than the other two. While RG-F needs a full information extraction pipeline in its implementation, PARENT-C only relies on co-occurrence counts, and PARENT-W can be used out-of-the-box for any dataset. To our knowledge, this is the first rigorous evaluation of using information extraction for generation evaluation.",
"On this dataset, the word-overlap model showed higher correlation than the co-occurrence model for entailment. In § SECREF47 we will show that for the WebNLG dataset, where more paraphrasing is involved between the table and text, the opposite is true. Lastly, we note that the heuristic for selecting INLINEFORM0 is sufficient to produce high correlations for PARENT, however, if human annotations are available, this can be tuned to produce significantly higher correlations (PARENT*-W/C)."
],
[
"In this section we further analyze the performance of PARENT-W under different conditions, and compare to the other best metrics from Table TABREF37 .",
"To study the correlation as we vary the number of divergent references, we also collected binary labels from workers for whether a reference is entailed by the corresponding table. We define a reference as entailed when it mentions only information which can be inferred from the table. Each table and reference pair was judged by 3 independent workers, and we used the majority vote as the label for that pair. Overall, only INLINEFORM0 of the references were labeled as entailed by the table. Fleiss' INLINEFORM1 was INLINEFORM2 , which indicates a fair agreement. We found the workers sometimes disagreed on what information can be reasonably entailed by the table.",
"Figure FIGREF40 shows the correlations as we vary the percent of entailed examples in the evaluation set of WikiBio. Each point is obtained by fixing the desired proportion of entailed examples, and sampling subsets from the full set which satisfy this proportion. PARENT and RG-F remain stable and show a high correlation across the entire range, whereas BLEU and BLEU-T vary a lot. In the hyperparams category, the latter two have the worst correlation when the evaluation set contains only entailed examples, which may seem surprising. However, on closer examination we found that this subset tends to omit a lot of information from the tables. Systems which produce more information than these references are penalized by BLEU, but not in the human evaluation. PARENT overcomes this issue by measuring recall against the table in addition to the reference.",
"We check how different components in the computation of PARENT contribute to its correlation to human judgments. Specifically, we remove the probability INLINEFORM0 of an n-gram INLINEFORM1 being entailed by the table from Eqs. EQREF19 and EQREF23 . The average correlation for PARENT-W drops to INLINEFORM5 in this case. We also try a variant of PARENT with INLINEFORM6 , which removes the contribution of Table Recall (Eq. EQREF22 ). The average correlation is INLINEFORM7 in this case. With these components, the correlation is INLINEFORM8 , showing that they are crucial to the performance of PARENT.",
" BIBREF28 point out that hill-climbing on an automatic metric is meaningless if that metric has a low instance-level correlation to human judgments. In Table TABREF46 we show the average accuracy of the metrics in making the same judgments as humans between pairs of generated texts. Both variants of PARENT are significantly better than the other metrics, however the best accuracy is only INLINEFORM0 for the binary task. This is a challenging task, since there are typically only subtle differences between the texts. Achieving higher instance-level accuracies will require more sophisticated language understanding models for evaluation."
],
[
"To check how PARENT correlates with human judgments when the references are elicited from humans (and less likely to be divergent), we check its correlation with the human ratings provided for the systems competing in the WebNLG challenge BIBREF6 . The task is to generate text describing 1-5 RDF triples (e.g. John E Blaha, birthPlace, San Antonio), and human ratings were collected for the outputs of 9 participating systems on 223 instances. These systems include a mix of pipelined, statistical and neural methods. Each instance has upto 3 reference texts associated with the RDF triples, which we use for evaluation.",
"The human ratings were collected on 3 distinct aspects – grammaticality, fluency and semantics, where semantics corresponds to the degree to which a generated text agrees with the meaning of the underlying RDF triples. We report the correlation of several metrics with these ratings in Table TABREF48 . Both variants of PARENT are either competitive or better than the other metrics in terms of the average correlation to all three aspects. This shows that PARENT is applicable for high quality references as well.",
"While BLEU has the highest correlation for the grammar and fluency aspects, PARENT does best for semantics. This suggests that the inclusion of source tables into the evaluation orients the metric more towards measuring the fidelity of the content of the generation. A similar trend is seen comparing BLEU and BLEU-T. As modern neural text generation systems are typically very fluent, measuring their fidelity is of increasing importance. Between the two entailment models, PARENT-C is better due to its higher correlation with the grammaticality and fluency aspects.",
"The INLINEFORM0 parameter in the calculation of PARENT decides whether to compute recall against the table or the reference (Eq. EQREF22 ). Figure FIGREF50 shows the distribution of the values taken by INLINEFORM1 using the heuristic described in § SECREF3 for instances in both WikiBio and WebNLG. For WikiBio, the recall of the references against the table is generally low, and hence the recall of the generated text relies more on the table. For WebNLG, where the references are elicited from humans, this recall is much higher (often INLINEFORM2 ), and hence the recall of the generated text relies more on the reference."
],
[
"Over the years several studies have evaluated automatic metrics for measuring text generation performance BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 , BIBREF35 . The only consensus from these studies seems to be that no single metric is suitable across all tasks. A recurring theme is that metrics like BLEU and NIST BIBREF36 are not suitable for judging content quality in NLG. Recently, BIBREF37 did a comprehensive study of several metrics on the outputs of state-of-the-art NLG systems, and found that while they showed acceptable correlation with human judgments at the system level, they failed to show any correlation at the sentence level. Ours is the first study which checks the quality of metrics when table-to-text references are divergent. We show that in this case even system level correlations can be unreliable.",
"Hallucination BIBREF38 , BIBREF39 refers to when an NLG system generates text which mentions extra information than what is present in the source from which it is generated. Divergence can be viewed as hallucination in the reference text itself. PARENT deals with hallucination by discounting n-grams which do not overlap with either the reference or the table.",
"PARENT draws inspiration from iBLEU BIBREF26 , a metric for evaluating paraphrase generation, which compares the generated text to both the source text and the reference. While iBLEU penalizes texts which match the source, here we reward such texts since our task values accuracy of generated text more than the need for paraphrasing the tabular content BIBREF40 . Similar to SARI for text simplification BIBREF41 and Q-BLEU for question generation BIBREF42 , PARENT falls under the category of task-specific metrics."
],
[
"We study the automatic evaluation of table-to-text systems when the references diverge from the table. We propose a new metric, PARENT, which shows the highest correlation with humans across a range of settings with divergent references in WikiBio. We also perform the first empirical evaluation of information extraction based metrics BIBREF1 , and find RG-F to be effective. Lastly, we show that PARENT is comparable to the best existing metrics when references are elicited by humans on the WebNLG data."
],
[
"Bhuwan Dhingra is supported by a fellowship from Siemens, and by grants from Google. We thank Maruan Al-Shedivat, Ian Tenney, Tom Kwiatkowski, Michael Collins, Slav Petrov, Jason Baldridge, David Reitter and other members of the Google AI Language team for helpful discussions and suggestions. We thank Sam Wiseman for sharing data for an earlier version of this paper. We also thank the anonymous reviewers for their feedback."
],
[
"For evaluation via information extraction BIBREF1 we train a model for WikiBio which accepts text as input and generates a table as the output. Tables in WikiBio are open-domain, without any fixed schema for which attributes may be present or absent in an instance. Hence we employ the Pointer-Generator Network (PG-Net) BIBREF19 for this purpose. Specifically, we use a sequence-to-sequence model, whose encoder and decoder are both single-layer bi-directional LSTMs. The decoder is augmented with an attention mechanism over the states of the encoder. Further, it also uses a copy mechanism to optionally copy tokens directly from the source text. We do not use the coverage mechanism of BIBREF19 since that is specific to the task of summarization they study. The decoder is trained to produce a linearized version of the table where the rows and columns are flattened into a sequence, and separate by special tokens. Figure FIGREF53 shows an example.",
"Clearly, since the references are divergent, the model cannot be expected to produce the entire table, and we see some false information being hallucinated after training. Nevertheless, as we show in § SECREF36 , this system can be used for evaluating generated texts. After training, we can parse the output sequence along the special tokens INLINEFORM0 R INLINEFORM1 and INLINEFORM2 C INLINEFORM3 to get a set of (attribute, value) pairs. Table TABREF54 shows the precision, recall and F-score of these extracted pairs against the ground truth tables, where the attributes and values are compared using an exact string match."
],
[
"After tuning we found the same set of hyperparameters to work well for both the table-to-text PG-Net, and the inverse information extraction PG-Net. The hidden state size of the biLSTMs was set to 200. The input and output vocabularies were set to 50000 most common words in the corpus, with additional special symbols for table attribute names (such as “birth-date”). The embeddings of the tokens in the vocabulary were initialized with Glove BIBREF43 . Learning rate of INLINEFORM0 was used during training, with the Adam optimizer, and a dropout of INLINEFORM1 was also applied to the outputs of the biLSTM. Models were trained till the loss on the dev set stopped dropping. Maximum length of a decoded text was set to 40 tokens, and that of the tables was set to 120 tokens. Various beam sizes and length normalization penalties were applied for the table-to-text system, which are listed in the main paper. For the information extraction system, we found a beam size of 8 and no length penalty to produce the highest F-score on the dev set."
],
[
"Table TABREF55 shows some sample references and the corresponding predictions from the best performing model, PG-Net for WikiBio."
]
],
"section_name": [
"Introduction",
"Table-to-Text Generation",
"PARENT",
"Evaluation via Information Extraction",
"Experiments & Results",
"Data & Models",
"Human Evaluation",
"Compared Metrics",
"Correlation Comparison",
"Analysis",
"WebNLG Dataset",
"Related Work",
"Conclusions",
"Acknowledgements",
"Information Extraction System",
"Hyperparameters",
"Sample Outputs"
]
} | {
"answers": [
{
"annotation_id": [
"b3c1d62049b2fc7ee2113f40310056d754d155c5",
"d1bb221526f5e07b69775abe1bb58ae26f2bd593"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"We show that existing automatic metrics, including BLEU, correlate poorly with human judgments when the evaluation sets contain divergent references (§ SECREF36 ). For many table-to-text generation tasks, the tables themselves are in a pseudo-natural language format (e.g., WikiBio, WebNLG BIBREF6 , and E2E-NLG BIBREF10 ). In such cases we propose to compare the generated text to the underlying table as well to improve evaluation. We develop a new metric, PARENT (Precision And Recall of Entailed N-grams from the Table) (§ SECREF3 ). When computing precision, PARENT effectively uses a union of the reference and the table, to reward correct information missing from the reference. When computing recall, it uses an intersection of the reference and the table, to ignore extra incorrect information in the reference. The union and intersection are computed with the help of an entailment model to decide if a text n-gram is entailed by the table. We show that this method is more effective than using the table as an additional reference. Our main contributions are:",
"PARENT evaluates each instance INLINEFORM0 separately, by computing the precision and recall of INLINEFORM1 against both INLINEFORM2 and INLINEFORM3 ."
],
"extractive_spans": [],
"free_form_answer": "Answer with content missing: (Parent subsections) combine precisions for n-gram orders 1-4",
"highlighted_evidence": [
"PARENT\nPARENT evaluates each instance INLINEFORM0 separately, by computing the precision and recall of INLINEFORM1 against both INLINEFORM2 and INLINEFORM3 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"43494b586d6efb46f29d42990bf357931ac2d6e3",
"77a7898c0e83985d5ca0dc7f1ea665892d0baa69"
],
"answer": [
{
"evidence": [
"The data collection was performed separately for models in the WikiBio-Systems and WikiBio-Hyperparams categories. 1100 tables were sampled from the development set, and for each table we got 8 different sentence pairs annotated across the two categories, resulting in a total of 8800 pairwise comparisons. Each pair was judged by one worker only which means there may be noise at the instance-level, but the aggregated system-level scores had low variance (cf. Table TABREF32 ). In total around 500 different workers were involved in the annotation. References were also included in the evaluation, and they received a lower score than PG-Net, highlighting the divergence in WikiBio."
],
"extractive_spans": [],
"free_form_answer": "about 500",
"highlighted_evidence": [
"In total around 500 different workers were involved in the annotation.",
"about 500"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"b6860f76450a9767c494a08374b3d47ab519b6ed",
"be3f819c8f634408bafa53e961d5e5b82431211a"
],
"answer": [
{
"evidence": [
"We use bootstrap sampling (500 iterations) over the 1100 tables for which we collected human annotations to get an idea of how the correlation of each metric varies with the underlying data. In each iteration, we sample with replacement, tables along with their references and all the generated texts for that table. Then we compute aggregated human evaluation and metric scores for each of the models and compute the correlation between the two. We report the average correlation across all bootstrap samples for each metric in Table TABREF37 . The distribution of correlations for the best performing metrics are shown in Figure FIGREF38 .",
"FLOAT SELECTED: Table 2: Correlation of metrics with human judgments on WikiBio. A superscript of C/W indicates that the correlation is significantly lower than that of PARENTC/W using a bootstrap confidence test for α = 0.1.",
"FLOAT SELECTED: Table 4: Average pearson correlation across 500 bootstrap samples of each metric to human ratings for each aspect of the generations from the WebNLG challenge.",
"The human ratings were collected on 3 distinct aspects – grammaticality, fluency and semantics, where semantics corresponds to the degree to which a generated text agrees with the meaning of the underlying RDF triples. We report the correlation of several metrics with these ratings in Table TABREF48 . Both variants of PARENT are either competitive or better than the other metrics in terms of the average correlation to all three aspects. This shows that PARENT is applicable for high quality references as well."
],
"extractive_spans": [],
"free_form_answer": "Best proposed metric has average correlation with human judgement of 0.913 and 0.846 compared to best compared metrics result of 0.758 and 0.829 on WikiBio and WebNLG challenge.",
"highlighted_evidence": [
"We report the average correlation across all bootstrap samples for each metric in Table TABREF37 .",
"FLOAT SELECTED: Table 2: Correlation of metrics with human judgments on WikiBio. A superscript of C/W indicates that the correlation is significantly lower than that of PARENTC/W using a bootstrap confidence test for α = 0.1.",
"FLOAT SELECTED: Table 4: Average pearson correlation across 500 bootstrap samples of each metric to human ratings for each aspect of the generations from the WebNLG challenge.",
"We report the correlation of several metrics with these ratings in Table TABREF48 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 2: Correlation of metrics with human judgments on WikiBio. A superscript of C/W indicates that the correlation is significantly lower than that of PARENTC/W using a bootstrap confidence test for α = 0.1."
],
"extractive_spans": [],
"free_form_answer": "Their average correlation tops the best other model by 0.155 on WikiBio.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Correlation of metrics with human judgments on WikiBio. A superscript of C/W indicates that the correlation is significantly lower than that of PARENTC/W using a bootstrap confidence test for α = 0.1."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Ngrams of which length are aligned using PARENT?",
"How many people participated in their evaluation study of table-to-text models?",
"By how much more does PARENT correlate with human judgements in comparison to other text generation metrics?"
],
"question_id": [
"28067da818e3f61f8b5152c0d42a531bf0f987d4",
"bf3b27a4f4be1f9ae31319877fd0c75c03126fd5",
"ffa7f91d6406da11ddf415ef094aaf28f3c3872d"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: A table from the WikiBio dataset (right), its reference description and three hypothetical generated texts with scores assigned to them by automatic evaluation metrics. Text which cannot be inferred from the table is in red, and text which can be inferred but isn’t present in the reference is in green. PARENT is our proposed metric.",
"Table 1: Models used for WikiBio, with the human evaluation scores for these model outputs and the reference texts. PG-Net: Pointer-Generator network. Human scores computed using Thurstone’s method (Tsukida and Gupta, 2011).",
"Figure 2: Instructions to crowd-workers for comparing two generated texts.",
"Table 2: Correlation of metrics with human judgments on WikiBio. A superscript of C/W indicates that the correlation is significantly lower than that of PARENTC/W using a bootstrap confidence test for α = 0.1.",
"Figure 3: Distribution of metric correlations across 500 bootstrap samples. PRT = PARENT.",
"Figure 4: Correlation of the metrics to human judgment as the percentage of entailed examples in WikiBio is varied.",
"Table 3: Accuracy on making the same judgments as humans between pairs of generated texts. p < 0.01∗/0.05†/0.10‡: accuracy is significantly higher than the next best accuracy to the left using a paired McNemar’s test.",
"Table 4: Average pearson correlation across 500 bootstrap samples of each metric to human ratings for each aspect of the generations from the WebNLG challenge.",
"Figure 5: Histogram of the recall of the references against the table (Eq. 6), which is used to set 1 − λ. Lower values indicate that the metric relies more on the table and less on the reference.",
"Figure 6: An input-output pair for the information extraction system. <R> and <C> are special symbols used to separate (attribute, value) pairs and attributes from values, respectively.",
"Table 5: Performance of the Information Extraction system.",
"Table 6: Sample references and predictions from PG-Net with beam size 8. Information which is absent from the reference, but can be inferred from the table is in bold. Information which is present in the reference, but cannot be inferred from the table is in italics."
],
"file": [
"2-Figure1-1.png",
"5-Table1-1.png",
"6-Figure2-1.png",
"6-Table2-1.png",
"7-Figure3-1.png",
"7-Figure4-1.png",
"8-Table3-1.png",
"8-Table4-1.png",
"9-Figure5-1.png",
"11-Figure6-1.png",
"11-Table5-1.png",
"12-Table6-1.png"
]
} | [
"Ngrams of which length are aligned using PARENT?",
"How many people participated in their evaluation study of table-to-text models?",
"By how much more does PARENT correlate with human judgements in comparison to other text generation metrics?"
] | [
[
"1906.01081-PARENT-0",
"1906.01081-Introduction-2"
],
[
"1906.01081-Human Evaluation-1"
],
[
"1906.01081-WebNLG Dataset-1",
"1906.01081-8-Table4-1.png",
"1906.01081-Correlation Comparison-0",
"1906.01081-6-Table2-1.png"
]
] | [
"Answer with content missing: (Parent subsections) combine precisions for n-gram orders 1-4",
"about 500",
"Their average correlation tops the best other model by 0.155 on WikiBio."
] | 48 |
1812.10479 | Multimodal deep learning for short-term stock volatility prediction | Stock market volatility forecasting is a task relevant to assessing market risk. We investigate the interaction between news and prices for the one-day-ahead volatility prediction using state-of-the-art deep learning approaches. The proposed models are trained either end-to-end or using sentence encoders transfered from other tasks. We evaluate a broad range of stock market sectors, namely Consumer Staples, Energy, Utilities, Heathcare, and Financials. Our experimental results show that adding news improves the volatility forecasting as compared to the mainstream models that rely only on price data. In particular, our model outperforms the widely-recognized GARCH(1,1) model for all sectors in terms of coefficient of determination $R^2$, $MSE$ and $MAE$, achieving the best performance when training from both news and price data. | {
"paragraphs": [
[
"Natural Language Processing (NLP) has increasingly attracted the attention of the financial community. This trend can be explained by at least three major factors. The first factor refers to the business perspective. It is the economics of gaining competitive advantage using alternative sources of data and going beyond historical stock prices, thus, trading by analyzing market news automatically. The second factor is the major advancements in the technologies to collect, store, and query massive amounts of user-generated data almost in real-time. The third factor refers to the progress made by the NLP community in understanding unstructured text. Over the last decades the number of studies using NLP for financial forecasting has experienced exponential growth. According to BIBREF0 , until 2008, less than five research articles were published per year mentioning both “stock market” and “text mining” or “sentiment analysis” keywords. In 2012, this number increased to slightly more than ten articles per year. The last numbers available for 2016 indicates this has increased to sixty articles per year.",
"The ability to mechanically harvest the sentiment from texts using NLP has shed light on conflicting theories of financial economics. Historically, there has been two differing views on whether disagreement among market participants induces more trades. The “non-trade theorem” BIBREF1 states that assuming all market participants have common knowledge about a market event, the level of disagreement among the participants does not increase the number of trades but only leads to a revision of the market quotes. In contrast, the theoretically framework proposed in BIBREF2 advocates that disagreement among market participants increases trading volume. Using textual data from Yahoo and RagingBull.com message boards to measure the dispersion of opinions (positive or negative) among traders, it was shown in BIBREF3 that disagreement among users' messages helps to predict subsequent trading volume and volatility. Similar relation between disagreement and increased trading volume was found in BIBREF4 using Twitter posts. Additionally, textual analysis is adding to the theories of medium-term/long-term momentum/reversal in stock markets BIBREF5 . The unified Hong and Stein model BIBREF6 on stock's momentum/reversal proposes that investors underreact to news, causing slow price drifts, and overreact to price shocks not accompanied by news, hence inducing reversals. This theoretical predicated behaviour between price and news was systematically estimated and supported in BIBREF7 , BIBREF8 using financial media headlines and in BIBREF9 using the Consumer Confidence Index® published by The Conference Board BIBREF10 . Similarly, BIBREF11 uses the Harvard IV-4 sentiment lexicon to count the occurrence of words with positive and negative connotation of the Wall Street Journal showing that negative sentiment is a good predictor of price returns and trading volumes.",
"Accurate models for forecasting both price returns and volatility are equally important in the financial domain. Volatility measures how wildly the asset is expected to oscillate in a given time period and is related to the second moment of the price return distribution. In general terms, forecasting price returns is relevant to take speculative positions. The volatility, on the other hand, measures the risk of these positions. On a daily basis, financial institutions need to assess the short-term risk of their portfolios. Measuring the risk is essential in many aspects. It is imperative for regulatory capital disclosures required by banking supervision bodies. Moreover, it is useful to dynamically adjust position sizing accordingly to market conditions, thus, maintaining the risk within reasonable levels.",
"Although, it is crucial to predict the short-term volatility from the financial markets application perspective, much of the current NLP research on volatility forecasting focus on the volatility prediction for very long-term horizons (see BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 ). Predominately, these works are built on extensions of the bag-of-words representation that has the main drawback of not capturing word order. Financial forecasting, however, requires the ability to capture semantics that is dependent upon word order. For example, the headline “Qualcomm sues Apple for contract breach” and “Apple sues Qualcomm for contract breach” trigger different responses for each stock and for the market aggregated index, however, they share the same bag-of-words representation. Additionally, these works use features from a pretrained sentiment analyis model to train the financial forecasting model. A key limitation of this process is that it requires a labelled sentiment dataset. Additionally, the error propagation is not end-to-end. In this work, we fill in the gaps of volatility prediction research in the following manner:"
],
[
"Previous work in BIBREF12 incorporates sections of the “Form 10-K” to predict the volatility twelve months after the report is released. They train a Support Vector Regression model on top of sparse representation (bag-of-words) with standard term weighting (e.g. Term-Frequency). This work was extended in BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 by employing the Loughran-McDonald Sentiment Word Lists BIBREF20 , which contain three lists where words are grouped by their sentiments (positive, negative and neutral). In all these works, the textual representation is engineered using the following steps: 1) For each sentiment group, the list is expanded by retrieving 20 most similar words for each word using Word2Vec word embeddings BIBREF21 . 2) Finally, each 10-K document is represented using the expanded lists of words. The weight of each word in this sparse representation is defined using Information Retrieval (IR) methods such as term-frequency (tf) and term-frequency with inverted document frequency (tfidf). Particularly, BIBREF16 shows that results can be improved using enhanced IR methods and projecting each sparse feature into a dense space using Principal Component Analysis (PCA).",
"The works described above ( BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 ) target long-horizon volatility predictions (one year or quarterly BIBREF16 ). In particular, BIBREF16 and BIBREF15 uses market data (price) features along with the textual representation of the 10-K reports. These existing works that employ multi-modal learning BIBREF22 are based on a late fusion approach. For example, stacking ensembles to take into account the price and text predictions BIBREF16 . In contrast, our end-to-end trained model can learn the joint distribution of both price and text.",
"Predicting the price direction rather than the volatility was the focus in BIBREF23 . They extracted sentiment words from Twitter posts to build a time series of collective Profile of Mood States (POMS). Their results show that collective mood accurately predicts the direction of Down Jones stock index (86.7% accuracy). In BIBREF24 handcrafted text representations including term count, noun-phrase tags and extracted named entities are employed for predicting stock market direction using Support Vector Machine (SVM). An extension of Latent Dirichlet Allocation (LDA) is proposed in BIBREF25 to learn a joint latent space of topics and sentiments.",
"Our deep learning models bear a close resemblance to works focused on directional price forecasting BIBREF26 , BIBREF27 . In BIBREF26 , headline news are processed using Stanford OpenIE to generate triples that are fed into a Neural Tensor Network to create the final headline representation. In BIBREF27 , a character-level embedding is pre-trained in an unsupervised manner. The character embedding is used as input to a sequence model to learn the headline representation. Particularly, both works average all headline representations in a given day, rather than attempting to weight the most relevant ones. In this work, we propose a neural attention mechanism to capture the News Relevance and provide experimental evidence that it is a key component of the end-to-end learning process. Our attention extends the previous deep learning methods from BIBREF26 , BIBREF27 .",
"Despite the fact that end-to-end deep learning models have attained state-of-the-art performance, the large number of parameters make them prone to overfitting. Additionally, end-to-end models are trained from scratch requiring large datasets and computational resources. Transfer learning (TL) alleviates this problem by adapting representations learnt from a different and potentially weakly related source domain to the new target domain. For example, in computer vision tasks the convolutional features learnt from ImageNet BIBREF28 dataset (source domain) have been successfully transferred to multiple domain target tasks with much smaller datasets such as object classification and scene recognition BIBREF29 . In this work, we consider TL in our experiments for two main reasons. First, it address the question whether our proposed dataset is suitable for end-to-end training since the performance of the transferred representations can be compared with end-to-end learning. Second, it is still to be investigated which dataset transfers better to the forecasting problem. Recently, the NLP community has focused on universal representations of sentences BIBREF17 , BIBREF19 , which are dense representations that carry the meaning of a full sentence. BIBREF17 found that transferring the sentence representation trained on the Stanford Natural Language Inference (SNLI) BIBREF30 dataset achieves state-of-the-art sentence representations to multiple NLP tasks (e.g. sentiment analysis, question-type and opinion polarity). Following BIBREF17 , in this work, we investigate the suitability of SNLI and Reuters RCV1 BIBREF31 datasets to transfer learning to the volatility forecasting task. To the best of our knowledge, the hierarchical attention mechanism at headline level, proposed in our work, has not being applied to volatility prediction so far; neither has been investigated the ability to transfer sentence encoders from source datasets to the target forecasting problem (Transfer Learning)."
],
[
"Our corpus covers a broad range of news including news around earnings dates and complements the 10-K reports content. As an illustration, the headlines “Walmart warns that strong U.S. dollar will cost $15B in sales” and “Procter & Gamble Co raises FY organic sales growth forecast after sales beat” describe the company financial conditions and performance from the management point of view – these are also typical content present in Section 7 of the 10-K reports.",
"In this section, we describe the steps involved in compiling our dataset of financial news at stock level, which comprises a broad range of business sectors."
],
[
"The first step in compiling our corpus was to choose the constituents stocks. Our goal was to consider stocks in a broad range of sectors, aiming a diversified financial domain corpus. We found that Exchange Traded Funds (ETF) provide a mechanical way to aggregate the most relevant stocks in a given industry/sector. An ETF is a fund that owns assets, e.g. stock shares or currencies, but, unlike mutual funds are traded in stock exchanges. These ETFs are extremely liquid and track different investment themes. We decided to use SPDR Setcor Funds constituents stocks in our work since the company is the largest provider of sector funds in the United States. We included in our analysis the top 5 (five) sector ETFs by financial trading volume (as in Jan/2018). Among the most traded sectors we also filtered out the sectors that were similar to each other. For example, the Consumer Staples and Consumer Discretionary sectors are both part of the parent Consumer category. For each of the top 5 sectors we selected the top 10 holdings, which are deemed the most relevant stocks. tbl:stockuniverse, details our dataset sectors and its respective stocks."
],
[
"We assume that an individual stock news as the one that explicitly mention the stock name or any of its surface forms in the headline. As an illustration, in order to collect all news for the stock code PG, Procter & Gamble company name, we search all the headlines with any of these words: Procter&Gamble OR Procter and Gamble OR P&G. In this example, the first word is just the company name and the remaining words are the company surface forms.",
"We automatically derived the surface forms for each stock by starting with a seed of surface forms extracted from the DBpedia Knowledge Base (KB). We then applied the following procedure:",
"Relate each company name with the KB entity unique identifier.",
"Retrieve all values of the wikiPageRedirects property. The property holds the names of different pages that points to the same entity/company name. This step sets the initial seed of surface forms.",
"Manually, filter out some noisy property values. For instance, from the Procter & Glamble entity page we were able to automatically extract dbr:Procter_and_gamble and dbr:P_&_G, but had to manually exclude the noisy associations dbr:Female_pads and dbr:California_Natural.",
"The result of the steps above is a dictionary of surface forms $wd_{sc}$ ."
],
[
"Our corpus is built at stock code level by collecting headlines from the Reuters Archive. This archive groups the headlines by date, starting from 1 January 2007. Each headline is a html link (<a href> tag) to the full body of the news, where the anchor text is the headline content followed by the release time. For example, the page dated 16 Dec 2016 has the headline “Procter & Gamble appoints Nelson Peltz to board 5:26PM UTC”.",
"For each of the 50 stocks (5 sectors times 10 stocks per sector) selected using the criteria described in sub:corpussecstock, we retrieved all the headlines from the Reuters Archive raging from 01/01/2007 to 30/12/2017. This process takes the following steps:",
"For a given stock code ( $sc$ ) retrieve all surface forms $wd_{sc}$ .",
"For each day, store only the headlines content matching any word in $wd_{sc}$ . For each stored headline we also store the time and timezone.",
"Convert the news date and time to Eastern Daylight Time (EDT).",
"Categorize the news release time. We consider the following category set: {before market, during market , after market, holidays, weekends}. during market contains news between 9:30AM and 4:00PM. before market before 9:30AM and after market after 4:00PM.",
"The time categories prevents any misalignment between text and stock price data. Moreover, it prevents data leakage and, consequently, unrealistic predictive model performance. In general, news released after 4:00PM EDT can drastically change market expectations and the returns calculated using close to close prices as in the GARCH(1,1) model (see eq:closingreturn). Following BIBREF3 , to deal with news misalignment, news issued after 4:00PM (after market) are grouped with the pre-market (before market) on the following trading day.",
"tbl:stocktimecat shows the distribution of news per sector for each time category. We can see a high concentration of news released before the market opens (55% on average). In contrast, using a corpus compiled from message boards, a large occurrence of news during market hours was found BIBREF3 . This behaviour indicating day traders' activity. Our corpus comprise financial news agency headlines, a content more focused on corporate events (e.g. lawsuits, merges & acquisitions, research & development) and on economic news (see tbl:stockheadlinesexmaples for a sample of our dataset). These headlines are mostly factual. On the other hand, user-generated content such as Twitter and message boards (as in BIBREF3 , BIBREF4 ) tends to be more subjective.",
"U.S. macroeconomic indicators such as Retail Sales, Jobless Claims and GDP are mostly released around 8:30AM (one hour before the market opens). These numbers are key drivers of market activity and, as such, have a high media coverage. Specific sections of these economic reports impact several stocks and sectors. Another factor that contribute to the high activity of news outside regular trading hours are company earnings reports. These are rarely released during trading hours. Finally, before the market opens news agencies provide a summary of the international markets developments, e.g. the key facts during the Asian and Australian trading hours. All these factors contribute to the high concentration of pre-market news."
],
[
"We start this section by reviewing the GARCH(1,1) model, which is a strong benchmark used to evaluate our neural model. We then review the source datasets proposed in the literature that were trained independently and transfered to our volatility prediction model. Finally, we review the general architectures of sequence modelling and attention mechanisms."
],
[
"Financial institutions use the concept of “Value at risk” to measure the expected volatility of their portfolios. The widespread econometric model for volatility forecasting is the Generalized Autoregressive Conditional Heteroskedasticity (GARCH) BIBREF32 , BIBREF33 . Previous research shows that the GARCH(1,1) model is hard to beat. For example, BIBREF34 compared GARCH(1,1) with 330 different econometric volatility models showing that they are not significantly better than GARCH(1,1). Let $p_t$ be the price of an stock at the end of a trading period with closing returns $r_t$ given by ",
"$$r_t = \\frac{p_t}{p_{t-1}} - 1 $$ (Eq. 29) ",
"The GARCH process explicitly models the time-varying volatility of asset returns. In the GARCH(1,1) specification the returns series $r_t$ follow the process: ",
"$$r_t &= \\mu + \\epsilon _t \\\\\n\\epsilon _t &= \\sigma _t z_t \\\\\n\\sigma ^2_t &= a_0 + a_1 \\epsilon _{t-1}^2 + b_1 \\sigma _{t-1}^2$$ (Eq. 30) ",
"where $\\mu $ is a constant (return drift) and $z_t$ is a sequence of i.i.d. random variables with mean zero and unit variance. It is worth noting that although the conditional mean return described in eq:garchcondmean has a constant value, the conditional volatility $\\sigma _t$ is time-dependent and modeled by eq:att.",
"The one-step ahead expected volatility forecast can be computed directly from eq:garchcondvariance and is given by ",
"$$E_T[\\sigma _{T+1}^2] = a_0 + a_1 E_T[\\epsilon ^2] + b_1 E_T[\\sigma _{T}^2] $$ (Eq. 32) ",
"In general, the $t^{\\prime }$ -steps ahead expected volatility $E_T[\\sigma _{T+t^{\\prime }}^2]$ can be easily expressed in terms of the previous step expected volatility. It is easy to prove by induction that the forecast for any horizon can be represented in terms of the one-step ahead forecast and is given by ",
"$$E_T[\\sigma _{T+t^{\\prime }}^2] - \\sigma _u^2 = (a_1 + b_1)^{(t^{\\prime } -1)} \\left(E_T[\\sigma _{T+1}^2] - \\sigma _u^2\\right)$$ (Eq. 33) ",
"where $\\sigma _u$ is the unconditional volatility: ",
"$$\\sigma _u = \\sqrt{a_0 / (1 - a_1 - b_1)} $$ (Eq. 34) ",
"From the equation above we can see that for long horizons, i.e. $t^\\prime \\rightarrow \\infty $ , the volatility forecast in eq:forecastrecursive converges to the unconditional volatility in eq:unvar.",
"All the works reviewed in sec:introduction ( BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 ) consider GARCH(1,1) benchmark. However, given the long horizon of their predictions (e.g. quarterly or annual), the models are evaluated using the unconditional volatility $\\sigma _u$ in eq:unvar. In this work, we focus on the short-term volatility prediction and use the GARCH(1,1) one-day ahead conditional volatility prediction in eq:forecastoneperiod to evaluate our models.",
"Let $\\sigma _{t+1}$ denote the ex-post “true” daily volatility at a given time $t$ . The performance on a set with $N$ daily samples can be evaluated using the standard Mean Squared Error ( $MSE$ ) and Mean Absolute Error ( $MAE$ ) ",
"$$MSE &= \\frac{1}{N} \\sum _{t=1}^{N} \\left( E_t[\\sigma _{t+1}] - \\sigma _{t+1}\\right)^2 \\\\\nMAE &= \\frac{1}{N} \\sum _{t=1}^{N}\\left|E_t[\\sigma _{t+1}] - \\sigma _{t+1} \\right|$$ (Eq. 36) ",
"Additionally, following BIBREF35 , the models are also evaluated using the coefficient of determination $R^2$ of the regression ",
"$$\\sigma _{t+1} = a + b E_t[\\sigma _{t+1}] + e_t$$ (Eq. 37) ",
"where ",
"$$R^2 = 1 - \\frac{\\sum _{t=1}^{N}e^{2}_{t}}{\\sum _{t=1}^{N}\\left(E_t[\\sigma _{t+1}] - \\frac{1}{N} \\sum _{t=1}^{N}E_t[\\sigma _{t+1}]\\right)^{2}}$$ (Eq. 38) ",
"One of the challenges in evaluating GARCH models is the fact that the ex-post volatility $\\sigma _{t+1}$ is not directly observed. Apparently, the squared daily returns $r_{t+1}^{2}$ in eq:closingreturn could stand as a good proxy for the ex-post volatility. However, the squared returns yield very noisy measurements. This is a direct consequence of the term $z^t$ that connects the squared return to the latent volatility factor in eq:garchwhitenoise. The use of intraday prices to estimate the ex-post daily volayility was first proposed in BIBREF35 . They argue that volatility estimators using intraday prices is the proper way to evaluate the GARCH(1,1) model, as opposed to squared daily returns. For example, considering the Deutsche Mark the GARCH(1,1) model $R^2$ improves from $0.047$ (squared returns) to $0.33$ (intraday returns) BIBREF35 .",
"It is clear from the previous section that any volatility model evaluation using the noisy squared returns as the ex-post volatility proxy will lead to very poor performance. Therefore, high-frequency intraday data is fundamental to short-term volatility performance evaluation. However, intraday data is difficult to acquire and costly. Fortunately, there are statistically efficient daily volatility estimators that only depend on the open, high, low and close prices. These price “ranges” are widely available. In this section, we discuss these estimators.",
"Let $O_t$ , $H_t$ , $L_t$ , $C_t$ be the open, high, low and close prices of an asset in a given day $t$ . Assuming that the daily price follows a geometric Brownian motion with zero drift and constant daily volatility $\\sigma $ , Parkinson (1980) derived the first daily volatility estimator ",
"$$\\widehat{\\sigma _{PK,t}^2} = \\frac{\\ln \\left(\\frac{H_t}{L_t}\\right)^2}{4\\ln (2)} $$ (Eq. 41) ",
"which represents the daily volatility in terms of its price range. Hence, it contains information about the price path. Given this property, it is expected that $\\sigma _{PK}$ is less noisy than the volatility calculated using squared returns. The Parkinson's volatility estimator was extended by Garman-Klass (1980) which incorporates additional information about the opening ( $O_t$ ) and closing ( $C_t$ ) prices and is defined as ",
"$$\\widehat{\\sigma _{GK,t}^{2}} = \\frac{1}{2} \\ln \\left(\\frac{H_t}{L_t}\\right)^2 - (2\\ln (2) - 1) \\ln \\left(\\frac{C_t}{O_t}\\right)^2 $$ (Eq. 42) ",
"The relative noisy of different estimators $\\hat{\\sigma }$ can be measured in terms of its relative efficiency to the daily volatility $\\sigma $ and is defined as ",
"$$e\\left(\\widehat{\\sigma ^{2}}, \\sigma ^2\\right) \\equiv \\frac{Var[\\sigma ^2]}{Var[\\widehat{\\sigma ^{2}}]}$$ (Eq. 43) ",
"where $Var[\\cdot ]$ is the variance operator. It follows directly from eq:garchwhitenoise that the squared return has efficiency 1 and therefore, very noisy. BIBREF36 reports Parkinson ( $\\widehat{\\sigma _{PK,t}^2}$ ) volatility estimator has 4.9 relative efficiency and Garman-Klass ( $\\widehat{\\sigma _{GK,t}^2}$ ) 7.4. Additionally, all the described estimators are unbiased.",
"Many alternative estimators to daily volatility have been proposed in the literature. However, experiments in BIBREF36 rate the Garman-Klass volatility estimator as the best volatility estimator based only on open, high, low and close prices. In this work, we train our models to predict the state-of-the-art Garman-Klass estimator. Moreover, we evaluate our models and GARCH(1,1) using the metrics described in sub:evalution, but with the appropriate volatility proxies, i.e. Parkinson and Garman-Klass estimators."
],
[
"Vector representations of words, also known as Word embeddings BIBREF21 , BIBREF37 , that represent a word as a dense vector has become the standard building blocks of almost all NLP tasks. These embeddings are trained on large unlabeled corpus and are able to capture context and similarity among words.",
"Some attempts have been made to learn vector representations of a full sentence, rather than only a single word, using unsupervised approaches similar in nature to word embeddings. Recently, BIBREF17 showed state-of-the-art performance when a sentence encoder is trained end-to-end on a supervised source task and transferred to other target tasks. Inspired by this work, we investigate the performance of sentence encoders trained on the Text categorization and Natural Language Inference (NLI) tasks and use these encoders in our main short-term volatility prediction task.",
"A generic sentence encoder $S_e$ receives the sentence words as input and returns a vector representing the sentence. This can be expressed as a mapping ",
"$$S_e \\colon \\mathbb {R}^{T^{S} \\times d_w} \\rightarrow \\mathbb {R}^{d_S}$$ (Eq. 45) ",
"from a variable size sequence of words to a sentence vector $S$ of fixed-size $d_S$ , where $T^{S}$ is the sentence number of words and $d_w$ is the pre-trained word embedding dimension.",
"In the following sections, we describe the datasets and architectures to train the sentence encoders of the auxiliary transfer learning tasks.",
"The Reuters Corpus Volume I (RCV1) is corpus containing 806,791 news articles in the English language collected from 20/08/1996 to 19/08/1997 BIBREF31 . The topic of each news was human-annotated using a hierarchical structure. At the top of the hierarchy, lies the coarse-grained categories: CCAT (Corporate), ECAT (Economics), GCAT (Government), and MCAT (Markets). A news article can be assigned to more than one category meaning that the text categorization task is mutilabel. Each news is stored in a separate XML file. lst:rcv1xmlexample shows the typical structure of an article.",
"<?xml version=\"1.0\" encoding=\"iso-8859-1\" ?>",
"<newsitem itemid=\"6159\" id=\"root\" date=\"1996-08-21\" xml:lang=\"en\">",
"<headline>Colombia raises internal coffee price.</headline>",
"<dateline>BOGOTA 1996-08-21</dateline>",
"<copyright>(c) Reuters Limited 1996</copyright>",
"<metadata>",
"<codes class=\"bip:topics:1.0\">",
" <code code=\"C13\">",
" <editdetail attribution=\"Reuters BIP Coding Group\" action=\"confirmed\" date=\"1996-08-21\"/>",
" </code>",
" <code code=\"C31\">",
" <editdetail attribution=\"Reuters BIP Coding Group\" action=\"confirmed\" date=\"1996-08-21\"/>",
" </code>",
" <code code=\"CCAT\">",
" <editdetail attribution=\"Reuters BIP Coding Group\" action=\"confirmed\" date=\"1996-08-21\"/>",
" </code>",
" <code code=\"M14\">",
" <editdetail attribution=\"Reuters BIP Coding Group\" action=\"confirmed\" date=\"1996-08-21\"/>",
" </code>",
" <code code=\"M141\">",
" <editdetail attribution=\"Reuters BIP Coding Group\" action=\"confirmed\" date=\"1996-08-21\"/>",
" </code>",
" <code code=\"MCAT\">",
" <editdetail attribution=\"Reuters BIP Coding Group\" action=\"confirmed\" date=\"1996-08-21\"/>",
" </code>",
"</codes>",
"</metadata>",
"</newsitem>",
"The RCV1 dataset is not released with a standard train, validation, test split. In this work, we separated 15% of samples as a test set for evaluation purposes. The remaining samples were further split leaving 70% and 15% for training and validation, respectively.",
"Regarding the categories distribution, we found that, from the original 126 categories, 23 categories were never assigned to any news; therefore, were disregarded. From the 103 classes left we found a high imbalance among the labels with a large number of underrepresented categories having less than 12 samples. The very low number of samples for these minority classes brings a great challenge to discriminate the very fine-grained categories. Aiming to alleviate this problem, we grouped into a same class all categories below the second hierarchical level. For example, given the root node CCAT (Corporate) we grouped C151 (ACCOUNTS/EARNINGS), C1511 (ANNUAL RESULTS) and C152 (COMMENT/FORECASTS) into the direct child node C15 (PERFORMANCE). Using this procedure the original 103 categories where reduced to 55. One of the benefits of this procedure was that the less represented classes end up having around thousand samples compared with only 12 samples in the original dataset.",
"fig:rcv1arch, shows the architecture for the end-to-end text categorization task. On the bottom of the architecture $S_e$ receives word embeddings and outputs a sentence vector $S$ . The $S$ vector pass through a fully connected (FC) layer with sigmoid activation function that outputs a vector $\\hat{y} \\in \\mathbb {R}^{55}$ with each element $\\hat{y}_j \\in [0,1]$ .",
"The architecture described above is trained under the assumption that each category is independent but not mutually exclusive since a sample can have more than one category assigned (multilabel classification). The loss per sample is the average log loss across all labels: ",
"$$\\mathcal {L}(\\hat{y}, y) = - \\sum _{i=1}^{55}\\left( y_i \\log (\\hat{y}_i) + (1-y_{i}) \\log (1-\\hat{y}_{i}) \\right)$$ (Eq. 48) ",
"where the index $i$ runs over the elements of the predicted and true vectors.",
"Given the high categories imbalance, during the training we monitor the $F_1$ metric of the validation set and choose the model with the highest value.",
"Stanford Natural Language Inference (SNLI) dataset BIBREF30 consist of 570,000 pairs of sentences. Each pair has a premise and a hypothesis, manually labeled with one of the three labels: entailment, contradiction, or neutral. The SNLI has many desired properties. The labels are equally balanced, as opposed to the RCV1 dataset. Additionally, language inference is a complex task that requires a deeper understanding of the sentence meaning making this dataset suitable for learning supervised sentence encoders that generalize well to other tasks BIBREF17 . tbl:snliexmaples, shows examples of SNLI dataset sentence pairs and its respective labels.",
"In order to learn sentence encoders that can be transfered to other tasks unambiguously, we consider a neural network architecture for the sentence encoder with shared parameters between the premise and hypothesis pairs as in BIBREF17 .",
"fig:snliarch, describes the neural network architecture. After each premise and hypothesis is encoded into $S_p$ and $S_h$ , respectively, we have a fusion layer. This layer has no trainable weights and just concatenate each sentence embedding. Following BIBREF17 , we add two more matching methods: the absolute difference $\\vert S_p - S_h \\vert $ and the element-wise $S_p \\odot S_h$ . Finally, in order to learn the pair representation, $S_ph$ is feed into and FC layer with rectified linear unit (ReLU) activation function, which is expressed as $f(x) = \\log (1 + e^x)$ . The last softmax layer outputs the probability of each class.",
"Finally, the NLI classifier weights are optimized in order to minimize the categorical log loss per sample ",
"$$\\mathcal {L}(\\hat{y}, y) = - \\sum _{j=1}^{3}y_i \\log (\\hat{y}_i)$$ (Eq. 52) ",
"During the training, we monitor the validation set accuracy and choose the model with the highest metric value."
],
[
"We start this section by reviewing the Recurrent Neural Network (RNN) architecture and its application to encode a sequence of words.",
"RNN's are capable of handling variable-length sequences, this being a direct consequence of its recurrent cell, which shares the same parameters across all sequence elements. In this work, we adopt the Long Short-Term Memory (LSTM) cell BIBREF38 with forget gates $f_t$ BIBREF39 . The LSTM cell is endowed with a memory state that can learn representations that depend on the order of the words in a sentence. This makes LSTM more fit to find relations that could not be captured using standard bag-of-words representations.",
"Let $x_1, x_2, \\cdots , x_T$ be a series of observations of length $T$ , where $x_t \\in \\mathbb {R}^{d_w}$ . In general terms, the LSTM cell receives a previous hidden state $h_{t-1}$ that is combined with the current observation $x_t$ and a memory state $C_t$ to output a new hidden state $h_t$ . This internal memory state $C_{t}$ is updated depending on its previous state and three modulating gates: input, forget, and output. Formally, for each step $t$ the updating process goes as follows (see fig:lstmcell for a high level schematic view): First, we calculate the input $i_t$ , forget $T$0 , and output $T$1 gates: ",
"$$i_t &= \\sigma _s\\left(W_i x_t + U_i h_{t-1} + b_i\\right) \\\\\nf_t &= \\sigma _s\\left(W_f x_t + U_f h_{t-1} + b_f\\right) \\\\\no_t &= \\sigma _s\\left(W_o x_t + U_o h_{t-1} + b_o\\right)$$ (Eq. 54) ",
" where $\\sigma _s$ is the sigmoid activation. Second, a candidate memory state $\\widetilde{C}_t$ is generated: ",
"$$\\widetilde{C}_t = \\tanh \\left(W_c x_t + U_c h_{t-1} + b_c\\right)$$ (Eq. 55) ",
"Now we are in a position to set the final memory state $C_t$ . Its value is modulated based on the input and forget gates of eq:inputforgetgates and is given by: ",
"$$C_t = i_t \\odot \\widetilde{C}_t + f_t \\odot C_{t-1}$$ (Eq. 56) ",
"Finally, based on the memory state and output gate of eq:inputforgetgates, we have the output hidden state ",
"$$h_t = o_t \\odot \\tanh \\left(C_t\\right)$$ (Eq. 57) ",
"Regarding the trainable weights, let $n$ be the LSTM cell number of units. It follows that $W$ 's and $U$ 's matrices of the affine transformations have ${n \\times d_w}$ and ${n \\times n}$ dimensions, respectively. Its bias terms $b$ 's are vectors of size $n$ . Consequently, the total number of parameters is $4 (n d_w + n^2 + n)$ and does not depend on the sequence number of time steps $T$ .",
"We see that the LSTM networks are able to capture temporal dependencies in sequences of arbitrary length. One straightforward application is to model the Sentence encoder discussed in sec:transferlearning, which outputs a sentence vector representation using its words as input.",
"Given a sequence of words $\\left\\lbrace w_t\\right\\rbrace _{t=1}^{T}$ we aim to learn the words hidden state $\\left\\lbrace h_t\\right\\rbrace _{t=1}^{T}$ in a way that each word captures the influence of its past and future words. The Bidirectional LSTM (BiLSTM) proposed in BIBREF40 is an LSTM that “reads” a sentence, or any sequence in general, from the beginning to the end (forward) and the other way around (backward). The new state $h_t$ is the concatenation ",
"$$h_t = [\\overrightarrow{h_t}, \\overleftarrow{h_t}]$$ (Eq. 59) ",
"where ",
"$$\\overrightarrow{h_t} &= \\text{LSTM}\\left(w_1, \\cdots , w_T\\right) \\\\\n\\overleftarrow{h_t} &= \\text{LSTM}\\left(w_T, \\cdots , w_1\\right) \\\\$$ (Eq. 60) ",
"Because sentences have different lengths, we need to convert the $T$ concatenated hidden states of the BiLSTM into a fixed-length sentence representation. One straightforward operation is to apply any form of pooling. Attention mechanism is an alternative approach where the sentence is represented as an weighted average of hidden states where the weights are learnt end-to-end.",
"In the next sections we describe the sentence encoders using pooling and attention layers.",
"The max-pooling layer aims to extract the most salient word features all over the sentence. Formally, it outputs a sentence vector representation $S_{MP} \\in \\mathbb {R}^{2n}$ such that ",
"$$S_{MP} = \\max _{t=1}^{T} h_t$$ (Eq. 62) ",
"where $h_t$ is defined in eq:htconcat and the $\\max $ operator is applied over the time steps dimension. fig:bilstmmaxpool illustrates the BiLSTM max-pooling (MP) sentence encoder.",
"The efficacy of the max-pooling layer was assessed in many NLP studies. BIBREF41 employed a max-pooling layer on top of word representations and argues that it performs better than mean pooling. Experimental results in BIBREF17 show that among three types of pooling (max, mean and last) the max-pooling provides the most universal sentence representations in terms of transferring performance to other tasks. Grounded on these studies, in this work, we choose the BiLSTM max-pooling as our pooling layer of choice.",
"Attention mechanisms were introduced in the deep learning literature to overcome some simplifications imposed by pooling operators. When we humans read a sentence, we are able to spot its most relevant parts in a given context and disregard information that is redundant or misleading. The attention model aims to mimic this behaviour.",
"Attention layers were proposed for different NLP tasks. For example, NLI, with cross-attention between premise and hypothesis, Question & Answering and Machine Translation (MT). Specifically in the Machine Translation task, each word in the target sentence learns to attend the relevant words of the source sentence in order to generate the sentence translation.",
"A sentence encoder with attention (or self-attentive) BIBREF42 , BIBREF43 , BIBREF44 assigns different weights to the own words of the sentence; therefore, converting the hidden states into a single sentence vector representation.",
"Considering the word hidden vectors set $\\lbrace h_1, \\cdots , h_T\\rbrace $ where $h_t \\in \\mathbb {R}^n$ , the attention mechanism is defined by the equations: ",
"$$\\tilde{h}_t &= \\sigma \\left(W h_t + b \\right) \\\\\n\\alpha _{t} &= \\frac{\\exp ({v^{\\intercal } \\cdot \\tilde{h}_t} )}{\\sum _{t} \\exp ({v \\cdot \\tilde{h}_t})} \\\\\nS_{A_w} &= \\sum _{t} \\alpha _{t} h_t$$ (Eq. 66) ",
" where $W \\in \\mathbb {R}^{d_a \\times n}$ , $b \\in \\mathbb {R}^{d_a \\times 1}$ , and $v \\in \\mathbb {R}^{d_a \\times 1}$ are trainable parameters.",
"We can see that the sentence representation $S_{A_w}$ is a weighted average of the hidden states. fig:bilstminneratt provides a schematic view of the BiLSTM attention, where we can account the attention described in eq:att as a two layer model with a dense layer ( $d_a$ units) followed by another dense that predicts $\\alpha _t$ (single unit)."
],
[
"In this section, we first introduce our problem in a deep multimodal learning framework. We then present our neural architecture, which is able to address the problems of news relevance and novelty. Finally, we review the methods applied to learn commonalities between stocks (global features)."
],
[
"Our problem is to predict the daily stock volatility. As discussed in subsub:rangevolestimators, the Gaman-Klass estimator $\\widehat{\\sigma _{GK,t}}$ in eq:volgk is a very efficient short-term volatility proxy, thus, it is adopted as our target variable.",
"Our goal is to learn a mapping between the next day volatility $\\sigma _{t+1}$ and historical multimodal data available up to day $t$ . To this aim, we use a sliding window approach with window size $T$ . That is, for each stock $sc$ a sample on day $t$ is expressed as a sequence of historical prices $P^{sc}_t$ and corpus headlines $N^{sc}_t$ . The price sequence is a vector of Daily Prices (DP) and expressed as ",
"$$P^{sc}_t = \\left[DP^{sc}_{t-T}, DP^{sc}_{t-T+1}, \\cdots , DP^{sc}_t \\right]$$ (Eq. 69) ",
"where $DP^{sc}_{t^{\\prime }}$ is a vector of price features. In order to avoid task-specific feature engineering, the daily price features are expressed as the simple returns: ",
"$$DP^{sc}_t = \\left[ \\frac{O^{sc}_{t}}{C^{sc}_{t-1}} - 1, \\frac{H^{sc}_{t}}{C^{sc}_{t-1}} - 1, \\frac{L^{sc}_{t}}{C^{sc}_{t-1}} - 1, \\frac{C^{sc}_{t}}{C^{sc}_{t-1}} - 1 \\right]$$ (Eq. 70) ",
"The sequence of historical corpus headlines $N^{sc}_t$ is expressed as ",
"$$N^{sc}_t = \\left[n^{sc}_{t-T}, n^{sc}_{t-T+1}, \\cdots , n^{sc}_{t} \\right]$$ (Eq. 71) ",
"where $n^{sc}_{t^{\\prime }}$ is a set containing all headlines that influence the market on a given day $t^{\\prime }$ .",
"Aiming to align prices and news modes, we consider the explicit alignment method discussed in subsec:stockheadlines. That is, $n^{sc}_{t^{\\prime }}$ contains all stock headlines before the market opens ( $\\texttt {before market}_{t}$ ), during the trading hours",
"( $\\texttt {during market}_{t}$ ), and previous day after-markets",
"( $\\texttt {after market}_{t-1}$ ).",
"As a text preprocessing step, we tokenize the headlines and convert each word to an integer that refers to its respective pre-trained word embedding. This process is described as follows: First, for all stocks of our corpus we tokenize each headline and extract the corpus vocabulary set $V$ . We then build the embedding matrix $E_w \\in \\mathbb {R}^{\\vert V \\vert \\times d_w}$ , where each row is a word embedding vector $d_w$ dimensions. Words that do not have a corresponding embedding, i.e. out of vocabulary words, are skipped.",
"Finally, the input sample of the text mode is a tensor of integers with $T \\times l_n \\times l_s$ dimensions, where $l_n$ is the maximum number of news occurring in a given day and $l_s$ is the maximum length of a corpus sentence. Regarding the price mode, we have a $T \\times 4$ tensor of floating numbers."
],
[
"Given the price and news histories for each stock $sc$ we could directly learn one model per stock. However, this approach suffers from two main drawbacks. First, the market activity of one specific stock is expected to impact other stocks, which is a widely accepted pattern named “spillover effect”. Second, since our price data is sampled on a daily basis, we would train the stock model relying on a small number of samples. One possible solution to model the commonality among stocks would be feature enrichment. For example, when modeling a given stock $X$ we would enrich its news and price features by concatenating features from stock $Y$ and $Z$ . Although the feature enrichment is able to model the effect of other stocks, it still would consider only one sample per day.",
"In this work, we propose a method that learns an global model.",
"The global model is implemented using the following methods:",
"Multi-Stock batch samples: Since our models are trained using Stochastic Gradient Descent, we propose at each mini-batch iteration to sample from a batch set containing any stock of our stocks universe. As a consequence, the mapping between volatility and multimodal data is now able to learn common explanatory factors among stocks. Moreover, adopting this approach increases the total number of training samples, which is now the sum of the number of samples per stock.",
"Stock Embedding: Utilizing the Multi-Stock batch samples above, we tackle the problem of modeling commonality among stocks. However, it is reasonable to assume that stocks have part of its dynamic driven by idiosyncratic factors. Nevertheless, we could aggregate stocks per sector or rely on any measure of similarity among stocks. In order to incorporate information specific to each stock, we propose to equip our model with a “stock embedding” mode that is learnt jointly with price and news modes. That is to say, we leave the task of distinguishing the specific dynamic of each stock to be learnt by the neural network. Specifically, this stock embedding is modeled using a discrete encoding as input, i.e. $\\mathcal {I}^{sc}_t$ is a vector with size equal to the number of stocks of the stocks universe and has element 1 for the i-th coordinate and 0 elsewhere, thus, indicating the stock of each sample.",
"Formally, we can express the one model per stock approach as the mapping ",
"$$\\begin{split}\n\\sigma ^{sc}_{t+1} = f^{sc} ( DN^{sc}_{t-T}, DN^{sc}_{t-T+1}, \\cdots , DN^{sc}_t ; \\\\\nDP^{sc}_{t-T}, DP^{sc}_{t-T+1}, \\cdots , DP^{sc}_t )\n\\end{split}$$ (Eq. 75) ",
"where $DN^{sc}_{t^{\\prime }}$ is a fixed-vector representing all news released on a given day for the stock $sc$ and $DP^{sc}_{t^{\\prime }}$ is defined in eq:pricemodevec.",
"The global model attempts to learn a single mapping $f$ that at each mini-batch iteration randomly aggregates samples across all the universe of stocks, rather than one mapping $f^{sc}$ per stock. The global model is expressed as ",
"$$\\begin{split}\n\\sigma ^{sc}_{t+1} = f ( DN^{sc}_{t-T}, DN^{sc}_{t-T+1}, \\cdots , DN^{sc}_t ; \\\\\nDP^{sc}_{t-T}, DP^{sc}_{t-T+1}, \\cdots , DP^{sc}_t ; \\\\\n\\mathcal {I}^{sc}_t)\n\\end{split}$$ (Eq. 77) ",
"In the next section, we describe our hierarchical neural model and how the news, price and stock embedding are fused into a joint representation."
],
[
"In broad terms, our hierarchical neural architecture is described as follows. First, each headline released on a given day $t$ is encoded into a fixed-size vector $S_t$ using a sentence encoder. We then apply our daily New Relevance Attention (NRA) mechanism that attends each news based on its content and converts a variable size of news released on a given day into a single vector denoted by Daily News ( $DN$ ). We note that this representation take account of the overall effect of all news released on a given day. This process is illustrated in fig:DNencoder. We now are in a position to consider the temporal effect of the past $T$ days of market news and price features. fig:nntimeseriesarch illustrates the neural network architecture from the temporal sequence to the final volatility prediction. For each stock code $sc$ the temporal encoding for news is denoted by Market News $MN^{sc}_t$ and for the price by Market Price $MP^{sc}_t$ and are a function of the past $T$ Daily News representations ${\\lbrace DN^{sc}_{t-T}, \\cdots , DN^{sc}_t \\rbrace }$ (Text mode) and Daily Prices features $S_t$0 (Price mode), where each Daily Price $S_t$1 feature is given by eq:pricemodevec and the $S_t$2 representation is calculated using Daily New Relevance Attention. After the temporal effects of $S_t$3 past days of market activity were already encoded into the Market News $S_t$4 and Market Price $S_t$5 , we concatenate feature-wise $S_t$6 , $S_t$7 and the Stock embedding $S_t$8 . The stock embedding $S_t$9 represents the stock code of the sample on a given day $t$ . Finally, we have a Fully Connected (FC) layer that learns the Joint Representation of all modes. This fixed-sized joint representation is fed into a FC layer with linear activation that predicts the next day volatility $\\hat{\\sigma }_{t+1}$ .",
"Below, we detail, for each mode separately, the layers of our hierarchical model.",
"– Text mode",
"Word Embedding Retrieval",
"Standard embedding layer with no trainable parameters. It receives a vector of word indices as input and returns a matrix of word embeddings.",
"News Encoder",
"This layer encodes all news on a given day and outputs a set news embeddings $\\lbrace S^{1}_t, \\cdots , S^{l_n}_t \\rbrace $ . Each encoded sentence has dimension $d_S$ , which is a hyperparameter of our model. This layer constitutes a key component of our neural architectures and, as such, we evaluate our models considering sentence encoders trained end-to-end, using the BiLSTM attention (subsec:bilstminneratt) and BiLSTM max-pooling (subsec:bilstmmaxpool) architectures, and also transferred from the RCV1 and SNLI as fixed features.",
"Daily news relevance attention",
"Our proposed news relevance attention mechanism for all news released on a given day. The attention mechanism is introduced to tackle information overload. It was designed to “filter out” redundant or misleading news and focus on the relevant ones based solely on the news content. Formally, the layer outputs a Daily News (DN) embedding $DN^{sc}_t = \\sum _{i=1}^{l_n} \\beta _i S^{sc^{i}}_t$ , which is a linear combination of all encoded news on a given day $t$ . This news-level attention uses the same equations as in eq:att, but with trainable weights $\\lbrace W_{R}, b_{R}, v_{R}\\rbrace $ , i.e. the weights are segregated from the sentence encoder. fig:DNencoder, illustrates our relevance attention. Note that this layer was deliberately developed to be invariant to headlines permutation, as is the case with the linear combination formula above. The reason is that our price data is sampled daily and, as a consequence, we are not able to discriminate the market reaction for each intraday news.",
"News Temporal Context",
" Sequence layer with daily news embeddings $DN^{sc}_t$ as time steps. This layer aims to learn the temporal context of news, i.e. the relationship between the news at day $t$ and the $T$ past days. It receives as input a chronologically ordered sequence of $T$ past Daily News embeddings ${\\lbrace DN^{sc}_{t-T}, \\cdots , DN^{sc}_t \\rbrace }$ and outputs the news mode encoding Market News $MN^{sc}_t \\in d_{MN}$ . The sequence with $T$ time steps is encoded using a BiLSTM attention. The layer was designed to capture the temporal order that news are released and the current news novelty. i.e. news that were repeated in the past can be “forgotten” based on the modulating gates of the LSTM network.",
"– Price mode",
"Price Encoder",
"Sequence layer analogous to News Temporal Context, but for the price mode. The input is the ordered sequence Daily Prices ${\\lbrace DP^{sc}_{t-T}, \\cdots , DP^{sc}_t \\rbrace }$ of size $T$ , where each element the price feature defined in eq:pricemodevec. Particularly, the architecture consists of two stacked LSTM's. The first one outputs for each price feature time step a hidden vector that takes the temporal context into account. Then these hidden vectors are again passed to a second independent LSTM. The layer outputs the price mode encoding Market Price $MP^{sc}_t \\in d_{MP}$ . This encoding is the last hidden vector of the second LSTM Market.",
"– Stock embedding",
"Stock Encoder",
"Stock dense representation. The layer receives the discrete encoding $\\mathcal {I}^{sc}_t$ indicating the sample stock code pass through a FC layer and outputs a stock embedding $E_{sc}$ .",
"– Joint Representation",
"Merging",
"Feature-wise News, Price, and Stock modes concatenation. No trainable parameters.",
"Joint Representation Encoder",
"FC layer of size $d_{JR}$ ."
],
[
"During the training we feed into our neural model the price, news, and stock indicator data. The price and stock indicator modes data occur in all days. However, at the individual stock level we can have days that the company is not covered by the media. This feature imposes challenges to our multimodal training since neural networks are not able to handle missing modes without special intervention. A straightforward solution would be to consider only days with news released, disregarding the remaining samples. However, this approach has two main drawbacks. First, the “missing news” do not happen at random, or are attributed to measurement failure as is, for example, the case of multimodal tasks using mechanical sensors data. Conversely, as highlighted in BIBREF7 , BIBREF8 the same price behaviour results in distinct market reactions when accompanied or not by news. In other words, specifically to financial forecasting problems the absence or existence of news are highly informative.",
"Some methods were proposed in the multimodal literature to effectively treat informative missing modes or “informative missingness”, which is a characteristic refereed in the literature as learning with missing modalities BIBREF22 . In this work, we directly model the news missingness as a feature of our text model temporal sequence by using the method initially proposed in BIBREF45 , BIBREF46 for clinical data with missing measurements and applied in the context of financial forecasting in BIBREF47 . Specifically, we implement the Zeros & Imputation (ZI) method BIBREF46 in order to jointly learn the price mode and news relationship across all days of market activity.",
"The ZI implementation is described as follows: Before the daily news sequence is processed by the text temporal layer (described in itm:newstclayer) we input a 0 vector for all time steps with missing news and leave the news encoding unchanged otherwise. This step is called zero imputation. In addition, we concatenate feature-wise an indicator vector with value 1 for all vectors with zero imputation and 0 for the days with news.",
"As described in BIBREF47 , the ZI method endow a temporal sequence model with the ability to learn different representations depending on the news history and its relative time position. Moreover, it allows our model to predict the volatility for all days of our time series and, at the same time, take into account the current and past news informative missingness. Furthermore, the learnt positional news encoding works differently than a typical “masking”, where days without news are not passed through the LSTM cell. Masking the time steps would be losing information about the presence or absence of news concomitant with prices."
],
[
"We aim to evaluate our hierarchical neural model in the light of three main aspects. First, we asses the importance of the different sentence encoders to our end-to-end models and how it compares to transferring the sentence encoder from our two auxiliary TL tasks. Second, we ablate our proposed news relevance attention (NRA) component to evaluate its importance. Finally, we consider a model that takes into consideration only the price mode (unimodal), i.e. ignoring any architecture related to the text mode.",
"Before we define the baselines to asses the three aspects described above, we review in the next section the scores of the trained TL tasks."
],
[
"This section reports the performance of the auxiliary TL tasks considered in this work. Our ultimate goal is to indicate that our scores are in line with previous works All the architectures presented in sec:transferlearning are trained for a maximum of 50 epochs using mini-batch SGD with Adam optimizer BIBREF48 . Moreover, at the end of each epoch, we evaluate the validation scores, which are accuracy (Stanfor SNLI dataset) and F1 (RCV1 dataset), and save the weights with the best values. Aiming to seeped up training, we implement early stopping with patience set to 8 epochs. That is, if the validation scores do not improve for more than 10 epochs we halt the training. Finally, we use Glove pre-trained word embeddings BIBREF37 as fixed features.",
"tbl:tlevaluation compares our test scores with state-of-the-art (SOTA) results reported in previous works. We can see that our scores for the SNLI task are very close to state-of-the-art.",
"Regarding the RCV1 dataset, our results consider only the headline content for training, while the refereed works consider both the news headline and message body. The reason for training using only the headlines is that both tasks are learnt with the sole purpose of transferring the sentence encoders to our main volatility prediction task, whose textual input is restricted to headlines."
],
[
"During the training of our hierarchical neural model described in sub:HAN we took special care to guard against overfitting. To this aim, we completely separate 2016 and 2017 as the test set and report our results on this “unseen” set. The remaining data is further split into training (2007 to 2013) and validation (2014 to 2015). The model convergence during training is monitored in the validation set. We monitor the validation score of our model at the end of each epoch and store the network weights if the validation scores improves between two consecutive epochs. Additionally, we use mini-batch SGD with Adam optimizer and early stopping with patience set to eight epochs. The hyperparameter tunning is performed using grid search.",
"All training is performed using the proposed global model approach described in sub:globalmodel, which learns a model that takes into account the features of all the 40 stocks of our corpus. Using this approach our training set has a total of 97,903 samples. Moreover, during the SGD mini-batch sampling the past $T$ days of price and news history tensors and each stock sample stock indicator are randomly selected from the set of all 40 stocks."
],
[
"In order to evaluate the contributions of each component of our neural model described in sub:HAN and the effect of using textual data to predict the volatility, we report our results using the following baselines:",
"- News (unimodal price only): This baseline completely ablates (i.e. removes) any architecture related to the news mode, considering only the price encoding and the stock embedding components. Using this ablation we aim to evaluate the influence of news to the volatility prediction problem.",
"+ News (End-to-end Sentence Encoders) - NRA: This baseline ablates our proposed new relevance attention (NRA) component, and instead, makes use of the same Daily Averaging method in BIBREF26 , BIBREF27 , where all fixed-sized headline representations on a given day are averaged without taking into account the relevance of each news. We evaluate this baseline for both BiLSTM attention (Att) and BiLSTM max-pooling (MP) sentence encoders. Here, our goal is to asses the true contribution of our NRA component in the case SOTA sentence encoders are taken into account.",
"+ News (End-to-End W-L Att Sentence Encoder) + NRA: The Word-Level Attention (W-L Att) sentence encoder implements an attention mechanism directly on top of word embeddings, and, as such, does not consider the order of words in a sentence. This baseline complements the previous one, i.e. it evaluates the influence of the sentence encoder when our full specification is considered.",
"+ News (TL Sentence Encoders) + NRA: Makes use of sentence encoders of our two auxiliary TL tasks as fixed features. This baseline aims to address the following questions, namely: What dataset and models are more suitable to transfer to our specific volatility forecasting problem; How End-to-End models, which are trained on top of word embeddings, perform compared to sentence encoders transferred from other tasks.",
"tbl:comparativeallsectors summarizes the test scores for the ablations discussed above. Our best model is the + News (BiLSTM Att) + NRA, which is trained end-to-end and uses our full architecture. The second best model, i.e. + News (BiLSTM MP) + NRA, ranks slightly lower and only differs form the best model in terms of the sentence encoder. The former sentence encoder uses an attention layer (subsec:bilstminneratt) and the the last a max-pooling layer (subsec:bilstmmaxpool), where both layers are placed on top of the LSTM hidden states of each word.",
"Importantly, our experiments show that using news and price (multimodal) to predict the volatility improves the scores by 11% (MSE) and 9% (MAE) when compared with the – News (price only unimodal) model that considers only price features as explanatory variables.",
"When comparing the performance of End-to-End models and the TL auxiliary tasks the following can be observed: The end-to-end models trained with the two SOTA sentence encoders perform better than transferring sentence encoder from both auxiliary tasks. However, our experiments show that the same does not hold for models trained end-to-end relying on the simpler WL-Att sentence encoder, which ignores the order of words in a sentence. In other words, considering the appropriate TL task, it is preferable to transfer a SOTA sentence encoder trained on a larger dataset than learning a less robust sentence encoder in an end-to-end fashion. Moreover, initially, we thought that being the RCV1 a financial domain corpus it would demonstrate a superior performance when compared to the SNLI dataset. Still, the SNLI transfers better than RCV1. We hypothesize that the text categorization task (RCV1 dataset) is not able to capture complex sentence structures at the same level required to perform natural language inference. Particularly to the volatility forecasting problem, our TL results corroborates the same findings in BIBREF17 , where it was shown that SNLI dataset attains the best sentence encoding for a broad range of pure NLP tasks, including, among other, text categorization and sentiment analysis.",
"Significantly, experimental results in tbl:comparativeallsectors clearly demonstrate that our proposed news relevance attention (NRA) outperforms the News Averaging method proposed in previous studies BIBREF26 , BIBREF27 . Even when evaluating our NRA component in conjunction with the more elementary W-L Att sentence encoder it surpass the results of sophisticated sentence encoder using a News Averaging approach. In other words, our results strongly points to the advantage of discriminating noisy from impacting news and the effectiveness of learning to attend the most relevant news.",
"Having analyzed our best model, we now turn to its comparative performance with respect to the widely regarded GARCH(1,1) model described in sec:GARCH.",
"We asses our model performance relative to GARCH(1,1) using standard loss metrics (MSE and MAE) and the regression-based accuracy specified in eq:regressionloss and measured in terms of the coefficient of determination $R^2$ . In addition, we evaluate our model across two different volatility proxies: Garman-Klass ( $\\widehat{\\sigma _{GK}}$ ) (eq:volgk) and Parkinson ( $\\widehat{\\sigma _{PK}}$ ) (eq:volpk). We note that, as reviewed in sub:evalution, these two volatility proxies are statically efficient and proper estimators of the next day volatility.",
"tbl:garchallsectors reports the comparative performance among our best Price + News model (+ News BiLSTM (MP) + NRA), our Price only (unimodal) model and GARCH(1,1). The results clearly demonstrate the superiority of our model, being more accurate than GRACH for both volatility proxies. We note that evaluating the GARCH(1,1) model relying on standard MSE and MAE error metrics should be taken with a grain of salt. BIBREF35 provides the background theory and arguments supporting $R^2$ as the metric of choice to evaluate the predictive power of a volatility model. In any case, the outperformance or our model with respect to GARCH(1,1) permeates all three metrics, name $R^2$ , $MSE$ and $MAE$ ."
],
[
"Company sectors are expected to have different risk levels, in the sense that each sector is driven by different types of news and economic cycles. Moreover, by performing a sector-level analysis we were initially interested in understanding if the outperformance of our model with respect to GARCH(1,1) was the result of a learning bias to a given sector or if, as turned out to be the case, the superior performance of our model spreads across a diversified portfolio of sectors.",
"In order to evaluate the performance per sector, we first separate the constituents stocks for each sector in tbl:stockuniverse. Then, we calculate the same metrics discussed in the previous section for each sector individually.",
"tbl:garcheachsector reports our experimental results segregated by sector. We observe that the GRACH model accuracy, measured using the $R^2$ score, has a high degree of variability among sectors. For example, the accuracy ranges from 0.15 to 0.44 for the HealthCare and Energy sector, respectively. This high degree of variability is in agreement with previous results reported in BIBREF16 , but in the context of long-term (quarterly) volatility predictions. Although the GARCH(1,1) accuracy is sector-dependent, without any exception, our model using price and news as input clearly outperforms GRACH sector-wise. This fact allow us to draw the following conclusions:",
"Our model outperformance is persistent across sectors, i.e. the characteristics of the results reported in tbl:garchallsectors permeates all sectors, rather than being composed of a mix of outperforming and underperforming sector contributions. This fact provides a strong evidence that our model is more accurate than GARCH(1,1).",
"The proposed Global model approach discussed in sub:globalmodel is able to generalize well, i.e. the patterns learnt are not biased to a given sector or stock.",
"One of the limitations of our work is to rely on proxies for the volatility estimation. Although these proxies are handy if only open, high, low and close daily price data is available, having high frequency price data we could estimate the daily volatility using the sum of squared intraday returns to measure the true daily latent volatility. For example, in evaluating the performance for the one-day-ahead GARCH(1,1) Yen/Dollar exchange rate BIBREF35 reports $R^2$ values of 0.237 and 0.392 using hourly and five minutes sampled intraday returns, respectively. However, we believe that utilizing intraday data would further improve our model performance.",
"Since our experimental results demonstrate the key aspect of the news relevance attention to model architecture we observe that intraday data would arguably ameliorate the learning process. Having intraday data would allow us to pair each individual news release with the instantaneous market price reaction. Using daily data we are losing part of this information by only measuring the aggregate effect of all news to the one-day-ahead prediction."
],
[
"We study the joint effect of stock news and prices on the daily volatility forecasting problem. To the best of our knowledge, this work is one of the first studies aiming to predict short-term (daily) rather than long-term (quarterly or yearly) volatility taking news and price as explanatory variables and using a comprehensive dataset of news headlines at the individual stock level. Our hierarchical end-to-end model benefits from state-of-the-art approaches to encode text information and to deal with two main challenges in correlating news with market reaction: news relevance and novelty. That is, to address the problem of how to attend the most important news based purely on its content (news relevance attention) and to take into account the temporal information of past news (temporal context). Additionally, we propose a multi-stock mini-batch + stock embedding method suitable to model commonality among stocks.",
"The experimental results show that our multimodal approach outperforms the GARCH(1,1) volatility model, which is the most prevalent econometric model for daily volatility predictions. The outperformance being sector-wise and demonstrates the effectiveness of combining price and news for short-term volatility forecasting. The fact that we outperform GARCH(1,1) for all analyzed sectors confirms the robustness of our proposed architecture and evidences that our global model approach generalizes well.",
"We ablated (i.e. removed) different components of our neural architecture to assess its most relevant parts. To this aim, we replaced our proposed news relevance attention layer, which aims to attend the most important news on a given day, with a simpler architecture proposed in the literature, which averages the daily news. We found that our attention layer improves the results. Additionally, we ablated all the architecture related to the news mode and found that news enhances the forecasting accuracy.",
"Finally, we evaluated different sentence encoders, including those transfered from other NLP tasks, and concluded that they achieve better performance as compared to a plain Word-level attention sentence encoder trained end-to-end. However, they do not beat state-of-the-art sentence encoders trained end-to-end.",
"In order to contribute to the literature of Universal Sentence Encoders, we evaluated the performance of transferring sentence encoders from two different tasks to the volatility prediction problem. We showed that models trained on the Natural Language Inference (NLI) task are more suitable to forecasting problems than a financial domain dataset (Reuters RCV1). By analyzing different architectures, we showed that a BiLSTM with max-pooling for the SNLI dataset provides the best sentence encoder.",
"In the future, we plan to make use of intraday prices to better assess the predictive power of our proposed models. Additionally, we would further extend our analysis to other stock market sectors."
]
],
"section_name": [
"Introduction",
"Related work",
"Our dataset",
"Sectors and stocks",
"Stock specific data",
"Stock headlines",
"Background",
"GARCH model",
"Transfer Learning from other source domains",
"Sequence Models",
"Methodology",
"Problem statement",
"Global features and stock embedding",
"Our multimodal hierarchical network",
"Multimodal learning with missing modes",
"Experimental results and discussions",
"Auxiliary transfer learning tasks",
"Training setup",
"Stocks universe result",
"Sector-level results",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"43c0434aabf1ee183e5dec3aa104d6fe4c74f046",
"6367fbcf717839043b824074072adbdb98a57723"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 8: Sector-level performance comparison."
],
"extractive_spans": [],
"free_form_answer": "Energy with accuracy of 0.538",
"highlighted_evidence": [
"FLOAT SELECTED: Table 8: Sector-level performance comparison."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 7: Our volatility model performance compared with GARCH(1,1). Best performance in bold. Our model has superior performance across the three evaluation metrics and taking into consideration the state-of-the-art volatility proxies, namely Garman-Klass (σ̂PK) and Parkinson (σ̂PK)."
],
"extractive_spans": [],
"free_form_answer": "Energy",
"highlighted_evidence": [
"FLOAT SELECTED: Table 7: Our volatility model performance compared with GARCH(1,1). Best performance in bold. Our model has superior performance across the three evaluation metrics and taking into consideration the state-of-the-art volatility proxies, namely Garman-Klass (σ̂PK) and Parkinson (σ̂PK)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
}
],
"nlp_background": [
"five"
],
"paper_read": [
"no"
],
"question": [
"Which stock market sector achieved the best performance?"
],
"question_id": [
"b634ff1607ce5756655e61b9a6f18bc736f84c83"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
""
],
"topic_background": [
"unfamiliar"
]
} | {
"caption": [
"Figure 1: RCV1 text categorization architecture. The sentence encoder Se maps word emebddings wi to a sentence vector S and the last FC layer has a sigmoid activation function.",
"Figure 2: Natural Language Inference task architecture. Note that the sentence encoder Se is shared between the premise and hypothesis pair. The FC layer learns the representation of the sentence pair and the final Softmax layer asserts the output of the 3 possible labels, i.e. [entailment, contradiction, neutral ], sums to one.",
"Figure 3: Schematic view of a LSTM cell. The observed state xt is combined with previous memory and hidden states to output a hidden state ht. The memory state Ct is an internal state; therefore, not part of the output representation. An LSTM network is trained by looping its shared cell across all sequence length.",
"Figure 4: BiLSTM max-pooling. The network performs a polling operation on top of each word hidden state.",
"Figure 5: BiLSTM attention. The specific example encodes a headline from our corpus.",
"Figure 6: Daily news relevance attention. The figure illustrates a day where three news were released for the Walmart company. After the headlines are encoded into a fixed-size representation S, the daily news relevance attention AR converts all sentences into single vector representation of all Daily News DN by attending each headline based on its content.",
"Figure 7: Hierarchical Neural Network architecture.",
"Table 1: Corpus sectors and respective constituent stocks. For each sector we selected the top 10 stock holdings (as in January 2018). Stock codes in parentheses.",
"Table 2: Distribution of headlines per sector according to market hours. The majority of the 146,783 headlines are released before 9:30AM (before market). The category after market includes news released after 4:00PM EDT. We count the categories holiday and weekend as before market since they impact the following working day.",
"Table 3: Random samples from our dataset. Note the factual/objective characteristic of our corpus, where typical news do not carry any sentiment connotation.",
"Table 4: Stanford NLI (SNLI) dataset examples. Natural language sentence pairs are labelled with entailment (e), contradiction (c), or neutral (n).",
"Table 5: TL auxiliary tasks – Sentence Encoders comparison. Test scores are accuracy and F1 scores for the SNLI subsubsection 4.2.2 and RCV1 subsubsection 4.2.1 datasets, respectively. † indicates model trained with both headlines and body content and using the original 103 classes of the RCV1 dataset, rather than our models that are trained using headlines only and a total of 55 classes (see subsubsection 4.2.1 for a complete description). As a consequence, the reported benchmarks for the RCV1 dataset are not directly comparable and where reported for the sake of a better benchmark.",
"Table 6: Model architecture ablations and sentence encoders comparisons. The minus sign means that the component of our network architecture described in subsection 5.3 was ablated (i.e. removed) and the plus sign that it is added. The second and third row report results replacing the news relevance attention (NRA) with a News Averaging component as in [27, 28]. † indicates our model was trained using only the price mode. †† highlights that the sentence encoder Word-Level Attention (W-L Attention) does not take into consideration the headline words order. Best result in bold.",
"Table 7: Our volatility model performance compared with GARCH(1,1). Best performance in bold. Our model has superior performance across the three evaluation metrics and taking into consideration the state-of-the-art volatility proxies, namely Garman-Klass (σ̂PK) and Parkinson (σ̂PK).",
"Table 8: Sector-level performance comparison."
],
"file": [
"13-Figure1-1.png",
"14-Figure2-1.png",
"16-Figure3-1.png",
"17-Figure4-1.png",
"18-Figure5-1.png",
"22-Figure6-1.png",
"23-Figure7-1.png",
"36-Table1-1.png",
"37-Table2-1.png",
"37-Table3-1.png",
"38-Table4-1.png",
"38-Table5-1.png",
"39-Table6-1.png",
"39-Table7-1.png",
"40-Table8-1.png"
]
} | [
"Which stock market sector achieved the best performance?"
] | [
[
"1812.10479-40-Table8-1.png",
"1812.10479-39-Table7-1.png"
]
] | [
"Energy"
] | 49 |
1904.09535 | NeuronBlocks: Building Your NLP DNN Models Like Playing Lego | Deep Neural Networks (DNN) have been widely employed in industry to address various Natural Language Processing (NLP) tasks. However, many engineers find it a big overhead when they have to choose from multiple frameworks, compare different types of models, and understand various optimization mechanisms. An NLP toolkit for DNN models with both generality and flexibility can greatly improve the productivity of engineers by saving their learning cost and guiding them to find optimal solutions to their tasks. In this paper, we introduce NeuronBlocks, a toolkit encapsulating a suite of neural network modules as building blocks to construct various DNN models with complex architecture. This toolkit empowers engineers to build, train, and test various NLP models through simple configuration of JSON files. The experiments on several NLP datasets such as GLUE, WikiQA and CoNLL-2003 demonstrate the effectiveness of NeuronBlocks. | {
"paragraphs": [
[
"",
"Deep Neural Networks (DNN) have been widely employed in industry for solving various Natural Language Processing (NLP) tasks, such as text classification, sequence labeling, question answering, etc. However, when engineers apply DNN models to address specific NLP tasks, they often face the following challenges.",
"The above challenges often hinder the productivity of engineers, and result in less optimal solutions to their given tasks. This motivates us to develop an NLP toolkit for DNN models, which facilitates engineers to develop DNN approaches. Before designing this NLP toolkit, we conducted a survey among engineers and identified a spectrum of three typical personas.",
"To satisfy the requirements of all the above three personas, the NLP toolkit has to be generic enough to cover as many tasks as possible. At the same time, it also needs to be flexible enough to allow alternative network architectures as well as customized modules. Therefore, we analyzed the NLP jobs submitted to a commercial centralized GPU cluster. Table TABREF11 showed that about 87.5% NLP related jobs belong to a few common tasks, including sentence classification, text matching, sequence labeling, MRC, etc. It further suggested that more than 90% of the networks were composed of several common components, such as embedding, CNN/RNN, Transformer and so on.",
"Based on the above observations, we developed NeuronBlocks, a DNN toolkit for NLP tasks. The basic idea is to provide two layers of support to the engineers. The upper layer targets common NLP tasks. For each task, the toolkit contains several end-to-end network templates, which can be immediately instantiated with simple configuration. The bottom layer consists of a suite of reusable and standard components, which can be adopted as building blocks to construct networks with complex architecture. By following the interface guidelines, users can also contribute to this gallery of components with their own modules.",
"The technical contributions of NeuronBlocks are summarized into the following three aspects."
],
[
"There are several general-purpose deep learning frameworks, such as TensorFlow, PyTorch and Keras, which have gained popularity in NLP community. These frameworks offer huge flexibility in DNN model design and support various NLP tasks. However, building models under these frameworks requires a large overhead of mastering these framework details. Therefore, higher level abstraction to hide the framework details is favored by many engineers.",
"There are also several popular deep learning toolkits in NLP, including OpenNMT BIBREF0 , AllenNLP BIBREF1 etc. OpenNMT is an open-source toolkit mainly targeting neural machine translation or other natural language generation tasks. AllenNLP provides several pre-built models for NLP tasks, such as semantic role labeling, machine comprehension, textual entailment, etc. Although these toolkits reduce the development cost, they are limited to certain tasks, and thus not flexible enough to support new network architectures or new components.",
""
],
[
"",
"The Neuronblocks is built on PyTorch. The overall framework is illustrated in Figure FIGREF16 . It consists of two layers: the Block Zoo and the Model Zoo. In Block Zoo, the most commonly used components of deep neural networks are categorized into several groups according to their functions. Within each category, several alternative components are encapsulated into standard and reusable blocks with a consistent interface. These blocks serve as basic and exchangeable units to construct complex network architectures for different NLP tasks. In Model Zoo, the most popular NLP tasks are identified. For each task, several end-to-end network templates are provided in the form of JSON configuration files. Users can simply browse these configurations and choose one or more to instantiate. The whole task can be completed without any coding efforts.",
""
],
[
"",
"We recognize the following major functional categories of neural network components. Each category covers as many commonly used modules as possible. The Block Zoo is an open framework, and more modules can be added in the future.",
"[itemsep= -0.4em,topsep = 0.3em, align=left, labelsep=-0.6em, leftmargin=1.2em]",
"Embedding Layer: Word/character embedding and extra handcrafted feature embedding such as pos-tagging are supported.",
"Neural Network Layers: Block zoo provides common layers like RNN, CNN, QRNN BIBREF2 , Transformer BIBREF3 , Highway network, Encoder Decoder architecture, etc. Furthermore, attention mechanisms are widely used in neural networks. Thus we also support multiple attention layers, such as Linear/Bi-linear Attention, Full Attention BIBREF4 , Bidirectional attention flow BIBREF5 , etc. Meanwhile, regularization layers such as Dropout, Layer Norm, Batch Norm, etc are also supported for improving generalization ability.",
"Loss Function: Besides of the loss functions built in PyTorch, we offer more options such as Focal Loss BIBREF6 .",
"Metrics: For classification task, AUC, Accuracy, Precision/Recall, F1 metrics are supported. For sequence labeling task, F1/Accuracy are supported. For knowledge distillation task, MSE/RMSE are supported. For MRC task, ExactMatch/F1 are supported.",
""
],
[
"",
"In NeuronBlocks, we identify four types of most popular NLP tasks. For each task, we provide various end-to-end network templates.",
"[itemsep= -0.4em,topsep = 0.3em, align=left, labelsep=-0.6em, leftmargin=1.2em]",
"Text Classification and Matching. Tasks such as domain/intent classification, question answer matching are supported.",
"Sequence Labeling. Predict each token in a sequence into predefined types. Common tasks include NER, POS tagging, Slot tagging, etc.",
"Knowledge Distillation BIBREF7 . Teacher-Student based knowledge distillation is one common approach for model compression. NeuronBlocks provides knowledge distillation template to improve the inference speed of heavy DNN models like BERT/GPT.",
"Extractive Machine Reading Comprehension. Given a pair of question and passage, predict the start and end positions of the answer spans in the passage.",
""
],
[
"NeuronBlocks provides convenient user interface for users to build, train, and test DNN models. The details are described in the following.",
"[itemsep= -0.4em,topsep = 0.3em, align=left, labelsep=-0.6em, leftmargin=1.2em]",
"I/O interface. This part defines model input/output, such as training data, pre-trained models/embeddings, model saving path, etc.",
"Model Architecture interface. This is the key part of the configuration file, which defines the whole model architecture. Figure FIGREF19 shows an example of how to specify a model architecture using the blocks in NeuronBlocks. To be more specific, it consists of a list of layers/blocks to construct the architecture, where the blocks are supplied in the gallery of Block Zoo.",
"Training Parameters interface. In this part, the model optimizer as well as all other training hyper parameters are indicated.",
""
],
[
"Figure FIGREF34 shows the workflow of building DNN models in NeuronBlocks. Users only need to write a JSON configuration file. They can either instantiate an existing template from Model Zoo, or construct a new architecture based on the blocks from Block Zoo. This configuration file is shared across training, test, and prediction. For model hyper-parameter tuning or architecture modification, users just need to change the JSON configuration file. Advanced users can also contribute novel customized blocks into Block Zoo, as long as they follow the same interface guidelines with the existing blocks. These new blocks can be further shared across all users for model architecture design. Moreover, NeuronBlocks has flexible platform support, such as GPU/CPU, GPU management platforms like PAI.",
""
],
[
"",
"To verify the performance of NeuronBlocks, we conducted extensive experiments for common NLP tasks on public data sets including CoNLL-2003 BIBREF14 , GLUE benchmark BIBREF13 , and WikiQA corpus BIBREF15 . The experimental results showed that the models built with NeuronBlocks can achieve reliable and competitive results on various tasks, with productivity greatly improved."
],
[
" For sequence labeling task, we evaluated NeuronBlocks on CoNLL-2003 BIBREF14 English NER dataset, following most works on the same task. This dataset includes four types of named entities, namely, PERSON, LOCATION, ORGANIZATION, and MISC. We adopted the BIOES tagging scheme instead of IOB, as many previous works indicated meaningful improvement with BIOES scheme BIBREF16 , BIBREF17 . Table TABREF28 shows the results on CoNLL-2003 Englist testb dataset, with 12 different combinations of network layers/blocks, such as word/character embedding, CNN/LSTM and CRF. The results suggest that the flexible combination of layers/blocks in NeuronBlocks can easily reproduce the performance of original models, with comparative or slightly better performance."
],
[
"The General Language Understanding Evaluation (GLUE) benchmark BIBREF13 is a collection of natural language understanding tasks. We experimented on the GLUE benchmark tasks using BiLSTM and Attention based models. As shown in Table TABREF29 , the models built by NeuronBlocks can achieve competitive or even better results on GLUE tasks with minimal coding efforts."
],
[
"We evaluated Knowledge Distillation task in NeuronBlocks on a dataset collected from one commercial search engine. We refer to this dataset as Domain Classification Dataset. Each sample in this dataset consists of two parts, i.e., a question and a binary label indicating whether the question belongs to a specific domain. Table TABREF36 shows the results, where Area Under Curve (AUC) metric is used as the performance evaluation criteria and Queries per Second (QPS) is used to measure inference speed. By knowledge distillation training approach, the student model by NeuronBlocks managed to get 23-27 times inference speedup with only small performance regression compared with BERTbase fine-tuned classifier."
],
[
"The WikiQA corpus BIBREF15 is a publicly available dataset for open-domain question answering. This dataset contains 3,047 questions from Bing query logs, each associated with some candidate answer sentences from Wikipedia. We conducted experiments on WikiQA dataset using CNN, BiLSTM, and Attention based models. The results are shown in Table TABREF41 . The models built in NeuronBlocks achieved competitive or even better results with simple model configurations."
],
[
"In this paper, we introduce NeuronBlocks, a DNN toolkit for NLP tasks built on PyTorch. NeuronBlocks targets three types of engineers, and provides a two-layer solution to satisfy the requirements from all three types of users. To be more specific, the Model Zoo consists of various templates for the most common NLP tasks, while the Block Zoo supplies a gallery of alternative layers/modules for the networks. Such design achieves a balance between generality and flexibility. Extensive experiments have verified the effectiveness of this approach. NeuronBlocks has been widely used in a product team of a commercial search engine, and significantly improved the productivity for developing NLP DNN approaches.",
"As an open-source toolkit, we will further extend it in various directions. The following names a few examples.",
""
]
],
"section_name": [
"Introduction",
"Related Work",
"Design",
"Block Zoo",
"Model Zoo",
"User Interface",
"Workflow",
"Experiments",
"Sequence Labeling",
"GLUE Benchmark",
"Knowledge Distillation",
"WikiQA",
"Conclusion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"457226f68fea0fcf3436a1e9d4a0f58ecb497d03",
"a2420614d4ee0c22ef07eef3d2f69ad7f738dee5"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"To verify the performance of NeuronBlocks, we conducted extensive experiments for common NLP tasks on public data sets including CoNLL-2003 BIBREF14 , GLUE benchmark BIBREF13 , and WikiQA corpus BIBREF15 . The experimental results showed that the models built with NeuronBlocks can achieve reliable and competitive results on various tasks, with productivity greatly improved.",
"For sequence labeling task, we evaluated NeuronBlocks on CoNLL-2003 BIBREF14 English NER dataset, following most works on the same task. This dataset includes four types of named entities, namely, PERSON, LOCATION, ORGANIZATION, and MISC. We adopted the BIOES tagging scheme instead of IOB, as many previous works indicated meaningful improvement with BIOES scheme BIBREF16 , BIBREF17 . Table TABREF28 shows the results on CoNLL-2003 Englist testb dataset, with 12 different combinations of network layers/blocks, such as word/character embedding, CNN/LSTM and CRF. The results suggest that the flexible combination of layers/blocks in NeuronBlocks can easily reproduce the performance of original models, with comparative or slightly better performance."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"To verify the performance of NeuronBlocks, we conducted extensive experiments for common NLP tasks on public data sets including CoNLL-2003 BIBREF14 , GLUE benchmark BIBREF13 , and WikiQA corpus BIBREF15 .",
"For sequence labeling task, we evaluated NeuronBlocks on CoNLL-2003 BIBREF14 English NER dataset, following most works on the same task."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"a3d0389bdb63041a918533135b587de2407b7b25",
"adeeadf2cc9a557030f60c70e09e278d676640fe"
],
"answer": [
{
"evidence": [
"We recognize the following major functional categories of neural network components. Each category covers as many commonly used modules as possible. The Block Zoo is an open framework, and more modules can be added in the future.",
"Embedding Layer: Word/character embedding and extra handcrafted feature embedding such as pos-tagging are supported.",
"Neural Network Layers: Block zoo provides common layers like RNN, CNN, QRNN BIBREF2 , Transformer BIBREF3 , Highway network, Encoder Decoder architecture, etc. Furthermore, attention mechanisms are widely used in neural networks. Thus we also support multiple attention layers, such as Linear/Bi-linear Attention, Full Attention BIBREF4 , Bidirectional attention flow BIBREF5 , etc. Meanwhile, regularization layers such as Dropout, Layer Norm, Batch Norm, etc are also supported for improving generalization ability.",
"Loss Function: Besides of the loss functions built in PyTorch, we offer more options such as Focal Loss BIBREF6 .",
"Metrics: For classification task, AUC, Accuracy, Precision/Recall, F1 metrics are supported. For sequence labeling task, F1/Accuracy are supported. For knowledge distillation task, MSE/RMSE are supported. For MRC task, ExactMatch/F1 are supported."
],
"extractive_spans": [
"Embedding Layer",
"Neural Network Layers",
"Loss Function",
"Metrics"
],
"free_form_answer": "",
"highlighted_evidence": [
"The Block Zoo is an open framework, and more modules can be added in the future.",
"Embedding Layer: Word/character embedding and extra handcrafted feature embedding such as pos-tagging are supported.",
"Neural Network Layers: Block zoo provides common layers like RNN, CNN, QRNN BIBREF2 , Transformer BIBREF3 , Highway network, Encoder Decoder architecture, etc. Furthermore, attention mechanisms are widely used in neural networks. Thus we also support multiple attention layers, such as Linear/Bi-linear Attention, Full Attention BIBREF4 , Bidirectional attention flow BIBREF5 , etc. Meanwhile, regularization layers such as Dropout, Layer Norm, Batch Norm, etc are also supported for improving generalization ability.",
"Loss Function: Besides of the loss functions built in PyTorch, we offer more options such as Focal Loss BIBREF6 .\n\nMetrics: For classification task, AUC, Accuracy, Precision/Recall, F1 metrics are supported. For sequence labeling task, F1/Accuracy are supported. For knowledge distillation task, MSE/RMSE are supported. For MRC task, ExactMatch/F1 are supported."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The Neuronblocks is built on PyTorch. The overall framework is illustrated in Figure FIGREF16 . It consists of two layers: the Block Zoo and the Model Zoo. In Block Zoo, the most commonly used components of deep neural networks are categorized into several groups according to their functions. Within each category, several alternative components are encapsulated into standard and reusable blocks with a consistent interface. These blocks serve as basic and exchangeable units to construct complex network architectures for different NLP tasks. In Model Zoo, the most popular NLP tasks are identified. For each task, several end-to-end network templates are provided in the form of JSON configuration files. Users can simply browse these configurations and choose one or more to instantiate. The whole task can be completed without any coding efforts.",
"We recognize the following major functional categories of neural network components. Each category covers as many commonly used modules as possible. The Block Zoo is an open framework, and more modules can be added in the future.",
"Embedding Layer: Word/character embedding and extra handcrafted feature embedding such as pos-tagging are supported.",
"Neural Network Layers: Block zoo provides common layers like RNN, CNN, QRNN BIBREF2 , Transformer BIBREF3 , Highway network, Encoder Decoder architecture, etc. Furthermore, attention mechanisms are widely used in neural networks. Thus we also support multiple attention layers, such as Linear/Bi-linear Attention, Full Attention BIBREF4 , Bidirectional attention flow BIBREF5 , etc. Meanwhile, regularization layers such as Dropout, Layer Norm, Batch Norm, etc are also supported for improving generalization ability.",
"Loss Function: Besides of the loss functions built in PyTorch, we offer more options such as Focal Loss BIBREF6 .",
"Metrics: For classification task, AUC, Accuracy, Precision/Recall, F1 metrics are supported. For sequence labeling task, F1/Accuracy are supported. For knowledge distillation task, MSE/RMSE are supported. For MRC task, ExactMatch/F1 are supported."
],
"extractive_spans": [
"Embedding Layer",
"Neural Network Layers",
"Loss Function",
"Metrics"
],
"free_form_answer": "",
"highlighted_evidence": [
"The Neuronblocks is built on PyTorch.",
"It consists of two layers: the Block Zoo and the Model Zoo.",
"Block Zoo\nWe recognize the following major functional categories of neural network components.",
"Embedding Layer: Word/character embedding and extra handcrafted feature embedding such as pos-tagging are supported.",
"Neural Network Layers: Block zoo provides common layers like RNN, CNN, QRNN BIBREF2 , Transformer BIBREF3 , Highway network, Encoder Decoder architecture, etc. Furthermore, attention mechanisms are widely used in neural networks. Thus we also support multiple attention layers, such as Linear/Bi-linear Attention, Full Attention BIBREF4 , Bidirectional attention flow BIBREF5 , etc. Meanwhile, regularization layers such as Dropout, Layer Norm, Batch Norm, etc are also supported for improving generalization ability.",
"Loss Function: Besides of the loss functions built in PyTorch, we offer more options such as Focal Loss BIBREF6 .\n\n",
"Metrics: For classification task, AUC, Accuracy, Precision/Recall, F1 metrics are supported. For sequence labeling task, F1/Accuracy are supported. For knowledge distillation task, MSE/RMSE are supported. For MRC task, ExactMatch/F1 are supported."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"75e6e2be3c26ad250c2ec5ed0abe3c8f09b75627",
"edc27ef58f3f5b4f67f542813bfad6122f00eb21"
],
"answer": [
{
"evidence": [
"The above challenges often hinder the productivity of engineers, and result in less optimal solutions to their given tasks. This motivates us to develop an NLP toolkit for DNN models, which facilitates engineers to develop DNN approaches. Before designing this NLP toolkit, we conducted a survey among engineers and identified a spectrum of three typical personas."
],
"extractive_spans": [],
"free_form_answer": "By conducting a survey among engineers",
"highlighted_evidence": [
"Before designing this NLP toolkit, we conducted a survey among engineers and identified a spectrum of three typical personas."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"",
"",
""
],
"question": [
"Do they report results only on English?",
"What neural network modules are included in NeuronBlocks?",
"How do the authors evidence the claim that many engineers find it a big overhead to choose from multiple frameworks, models and optimization techniques?"
],
"question_id": [
"b1cf5739467ba90059add58d11b73d075a11ec86",
"2ea4347f1992b0b3958c4844681ff0fe4d0dd1dd",
"4f253dfced6a749bf57a1b4984dc962ce9550184"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"",
"",
""
]
} | {
"caption": [
"Table 1: Task analysis of NLP DNN jobs submitted to a commercial centralized GPU cluster.",
"Figure 1: The overall framework of NeuronBlocks.",
"Figure 2: A Model architecture interface example of sequence labeling model in NeuronBlocks.",
"Table 2: NeuronBlocks results on CoNLL-2003 English NER testb dataset. The abbreviation (C-16)= (Chiu and Nichols, 2016), (L-16)= (Lample et al., 2016), (M-16)= (Ma and Hovy, 2016), (N)= (Yang et al., 2018), (P-17)= (Peters et al., 2017).",
"Table 3: NeuronBlocks?results on GLUE benchmark development sets. As described in (Wang et al., 2019), for CoLA, we report Matthews correlation. For QQP, we report accuracy and F1. For MNLI, we report accuracy averaged over the matched and mismatched development sets. For all other tasks we report accuracy. All values have been scaled by 100. Please note that results on the development sets are reported, since GLUE does not distribute labels for the test sets.",
"Figure 3: The workflow of NeuronBlocks.",
"Table 4: NeuronBlocks results on Knowledge Distillation task.",
"Table 5: NeuronBlocks results on WikiQA."
],
"file": [
"2-Table1-1.png",
"3-Figure1-1.png",
"4-Figure2-1.png",
"4-Table2-1.png",
"4-Table3-1.png",
"4-Figure3-1.png",
"5-Table4-1.png",
"5-Table5-1.png"
]
} | [
"How do the authors evidence the claim that many engineers find it a big overhead to choose from multiple frameworks, models and optimization techniques?"
] | [
[
"1904.09535-Introduction-2"
]
] | [
"By conducting a survey among engineers"
] | 51 |
1911.03059 | A Comprehensive Comparison of Machine Learning Based Methods Used in Bengali Question Classification | QA classification system maps questions asked by humans to an appropriate answer category. A sound question classification (QC) system model is the pre-requisite of a sound QA system. This work demonstrates phases of assembling a QA type classification model. We present a comprehensive comparison (performance and computational complexity) among some machine learning based approaches used in QC for Bengali language. | {
"paragraphs": [
[
"Question classification (QC) deals with question analysis and question labeling based on the expected answer type. The goal of QC is to assign classes accurately to the questions based on expected answer. In modern system, there are two types of questions BIBREF0. One is Factoid question which is about providing concise facts and another one is Complex question that has a presupposition which is complex. Question Answering (QA) System is an integral part of our daily life because of the high amount of usage of Internet for information acquisition. In recent years, most of the research works related to QA are based on English language such as IBM Watson, Wolfram Alpha. Bengali speakers often fall in difficulty while communicating in English BIBREF1.",
"In this research, we briefly discuss the steps of QA system and compare the performance of seven machine learning based classifiers (Multi-Layer Perceptron (MLP), Naive Bayes Classifier (NBC), Support Vector Machine (SVM), Gradient Boosting Classifier (GBC), Stochastic Gradient Descent (SGD), K Nearest Neighbour (K-NN) and Random Forest (RF)) in classifying Bengali questions to classes based on their anticipated answers. Bengali questions have flexible inquiring ways, so there are many difficulties associated with Bengali QC BIBREF0. As there is no rich corpus of questions in Bengali Language available, collecting questions is an additional challenge. Different difficulties in building a QA System are mentioned in the literature BIBREF2 BIBREF3. The first work on a machine learning based approach towards Bengali question classification is presented in BIBREF0 that employ the Stochastic Gradient Descent (SGD)."
],
[
"Over the years, a handful of QA systems have gained popularity around the world. One of the oldest QA system is BASEBALL (created on 1961) BIBREF4 which answers question related to baseball league in America for a particular season. LUNAR BIBREF5 system answers questions about soil samples taken from Apollo lunar exploration. Some of the most popular QA Systems are IBM Watson, Apple Siri and Wolfram Alpha. Examples of some QA systems based on different languages are: Zhang Yu Chinese question classification BIBREF6 based on Incremental Modified Bayes, Arabic QA system (AQAS) BIBREF7 by F. A. Mohammed, K. Nasser, & H. M. Harb and Syntactic open domain Arabic QA system for factoid questions BIBREF8 by Fareed et al. QA systems have been built on different analysis methods such as morphological analysis BIBREF9, syntactical analysis BIBREF10, semantic analysis BIBREF11 and expected answer Type analysis BIBREF12."
],
[
"Researches on question classification, question taxonomies and QA system have been undertaken in recent years. There are two types of approaches for question classification according to Banerjee et al in BIBREF13 - by rules and by machine learning approach. Rule based approaches use some hard coded grammar rules to map the question to an appropriate answer type BIBREF14 BIBREF15. Machine Learning based approaches have been used by Zhang et al and Md. Aminul Islam et al in BIBREF16 and BIBREF0. Many classifiers have been used in machine learning for QC such as Support Vector Machine (SVM) BIBREF16 BIBREF17, Support Vector Machines and Maximum Entropy Model BIBREF18, Naive Bayes (NB), Kernel Naive Bayes (KNB), Decision Tree (DT) and Rule Induction (RI) BIBREF13. In BIBREF0, they claimed to achieve average precision of 0.95562 for coarse class and 0.87646 for finer class using Stochastic Gradient Descent (SGD)."
],
[
"A Bengali QC System was built by Somnath Banerjee and Sivaji Bandyopadhyay BIBREF13 BIBREF19 BIBREF20. They proposed a two-layer taxonomy classification with 9 coarse-grained classes and 69 fine-grained classes. There are other research works BIBREF0 BIBREF21 in Bengali Language. A survey was performed on text QA techniques BIBREF22 where there was an analysis conducted in Bengali Language. Syed Mehedi Hasan Nirob et al achieved 88.62% accuracy by using 380 top frequent words as the feature in their work BIBREF17."
],
[
"QA system resides within the scope of Computer Science. It deals with information retrieval and natural language processing. Its goal is to automatically answer questions asked by humans in natural language. IR-based QA, Knowledge based approaches and Hybrid approaches are the QA system types. TREC, IBM-Watson, Google are examples of IR-based QA systems. Knowledge based QA systems are Apple Siri, Wolfram Alpha. Examples of Hybrid approach systems are IBM Watson and True Knowledge Evi.",
"Figure FIGREF4 provides an overview of QA System. The first step of QA System is Question Analysis. Question analysis has two parts - question classification and another question formulation. In question classification step, the question is classified using different classifier algorithms. In question formulation, the question is analyzed and the system creates a proper IR question by detecting the entity type of the question to provide a simple answer.",
"The next step is documents retrieval and analysis. In this step, the system matches the query against the sources of answers where the source can be documents or Web. In the answer extraction step, the system extracts the answers from the documents of the sources collected in documents retrieval and analysis phase. The extracted answers are filtered and evaluated in answer evaluation phase as there can be multiple possible answers for a query. In the final step, an answer of the question is returned."
],
[
"We use different types of classifiers for QA type classification. We separate our methodology into two sections similar to BIBREF0 - one is training section and another is validation section shown in Figure FIGREF5.",
"We use 10 fold cross validation where we have 3150 and 350 questions in our training set and validation set respectively. During training, after selecting the possible class labels, the system extracts the features of the questions and creates a model by passing through a classifier algorithm with the extracted features and class labels. During validation, the system extracts the features of the question and passes it into the model created during training and predicts the answer type."
],
[
"Though Bengali is the seventh most spoken language in terms of number of native speakers BIBREF23, there is no standard corpus of questions available BIBREF0. We have collected total 3500 questions from the Internet and other sources such as books of general knowledge questions, history etc. The corpus contains the questions and the classes each question belongs to.",
"The set of question categories is known as question taxonomy BIBREF0. We have used two layer taxonomy which was proposed by Xin Li, Dan Roth BIBREF24. This two layer taxonomy is made up of two classes which are Coarse Class and Finer Class. There are six coarse classes such as Numeric, Location, Entity, Description, Human and Abbreviation and fifty finer classes such as city, state, mountain, distance, count, definition, group, expression, substance, creative, vehicle etc as shown in the Table I BIBREF0. A coarse-grained description of a system denotes large components while a fine-grained description denotes smaller sub-components of which the larger ones are composed of."
],
[
"Question word answer phrases, parts of speech tags, parse feature, named entity and semantically related words are different features from answer type detection BIBREF18. We use question word and phrases as features for answer type detection. We consider the following features:"
],
[
"Term Frequency - Inverse Document Frequency (TF-IDF) is a popular method used to identify the importance of a word in a particular document. TF-IDF transforms text into meaningful numeric representation. This technique is widely used to extract features for Natural Language Processing (NLP) applications BIBREF25 BIBREF26."
],
[
"N-grams is n-back to back words in a text. Queries of a same class usually share word n-grams BIBREF0. In this system, we choose bi-gram for extracting features."
],
[
"We use two setups (as done in BIBREF0) for our system. In the first setup, we eliminate the stop words from the text using another dataset containing only stop words. At second step, we work without eliminating the stop words from the text which gives better result than the first setup."
],
[
"MLP contains three layers - an input layer, an output layer and some hidden layers. Input layer receives the signal, the output layer gives a decision or prediction about the input and the computation of the MLP is conducted in the hidden layers. In our system, we use 100 layers. For weight optimization, we use Limited-memory Broyden–Fletcher–Goldfarb–Shanno (LBFGS) optimization algorithm."
],
[
"SVM gives an optimal hyper-plane and it maximizes the margin between classes. We use Radial Basis Function (RBF) kernel in our system to make decision boundary curve-shaped. For decision function shape, we use the original one-vs-one (ovo) decision function."
],
[
"NBC is based on Bayes' Theorem which gives probability of an event occurrence based on some conditions related to that event. We use Multinomial Naive Bayes Classifier with smoothing parameter equals to 0.1. A zero probability cancels the effects of all the other probabilities."
],
[
"Stochastic gradient descent optimizes an objective function with suitable smoothness properties BIBREF27. It selects few examples randomly instead of whole data for each iteration. We use 'L2' regularization for reduction of overfitting."
],
[
"Gradient Boosting Classifier produces a prediction model consisting of weak prediction models. Gradient boosting uses decision trees. We use 100 boosting stages in this work."
],
[
"K-NN is a supervised classification and regression algorithm. It uses the neighbours of the given sample to identify its class. K determines the number of neighbours needed to be considered. We set the value of K equals to 13 in this work."
],
[
"RF is an ensemble learning technique. It constructs large number of decision trees during training and then predicts the majority class. We use 500 decision trees in the forest and \"entropy\" function to measure the quality of a split."
],
[
"Table II shows the accuracy and F1 score for different classifiers with and without eliminating stop words while extracting features. Figure FIGREF21 shows the average results of different classifiers in a bar chart with and without eliminating stop words from the questions.",
"Overall, SGD has shown the best performance on our dataset as it introduces non-linearity and uses back-propagation for updating parameter weights using loss function calculated on training set into classification. K-NN has shown the weakest performance overall, as this algorithm has a bad reputation of not working well in high dimensional data BIBREF28. MLP and SVM have shown similar performance. MLP takes advantage of multiple hidden layers in order to take non-linearly separable samples in a linearly separable condition. SVM accomplishes this same feat by taking the samples to a higher dimensional hyperplane where the samples are linearly separable. Gradient Boosting Classifier (GBC) and Random Forest (RF) both utilize a set of decision trees and achieve similar results (RF performs slightly without eliminating stop words). Naive Bayesian Classifier (NBC) shows performance on per with GBC and RF algorithms. The overall better performance of all the algorithms when provided with stop words show the importance of stop words in Bengali QA classification.",
"Figure FIGREF22 shows the predictions of some particular questions by each of the classifiers. The input is a full question and the output is the class of the question."
],
[
"In Table TABREF24, n = No. of training sample, p = No. of features, ntrees = No. of trees (for methods based on various trees), nsv = No. of support vectors, i = No. of iterations, h = No. of nodes in each hidden layer, k = No. of hidden layers and $\\overline{m}$ = the average no. of non-zero attributes per sample."
],
[
"By implementing different machine learning based classifiers on our Bengali question corpus, we perform a comparative analysis among them. The question classification impacts the QA system. So, it is important to classify the question more precisely. This work will help the research community to choose a proper classification model for smart Bengali QA system development. Future work should aim at developing a richer corpus of Bengali questions which will help in getting better vector representation of words and thus will facilitate deep learning based automatic feature extraction."
]
],
"section_name": [
"Introduction",
"Related Works ::: Popular Question-Answering Systems",
"Related Works ::: Research Works Related to Question Classifications",
"Related Works ::: Research Works in Bengali Language",
"Question Answering (QA) System",
"Proposed Methodology",
"Question Collection and Categories",
"Implementation of the System ::: Feature Extraction",
"Implementation of the System ::: Feature Extraction ::: TF-IDF",
"Implementation of the System ::: Feature Extraction ::: Word level N-Grams",
"Implementation of the System ::: Feature Extraction ::: Stop Words",
"Implementation of the System ::: Classification Algorithms ::: Multi Layer Perceptron (MLP)",
"Implementation of the System ::: Classification Algorithms ::: Support Vector Machine (SVM)",
"Implementation of the System ::: Classification Algorithms ::: Naive Bayesian Classifier (NBC)",
"Implementation of the System ::: Classification Algorithms ::: Stochastic Gradient Descent (SGD)",
"Implementation of the System ::: Classification Algorithms ::: Gradient Boosting Classifier (GBC)",
"Implementation of the System ::: Classification Algorithms ::: K Nearest Neighbour (K-NN)",
"Implementation of the System ::: Classification Algorithms ::: Random Forest (RF)",
"Implementation of the System ::: Results and Discussion",
"Implementation of the System ::: Computational Complexity",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"c7ee8a931544cc6e72631a2f6d0d204c761cfd82",
"c8eea73d444612f78168f9b7ae08813e46e9eab1"
],
"answer": [
{
"evidence": [
"Though Bengali is the seventh most spoken language in terms of number of native speakers BIBREF23, there is no standard corpus of questions available BIBREF0. We have collected total 3500 questions from the Internet and other sources such as books of general knowledge questions, history etc. The corpus contains the questions and the classes each question belongs to."
],
"extractive_spans": [],
"free_form_answer": "Dataset of total 3500 questions from the Internet and other sources such as books of general knowledge questions, history, etc.",
"highlighted_evidence": [
"We have collected total 3500 questions from the Internet and other sources such as books of general knowledge questions, history etc. The corpus contains the questions and the classes each question belongs to."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Though Bengali is the seventh most spoken language in terms of number of native speakers BIBREF23, there is no standard corpus of questions available BIBREF0. We have collected total 3500 questions from the Internet and other sources such as books of general knowledge questions, history etc. The corpus contains the questions and the classes each question belongs to."
],
"extractive_spans": [],
"free_form_answer": "3500 questions collected from the internet and books.",
"highlighted_evidence": [
"We have collected total 3500 questions from the Internet and other sources such as books of general knowledge questions, history etc. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"4746cfbda7f80f242913007f486e2bb63f1332ca",
"97f532d796ee55abcf606e55fac8293a43e7f4e7"
],
"answer": [
{
"evidence": [
"In this research, we briefly discuss the steps of QA system and compare the performance of seven machine learning based classifiers (Multi-Layer Perceptron (MLP), Naive Bayes Classifier (NBC), Support Vector Machine (SVM), Gradient Boosting Classifier (GBC), Stochastic Gradient Descent (SGD), K Nearest Neighbour (K-NN) and Random Forest (RF)) in classifying Bengali questions to classes based on their anticipated answers. Bengali questions have flexible inquiring ways, so there are many difficulties associated with Bengali QC BIBREF0. As there is no rich corpus of questions in Bengali Language available, collecting questions is an additional challenge. Different difficulties in building a QA System are mentioned in the literature BIBREF2 BIBREF3. The first work on a machine learning based approach towards Bengali question classification is presented in BIBREF0 that employ the Stochastic Gradient Descent (SGD)."
],
"extractive_spans": [
"Multi-Layer Perceptron (MLP), Naive Bayes Classifier (NBC), Support Vector Machine (SVM), Gradient Boosting Classifier (GBC), Stochastic Gradient Descent (SGD), K Nearest Neighbour (K-NN) and Random Forest (RF)"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this research, we briefly discuss the steps of QA system and compare the performance of seven machine learning based classifiers (Multi-Layer Perceptron (MLP), Naive Bayes Classifier (NBC), Support Vector Machine (SVM), Gradient Boosting Classifier (GBC), Stochastic Gradient Descent (SGD), K Nearest Neighbour (K-NN) and Random Forest (RF)) in classifying Bengali questions to classes based on their anticipated answers."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this research, we briefly discuss the steps of QA system and compare the performance of seven machine learning based classifiers (Multi-Layer Perceptron (MLP), Naive Bayes Classifier (NBC), Support Vector Machine (SVM), Gradient Boosting Classifier (GBC), Stochastic Gradient Descent (SGD), K Nearest Neighbour (K-NN) and Random Forest (RF)) in classifying Bengali questions to classes based on their anticipated answers. Bengali questions have flexible inquiring ways, so there are many difficulties associated with Bengali QC BIBREF0. As there is no rich corpus of questions in Bengali Language available, collecting questions is an additional challenge. Different difficulties in building a QA System are mentioned in the literature BIBREF2 BIBREF3. The first work on a machine learning based approach towards Bengali question classification is presented in BIBREF0 that employ the Stochastic Gradient Descent (SGD)."
],
"extractive_spans": [
"Multi-Layer Perceptron",
"Naive Bayes Classifier",
"Support Vector Machine",
"Gradient Boosting Classifier",
"Stochastic Gradient Descent",
"K Nearest Neighbour",
"Random Forest"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this research, we briefly discuss the steps of QA system and compare the performance of seven machine learning based classifiers (Multi-Layer Perceptron (MLP), Naive Bayes Classifier (NBC), Support Vector Machine (SVM), Gradient Boosting Classifier (GBC), Stochastic Gradient Descent (SGD), K Nearest Neighbour (K-NN) and Random Forest (RF)) in classifying Bengali questions to classes based on their anticipated answers. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"",
""
],
"paper_read": [
"",
""
],
"question": [
"what datasets did they use?",
"what ml based approaches were compared?"
],
"question_id": [
"dc1cec824507fc85ac1ba87882fe1e422ff6cffb",
"f428618ca9c017e0c9c2a23515dab30a7660f65f"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
""
],
"topic_background": [
"",
""
]
} | {
"caption": [
"Fig. 2. Proposed Work Flow Diagram",
"Fig. 1. General Architecture of Question Answering System",
"TABLE II EXPERIMENT RESULTS",
"TABLE I COARSE AND FINE GRAINED QUESTION CATEGORIES",
"TABLE III COMPUTATIONAL COMPLEXITY OF EACH CLASSIFIER",
"Fig. 3. F1 Scores of Each Classifiers"
],
"file": [
"2-Figure2-1.png",
"2-Figure1-1.png",
"3-TableII-1.png",
"3-TableI-1.png",
"4-TableIII-1.png",
"4-Figure3-1.png"
]
} | [
"what datasets did they use?"
] | [
[
"1911.03059-Question Collection and Categories-0"
]
] | [
"3500 questions collected from the internet and books."
] | 52 |
1909.08089 | Extractive Summarization of Long Documents by Combining Global and Local Context | In this paper, we propose a novel neural single document extractive summarization model for long documents, incorporating both the global context of the whole document and the local context within the current topic. We evaluate the model on two datasets of scientific papers, Pubmed and arXiv, where it outperforms previous work, both extractive and abstractive models, on ROUGE-1, ROUGE-2 and METEOR scores. We also show that, consistently with our goal, the benefits of our method become stronger as we apply it to longer documents. Rather surprisingly, an ablation study indicates that the benefits of our model seem to come exclusively from modeling the local context, even for the longest documents. | {
"paragraphs": [
[
"Single-document summarization is the task of generating a short summary for a given document. Ideally, the generated summaries should be fluent and coherent, and should faithfully maintain the most important information in the source document. purpleThis is a very challenging task, because it arguably requires an in-depth understanding of the source document, and current automatic solutions are still far from human performance BIBREF0 .",
"Single-document summarization can be either extractive or abstractive. Extractive methods typically pick sentences directly from the original document based on their importance, and form the summary as an aggregate of these sentences. Usually, summaries generated in this way have a better performance on fluency and grammar, but they may contain much redundancy and lack in coherence across sentences. In contrast, abstractive methods attempt to mimic what humans do by first extracting content from the source document and then produce new sentences that aggregate and organize the extracted information. Since the sentences are generated from scratch they tend to have a relatively worse performance on fluency and grammar. Furthermore, while abstractive summaries are typically less redundant, they may end up including misleading or even utterly false statements, because the methods to extract and aggregate information form the source document are still rather noisy.",
"In this work, we focus on extracting informative sentences from a given document (without dealing with redundancy), especially when the document is relatively long (e.g., scientific articles).",
"Most recent works on neural extractive summarization have been rather successful in generating summaries of short news documents (around 650 words/document) BIBREF1 by applying neural Seq2Seq models BIBREF2 . However when it comes to long documents, these models tend to struggle with longer sequences because at each decoding step, the decoder needs to learn to construct a context vector capturing relevant information from all the tokens in the source sequence BIBREF3 .",
"Long documents typically cover multiple topics. In general, the longer a document is, the more topics are discussed. As a matter of fact, when humans write long documents they organize them in chapters, sections etc.. Scientific papers are an example of longer documents and they follow a standard discourse structure describing the problem, methodology, experiments/results, and finally conclusions BIBREF4 .",
"To the best of our knowledge only one previous work in extractive summarization has explicitly leveraged section information to guide the generation of summaries BIBREF5 . However, the only information about sections fed into their sentence classifier is a categorical feature with values like Highlight, Abstract, Introduction, etc., depending on which section the sentence appears in.",
"In contrast, in order to exploit section information, in this paper we propose to capture a distributed representation of both the global (the whole document) and the local context (e.g., the section/topic) when deciding if a sentence should be included in the summary",
"Our main contributions are as follows: (i) In order to capture the local context, we are the first to apply LSTM-minus to text summarization. LSTM-minus is a method for learning embeddings of text spans, which has achieved good performance in dependency parsing BIBREF6 , in constituency parsing BIBREF7 , as well as in discourse parsing BIBREF8 . With respect to more traditional methods for capturing local context, which rely on hierarchical structures, LSTM-minus produces simpler models i.e. with less parameters, and therefore faster to train and less prone to overfitting. (ii) We test our method on the Pubmed and arXiv datasets and results appear to support our goal of effectively summarizing long documents. In particular, while overall we outperform the baseline and previous approaches only by a narrow margin on both datasets, the benefit of our method become much stronger as we apply it to longer documents. purpleFurthermore, in an ablation study to assess the relative contributions of the global and the local model we found that, rather surprisingly, the benefits of our model seem to come exclusively from modeling the local context, even for the longest documents.[6] (iii) In order to evaluate our approach, we have created oracle labels for both Pubmed and arXiv BIBREF9 , by applying a greedy oracle labeling algorithm. The two datasets annotated with extractive labels will be made public."
],
[
"Traditional extractive summarization methods are mostly based on explicit surface features BIBREF10 , relying on graph-based methods BIBREF11 , or on submodular maximization BIBREF12 . Benefiting from the success of neural sequence models in other NLP tasks, chenglapata propose a novel approach to extractive summarization based on neural networks and continuous sentence features, which outperforms traditional methods on the DailyMail dataset. In particular, they develop a general encoder-decoder architecture, where a CNN is used as sentence encoder, a uni-directional LSTM as document encoder, with another uni-directional LSTM as decoder. To decrease the number of parameters while maintaining the accuracy, summarunner present SummaRuNNer, a simple RNN-based sequence classifier without decoder, outperforming or matching the model of BIBREF2 . They take content, salience, novelty, and position of each sentence into consideration when deciding if a sentence should be included in the extractive summary. Yet, they do not capture any aspect of the topical structure, as we do in this paper. So their approach would arguably suffer when applied to long documents, likely containing multiple and diverse topics.",
"While SummaRuNNer was tested only on news, EMNLP2018 carry out a comprehensive set of experiments with deep learning models of extractive summarization across different domains, i.e. news, personal stories, meetings, and medical articles, as well as across different neural architectures, in order to better understand the general pros and cons of different design choices. They find that non auto-regressive sentence extraction performs as well or better than auto-regressive extraction in all domains, where by auto-regressive sentence extraction they mean using previous predictions to inform future predictions. Furthermore, they find that the Average Word Embedding sentence encoder works at least as well as encoders based on CNN and RNN. In light of these findings, our model is not auto-regressive and uses the Average Word Embedding encoder."
],
[
"Research on summarizing scientific articles has a long history BIBREF13 . Earlier on, it was realized that summarizing scientific papers requires different approaches than what was used for summarizing news articles, due to differences in document length, writing style and rhetorical structure. For instance, BIBREF14 presented a supervised Naive Bayes classifier to select content from a scientific paper based on the rhetorical status of each sentence (e.g., whether it specified a research goal, or some generally accepted scientific background knowledge, etc.). More recently, researchers have extended this work by applying more sophisticated classifiers to identify more fine-grain rhetorical categories, as well as by exploiting citation contexts. 2013-discourse propose the CoreSC discourse-driven content, which relies on CRFs and SVMs, to classify the discourse categories (e.g. Background, Hypothesis, Motivation, etc.) at the sentence level. The recent work most similar to ours is BIBREF5 where, in order to determine whether a sentence should be included in the summary, they directly use the section each sentence appears in as a categorical feature with values like Highlight, Abstract, Introduction, etc.. In this paper, instead of using sections as categorical features, we rely on a distributed representation of the semantic information within each section, as the local context of each sentence. In a very different line of work, cohan-2015-scientific form the summary by also exploiting information on how the target paper is cited in other papers. Currently, we do not use any information from citation contexts."
],
[
"summarydataset provide a comprehensive overview of the current datasets for summarization. Noticeably, most of the larger-scale summarization datasets consists of relatively short documents, like CNN/DailyMail BIBREF1 and New York Times BIBREF15 . One exception is BIBREF9 that recently introduce two large-scale datasets of long and structured scientific papers obtained from arXiv and PubMed. These two new datasets contain much longer documents than all the news datasets (See Table TABREF6 ) and are therefore ideal test-beds for the method we present in this paper."
],
[
"While most current neural abstractive summarization models have focused on summarizing relatively short news articles (e.g., BIBREF16 ), few researchers have started to investigate the summarization of longer documents by exploiting their natural structure. agents present an encoder-decoder architecture to address the challenges of representing a long document for abstractive summarization. The encoding task is divided across several collaborating agents, each is responsible for a subsection of text through a multi-layer LSTM with word attention. Their model seems however overly complicated when it comes to the extractive summarization task, where word attention is arguably much less critical. So, we do not consider this model further in this paper.",
"discourselongdocument also propose a model for abstractive summarization taking the structure of documents into consideration with a hierarchical approach, and test it on longer documents with section information, i.e. scientific papers. In particular, they apply a hierarchical encoder at the word and section levels. Then, in the decoding step, they combine the word attention and section attention to obtain a context vector.",
"This approach to capture discourse structure is however quite limited both in general and especially when you consider its application to extractive summarization. First, their hierarchical method has a large number of parameters and it is therefore slow to train and likely prone to overfitting. Secondly, it does not take the global context of the whole document into account, which may arguably be critical in extractive methods, when deciding on the salience of a sentence (or even a word). The extractive summarizer we present in this paper tries to address these limitations by adopting the parameter lean LSTM-minus method, and by explicitly modeling the global context."
],
[
"The LSTM-Minus method is first proposed in BIBREF6 as a novel way to learn sentence segment embeddings for graph-based dependency parsing, i.e. estimating the most likely dependency tree given an input sentence. For each dependency pair, they divide a sentence into three segments (prefix, infix and suffix), and LSTM-Minus is used to represent each segment. They apply a single LSTM to the whole sentence and use the difference between two hidden states INLINEFORM0 to represent the segment from word INLINEFORM1 to word INLINEFORM2 . This enables their model to learn segment embeddings from information both outside and inside the segments and thus enhancing their model ability to access to sentence-level information. The intuition behind the method is that each hidden vector INLINEFORM3 can capture useful information before and including the word INLINEFORM4 .",
"Shortly after, lstm-minusconstituency use the same method on the task of constituency parsing, as the representation of a sentence span, extending the original uni-directional LSTM-Minus to the bi-directional case. More recently, inspired by the success of LSTM-Minus in both dependency and constituency parsing, lstm-minusdiscourse extend the technique to discourse parsing. They propose a two-stage model consisting of an intra-sentential parser and a multi-sentential parser, learning contextually informed representations of constituents with LSTM-Minus, at the sentence and document level, respectively.",
"Similarly, in this paper, when deciding if a sentence should be included in the summary, the local context of that sentence is captured by applying LSTM-Minus at the document level, to represent the sub-sequence of sentences of the document (i.e., the topic/section) the target sentence belongs to."
],
[
"In this work, we propose an extractive model for long documents, incorporating local and global context information, motivated by natural topic-oriented structure of human-written long documents. The architecture of our model is shown in Figure FIGREF10 , each sentence is visited sequentially in the original document order, and a corresponding confidence score is computed expressing whether the sentence should be included in the extractive summary. Our model comprises three components: the sentence encoder, the document encoder and the sentence classifier."
],
[
"The goal of the sentence encoder is mapping sequences of word embeddings to a fixed length vector (See bottom center of Figure FIGREF10 ). There are several common methods to embed sentences. For extractive summarization, RNN were used in BIBREF17 , CNN in BIBREF2 , and Average Word Embedding in BIBREF18 . EMNLP2018 experiment with all the three methods and conclude that Word Embedding Averaging is as good or better than either RNNs or CNNs for sentence embedding across different domains and summarizer architectures. Thus, we use the Average Word Embedding as our sentence encoder, by which a sentence embedding is simply the average of its word embeddings, i.e. INLINEFORM0 ",
"Besides, we also tried the popular pre-trained BERT sentence embedding BIBREF19 , but initial results were rather poor. So we do not pursue this possibility any further."
],
[
"At the document level, a bi-directional recurrent neural network BIBREF20 is often used to encode all the sentences sequentially forward and backward, with such model achieving remarkable success in machine translation BIBREF21 . As units, we selected gated recurrent units (GRU) BIBREF22 , in light of favorable results shown in BIBREF23 . The GRU is represented with the standard reset, update, and new gates.",
"The output of the bi-directional GRU for each sentence INLINEFORM0 comprises two hidden states, INLINEFORM1 as forward and backward hidden state, respectively.",
"A. Sentence representation As shown in Figure FIGREF10 (A), for each sentence INLINEFORM0 , the sentence representation is the concatenation of both backward and forward hidden state of that sentence. INLINEFORM1 ",
"In this way, the sentence representation not only represents the current sentence, but also partially covers contextual information both before and after this sentence.",
"B. Document representation The document representation provides global information on the whole document. It is computed as the concatenation of the final state of the forward and backward GRU, labeled as B in Figure FIGREF10 . BIBREF24 INLINEFORM0 ",
"C. Topic segment representation To capture the local context of each sentence, namely the information of the topic segment that sentence falls into, we apply the LSTM-Minus method, a method for learning embeddings of text spans. LSTM-Minus is shown in detail in Figure 1 (left panel C), each topic segment is represented as the subtraction between the hidden states of the start and the end of that topic. As illustrated in Figure FIGREF10 , the representation for section 2 of the sample document (containing three sections and eight sentences overall) can be computed as INLINEFORM0 , where INLINEFORM1 are the forward hidden states of sentence 5 and 2, respectively, while INLINEFORM2 are the backward hidden states of sentence 3 and 6, respectively. In general, the topic segment representation INLINEFORM3 for segment INLINEFORM4 is computed as: INLINEFORM5 ",
"where INLINEFORM0 is the index of the beginning and the end of topic INLINEFORM1 , INLINEFORM2 and INLINEFORM3 denote the topic segment representation of forward and backward, respectively. The final representation of topic INLINEFORM4 is the concatenation of forward and backward representation INLINEFORM5 . To obtain INLINEFORM6 and INLINEFORM7 , we utilize subtraction between GRU hidden vectors of INLINEFORM8 and INLINEFORM9 , and we pad the hidden states with zero vectors both in the beginning and the end, to ensure the index can not be out of bound. The intuition behind this process is that the GRUs can keep previous useful information in their memory cell by exploiting reset, update, and new gates to decide how to utilize and update the memory of previous information. In this way, we can represent the contextual information within each topic segment for all the sentences in that segment."
],
[
"Once we have obtained a representation for the sentence, for its topic segment (i.e., local context) and for the document (i.e., global context), these three factors are combined to make a final prediction INLINEFORM0 on whether the sentence should be included in the summary. We consider two ways in which these three factors can be combined.",
"Concatenation We can simply concatenate the vectors of these three factors as, INLINEFORM0 ",
"where sentence INLINEFORM0 is part of the topic INLINEFORM1 , and INLINEFORM2 is the representation of sentence INLINEFORM3 with topic segment information and global context information.",
"Attentive context As local context and global context are all contextual information of the given sentence, we use an attention mechanism to decide the weight of each context vector, represented as INLINEFORM0 ",
"where the INLINEFORM0 is the weighted context vector of each sentence INLINEFORM1 , and assume sentence INLINEFORM2 is in topic INLINEFORM3 .",
"Then there is a final multi-layer perceptron(MLP) followed with a sigmoid activation function indicating the confidence score for selecting each sentence: INLINEFORM0 "
],
[
"To validate our method, we set up experiments on the two scientific paper datasets (arXiv and PubMed). With ROUGE and METEOR scores as automatic evaluation metrics, we compare with previous works, both abstractive and extractive."
],
[
"The weighted negative log-likelihood is minimized, where the weight is computed as INLINEFORM0 , to solve the problem of highly imbalanced data (typical in extractive summarization). INLINEFORM1 ",
"where INLINEFORM0 represent the ground-truth label of sentence INLINEFORM1 , with INLINEFORM2 meaning sentence INLINEFORM3 is in the gold-standard extract summary."
],
[
"In the Pubmed and arXiv datasets, the extractive summaries are missing. So we follow the work of BIBREF18 on extractive summary labeling, constructing gold label sequences by greedily optimizing ROUGE-1 on the gold-standard abstracts, which are available for each article. The algorithm is shown in Appendix A."
],
[
"We train our model using the Adam optimizer BIBREF25 with learning rate INLINEFORM0 and a drop out rate of 0.3. We use a mini-batch with a batch size of 32 documents, and the size of the GRU hidden states is 300. For word embeddings, we use GloVe BIBREF26 with dimension 300, pre-trained on the Wikipedia and Gigaword. The vocabulary size of our model is 50000. All the above parameters were set based on BIBREF18 without any fine-tuning. Again following BIBREF18 , we train each model for 50 epochs, and the best model is selected with early stopping on the validation set according to Rouge-2 F-score."
],
[
"We perform a systematic comparison with previous work in extractive summarization. For completeness, we also compare with recent neural abstractive approaches. In all the experiments, we use the same train/val/test splitting.",
"Traditional extractive summarization models: SumBasic BIBREF27 , LSA BIBREF28 , and LexRank BIBREF29 ",
"Neural abstractive summarization models: Attn-Seq2Seq BIBREF1 , Pntr-Gen-Seq2Seq BIBREF16 and Discourse-aware BIBREF9 ",
"Neural extractive summarization models: Cheng&Lapata BIBREF2 and SummaRuNNer BIBREF17 . Based on BIBREF18 , we use the Average Word Encoder as sentence encoder for both models, instead of the CNN and RNN sentence encoders that were originally used in the two systems, respectively. ",
"Baseline: Similar to our model, but without local context and global context, i.e. the input to MLP is the sentence representation only.",
"Lead: Given a length limit of INLINEFORM0 words for the summary, Lead will return the first INLINEFORM1 words of the source document.",
"Oracle: uses the Gold Standard extractive labels, generated based on ROUGE (Sec. 4.2)."
],
[
"For evaluation, we follow the same procedure as in BIBREF18 . Summaries are generated by selecting the top ranked sentences by model probability INLINEFORM0 , until the length limit is met or exceeded. Based on the average length of abstracts in these two datasets, we set the length limit to 200 words. We use ROUGE scores BIBREF30 and METEOR scores BIBREF31 between the model results and ground-truth abstractive summaries as evaluation metric. The unigram and bigram overlap (ROUGE-1,2) are intended to measure the informativeness, while longest common subsequence (ROUGE-L) captures fluency to some extent BIBREF2 . METEOR was originally proposed to evaluate translation systems by measuring the alignment between the system output and reference translations. As such, it can also be used as an automatic evaluation metric for summarization BIBREF18 .",
"The performance of all models on arXiv and Pubmed is shown in Table TABREF28 and Table TABREF29 , respectively. Follow the work BIBREF18 , we use the approximate randomization as the statistical significance test method BIBREF32 with a Bonferroni correction for multiple comparisons, at the confidence level 0.01 ( INLINEFORM0 ). As we can see in these tables, on both datasets, the neural extractive models outperforms the traditional extractive models on informativeness (ROUGE-1,2) by a wide margin, but results are mixed on ROUGE-L. Presumably, this is due to the neural training process, which relies on a goal standard based on ROUGE-1. Exploring other training schemes and/or a combination of traditional and neural approaches is left as future work. Similarly, the neural extractive models also dominate the neural abstractive models on ROUGE-1,2, but these abstractive models tend to have the highest ROUGE-L scores, possibly because they are trained directly on gold standard abstract summaries.",
"Compared with other neural extractive models, our models (both with attentive context and concatenation decoder) have better performances on all three ROUGE scores, as well as METEOR. In particular, the improvements over the Baseline model show that a combination of local and global contextual information does help to identify the most important sentences (more on this in the next section). Interestingly, just the Baseline model already achieves a slightly better performance than previous works; possibly because the auto-regressive approach used in those models is even more detrimental for long documents.",
"Figure FIGREF32 shows the most important result of our analysis: the benefits of our method, explicitly designed to deal with longer documents, do actually become stronger as we apply it to longer documents. As it can be seen in Figure FIGREF32 , the performance gain of our model with respect to current state-of-the-art extractive summarizer is more pronounced for documents with INLINEFORM0 words in both datasets.",
"Finally, the result of Lead (Table TABREF28 , TABREF29 ) shows that scientific papers have less position bias than news; i.e., the first sentences of these papers are not a good choice to form an extractive summary.",
"As a teaser for the potential and challenges that still face our approach, its output (i.e., the extracted sentences) when applied to this paper is colored in red and the order in which the sentences are extracted is marked with the Roman numbering. If we set the summary length limit to the length of our abstract, the first five sentences in the conclusions section are extracted. If we increase the length to 200 words, two more sentences are extracted, which do seem to provide useful complementary information. Not surprisingly, some redundancy is present, as dealing explicitly with redundancy is not a goal of our current proposal and left as future work."
],
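A minimal sketch of the summary-generation step referenced above: sentences are picked in decreasing order of model probability until the 200-word limit is met or exceeded. Returning the selected sentences in document order is an assumption, as the output order is not stated.

```python
def generate_summary(sentences, scores, length_limit=200):
    """Select sentences in decreasing order of model probability until the word limit is met or exceeded."""
    ranked = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)
    chosen, n_words = [], 0
    for i in ranked:
        chosen.append(i)
        n_words += len(sentences[i].split())
        if n_words >= length_limit:
            break
    # Assumption: the selected sentences are returned in document order.
    return [sentences[i] for i in sorted(chosen)]
```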
[
"In order to assess the relative contributions of the global and local models to the performance of our approach, we ran an ablation study. This was done for each dataset both with the whole test set, as well as with a subset of long documents. The results for Pubmed and arXiv are shown in Table TABREF34 and Table TABREF35 , respectively. For statistical significance, as it was done for the general results in Section 4.5, we use the approximate randomization method BIBREF32 with the Bonferroni correction at ( INLINEFORM0 ).",
"From these tables, we can see that on both datasets the performance significantly improves when local topic information (i.e. local context) is added. And the improvement is even greater when we only consider long documents. Rather surprisingly, this is not the case for the global context. Adding a representation of the whole document (i.e. global context) never significantly improves performance. In essence, it seems that all the benefits of our model come exclusively from modeling the local context, even for the longest documents. Further investigation of this finding is left as future work."
],
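A generic sketch of the approximate randomization significance test mentioned above, for paired per-document scores; the exact aggregation used in the cited implementation may differ, so treat this as an illustration of the idea rather than the official test script.

```python
import random

def approximate_randomization(scores_a, scores_b, trials=10000, seed=0):
    """Two-sided approximate randomization test for paired per-document scores.

    Randomly swaps the two systems' scores on each document and counts how often the
    absolute difference in means is at least as large as the observed one. With a
    Bonferroni correction, compare the returned p-value against alpha / #comparisons.
    """
    rng = random.Random(seed)
    mean = lambda xs: sum(xs) / len(xs)
    observed = abs(mean(scores_a) - mean(scores_b))
    at_least_as_extreme = 0
    for _ in range(trials):
        shuffled_a, shuffled_b = [], []
        for a, b in zip(scores_a, scores_b):
            if rng.random() < 0.5:
                a, b = b, a
            shuffled_a.append(a)
            shuffled_b.append(b)
        if abs(mean(shuffled_a) - mean(shuffled_b)) >= observed:
            at_least_as_extreme += 1
    return (at_least_as_extreme + 1) / (trials + 1)
```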
[
"purpleIn this paper, we propose a novel extractive summarization model especially designed for long documents, by incorporating the local context within each topic, along with the global context of the whole document.[2] purpleOur approach integrates recent findings on neural extractive summarization in a parameter lean and modular architecture.[3] purpleWe evaluate our model and compare with previous works in both extractive and abstractive summarization on two large scientific paper datasets, which contain documents that are much longer than in previously used corpora.[4] purpleOur model not only achieves state-of-the-art on these two datasets, but in an additional experiment, in which we consider documents with increasing length, it becomes more competitive for longer documents.[5] purpleWe also ran an ablation study to assess the relative contribution of the global and local components of our approach. [1] Rather surprisingly, it appears that the benefits of our model come only from modeling the local context.",
"For future work, we initially intend to investigate neural methods to deal with redundancy. Then, it could be beneficial to integrate explicit features, like sentence position and salience, into our neural approach. More generally, we plan to combine of traditional and neural models, as suggested by our results. Furthermore, we would like to explore more sophistical structure of documents, like discourse tree, instead of rough topic segments. As for evaluation, we would like to elicit human judgments, for instance by inviting authors to rate the outputs from different systems, when applied to their own papers. More long term, we will study how extractive/abstractive techniques can be integrated; for instance, the output of an extractive system could be fed into an abstractive one, training the two jointly."
],
[
"This research was supported by the Language & Speech Innovation Lab of Cloud BU, Huawei Technologies Co., Ltd. Extractive Label Generation The algorithm SECREF6 is used to generate the extractive labels based on the human-made abstractive summaries, i.e. abstracts of scientific papers. Extractive label generation LabelGenerationReference,sentences,lengthLimit INLINEFORM0 = ” INLINEFORM1 = 0 INLINEFORM2 = [] INLINEFORM3 INLINEFORM4 INLINEFORM5 INLINEFORM6 in range(len( INLINEFORM7 )) INLINEFORM8 INLINEFORM9 INLINEFORM10 INLINEFORM11 INLINEFORM12 != INLINEFORM13 INLINEFORM14 .append( INLINEFORM15 ) INLINEFORM16 = INLINEFORM17 + INLINEFORM18 [ INLINEFORM19 ] INLINEFORM20 += NumberOfWords( INLINEFORM21 [ INLINEFORM22 ]) break INLINEFORM23 "
]
],
"section_name": [
"Introduction",
"Extractive summarization",
"Extractive summarization on Scientific papers",
"Datasets for long documents",
"Neural Abstractive summarization on long documents",
"LSTM-Minus",
"Our Model",
"Sentence Encoder",
"Document Encoder",
"Decoder",
"Experiments",
"Training",
"Extractive Label Generation",
"Implementation Details",
"Models for Comparison",
"Results and Analysis",
"Ablation Study",
"Conclusions and Future Work",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"73d928ae2c39dcc5f533eb8a40c4f0c5f81589f7",
"f4bf3e7ae26594e5b39d200a92df24b1b9c0e588"
],
"answer": [
{
"evidence": [
"The performance of all models on arXiv and Pubmed is shown in Table TABREF28 and Table TABREF29 , respectively. Follow the work BIBREF18 , we use the approximate randomization as the statistical significance test method BIBREF32 with a Bonferroni correction for multiple comparisons, at the confidence level 0.01 ( INLINEFORM0 ). As we can see in these tables, on both datasets, the neural extractive models outperforms the traditional extractive models on informativeness (ROUGE-1,2) by a wide margin, but results are mixed on ROUGE-L. Presumably, this is due to the neural training process, which relies on a goal standard based on ROUGE-1. Exploring other training schemes and/or a combination of traditional and neural approaches is left as future work. Similarly, the neural extractive models also dominate the neural abstractive models on ROUGE-1,2, but these abstractive models tend to have the highest ROUGE-L scores, possibly because they are trained directly on gold standard abstract summaries.",
"FLOAT SELECTED: Table 4.1: Results on the arXiv dataset. For models with an ∗, we report results from [8]. Models are traditional extractive in the first block, neural abstractive in the second block, while neural extractive in the third block. The Oracle (last row) corresponds to using the ground truth labels, obtained (for training) by the greedy algorithm, see Section 4.1.2. Results that are not significantly distinguished from the best systems are bold.",
"FLOAT SELECTED: Table 4.2: Results on the Pubmed dataset. For models with an ∗, we report results from [8]. See caption of Table 4.1 above for details on compared models. Results that are not significantly distinguished from the best systems are bold."
],
"extractive_spans": [],
"free_form_answer": "Best proposed model result vs best previous result:\nArxiv dataset: Rouge 1 (43.62 vs 42.81), Rouge L (29.30 vs 31.80), Meteor (21.78 vs 21.35)\nPubmed dataset: Rouge 1 (44.85 vs 44.29), Rouge L (31.48 vs 35.21), Meteor (20.83 vs 20.56)",
"highlighted_evidence": [
"The performance of all models on arXiv and Pubmed is shown in Table TABREF28 and Table TABREF29 , respectively.",
"As we can see in these tables, on both datasets, the neural extractive models outperforms the traditional extractive models on informativeness (ROUGE-1,2) by a wide margin, but results are mixed on ROUGE-L.",
"FLOAT SELECTED: Table 4.1: Results on the arXiv dataset. For models with an ∗, we report results from [8]. Models are traditional extractive in the first block, neural abstractive in the second block, while neural extractive in the third block. The Oracle (last row) corresponds to using the ground truth labels, obtained (for training) by the greedy algorithm, see Section 4.1.2. Results that are not significantly distinguished from the best systems are bold.",
"FLOAT SELECTED: Table 4.2: Results on the Pubmed dataset. For models with an ∗, we report results from [8]. See caption of Table 4.1 above for details on compared models. Results that are not significantly distinguished from the best systems are bold."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 4.1: Results on the arXiv dataset. For models with an ∗, we report results from [8]. Models are traditional extractive in the first block, neural abstractive in the second block, while neural extractive in the third block. The Oracle (last row) corresponds to using the ground truth labels, obtained (for training) by the greedy algorithm, see Section 4.1.2. Results that are not significantly distinguished from the best systems are bold.",
"FLOAT SELECTED: Table 4.2: Results on the Pubmed dataset. For models with an ∗, we report results from [8]. See caption of Table 4.1 above for details on compared models. Results that are not significantly distinguished from the best systems are bold."
],
"extractive_spans": [],
"free_form_answer": "On arXiv dataset, the proposed model outperforms baselie model by (ROUGE-1,2,L) 0.67 0.72 0.77 respectively and by Meteor 0.31.\n",
"highlighted_evidence": [
"FLOAT SELECTED: Table 4.1: Results on the arXiv dataset. For models with an ∗, we report results from [8]. Models are traditional extractive in the first block, neural abstractive in the second block, while neural extractive in the third block. The Oracle (last row) corresponds to using the ground truth labels, obtained (for training) by the greedy algorithm, see Section 4.1.2. Results that are not significantly distinguished from the best systems are bold.",
"FLOAT SELECTED: Table 4.2: Results on the Pubmed dataset. For models with an ∗, we report results from [8]. See caption of Table 4.1 above for details on compared models. Results that are not significantly distinguished from the best systems are bold."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"47dccc8eb83e3ef619e7658762619e7af339f5c4",
"6015fe9fd722bb6df402909ca8a7898fc35e99df"
],
"answer": [
{
"evidence": [
"In contrast, in order to exploit section information, in this paper we propose to capture a distributed representation of both the global (the whole document) and the local context (e.g., the section/topic) when deciding if a sentence should be included in the summary"
],
"extractive_spans": [
"global (the whole document)",
"local context (e.g., the section/topic)"
],
"free_form_answer": "",
"highlighted_evidence": [
"In contrast, in order to exploit section information, in this paper we propose to capture a distributed representation of both the global (the whole document) and the local context (e.g., the section/topic) when deciding if a sentence should be included in the summary"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In contrast, in order to exploit section information, in this paper we propose to capture a distributed representation of both the global (the whole document) and the local context (e.g., the section/topic) when deciding if a sentence should be included in the summary"
],
"extractive_spans": [
"global (the whole document) and the local context (e.g., the section/topic) "
],
"free_form_answer": "",
"highlighted_evidence": [
"In contrast, in order to exploit section information, in this paper we propose to capture a distributed representation of both the global (the whole document) and the local context (e.g., the section/topic) when deciding if a sentence should be included in the summary"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"infinity",
"infinity"
],
"paper_read": [
"no",
"no"
],
"question": [
"How much does their model outperform existing models?",
"What do they mean by global and local context?"
],
"question_id": [
"de5b6c25e35b3a6c5e40e350fc5e52c160b33490",
"b66c9a4021b6c8529cac1a2b54dacd8ec79afa5f"
],
"question_writer": [
"798ee385d7c8105b83b032c7acc2347588e09d61",
"798ee385d7c8105b83b032c7acc2347588e09d61"
],
"search_query": [
"summarization",
"summarization"
],
"topic_background": [
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 2.1: One of the extractor compared in [19], (a) is a simple RNN model, model (b) is an attention-based encoder-decoder model",
"Figure 2.2: One of the extractor compared in [19], model (c) is the extractor proposed in [4], model (d) is the extractor proposed in [27].",
"Table 2.1: Comparison of news datasets, scientific paper datasets and the recently proposed patent dataset, [8][36]",
"Figure 2.3: The structure of Cohan’s model of discourse-aware abstractive summarization [8]",
"Figure 2.4: LSTM-Minus in dependency parsing.",
"Figure 2.5: The supervised topic segmentation model proposed in [21]",
"Figure 2.6: The approximate randomization statistical significance test. [31]",
"Figure 3.1: The structure of our model, sei,sri represent the sentence embedding and sentence representation of sentence i, respectively. The binary decision of whether the sentence should be included in the summary is based on the sentence itself (A), the whole document (B) and the current topic (C). The document representation is simply the concatenation of the last hidden states of the forward and backward RNNs, while the topic segment representation is computed by applying LSTM-Minus, as the details shown in Fig 3.2",
"Figure 3.2: Detail of C, the topic segment representation is computed by applying LSTM-Minus. The RNN in red rectangle is the Document Encoder, the same as the one in the red rectangle in Fig. 3.1",
"Table 4.1: Results on the arXiv dataset. For models with an ∗, we report results from [8]. Models are traditional extractive in the first block, neural abstractive in the second block, while neural extractive in the third block. The Oracle (last row) corresponds to using the ground truth labels, obtained (for training) by the greedy algorithm, see Section 4.1.2. Results that are not significantly distinguished from the best systems are bold.",
"Table 4.2: Results on the Pubmed dataset. For models with an ∗, we report results from [8]. See caption of Table 4.1 above for details on compared models. Results that are not significantly distinguished from the best systems are bold.",
"Table 4.3: Percentage relative improvement of our model, when compared with the SummaRuNNer (SR) and Baseline (BSL) models on both datasets (first and second block). The third block shows Macro average relative improvement across the two datasets .",
"Figure 4.1: A Comparison between our model, SummaRuNNer and Oracle when applied to documents with increasing length, left-up: ROUGE-1 on Pubmed dataset, right-up: ROUGE-2 on Pubmed dataset, left-down: ROUGE-1 on arXiv dataset, right-down: ROUGE-2 on arXiv dataset",
"Figure 4.2: The relative position in documents of our predicted sentences, oracle sentences, and the section borders, and the documents are sampled uniformly from the highest ROUGE score(left) to lowest ROUGE score(right). The upper figure shows the position distribution of Pubmed, and the lower one shows the position distribution of arXiv.",
"Table 4.4: Ablation study on the Pubmed dataset. Baseline is the model with sentence representation only, Baseline+segment is the model with sentence and local topic information, Baseline+doc is the model with sentence and global document information, and the last one is the full model with concatenation decoder. Results that are not significantly distinguished from the best systems are bold.",
"Table 4.5: Ablation study on the arXiv dataset. The model descriptions refer to Table 4.4. Results that are not significantly distinguished from the best systems are bold.",
"Table 4.6: Results on the Bigpatent-A dataset.",
"Figure 4.3: A Comparison between our model, SummaRuNNer and Oracle when applied to documents with increasing length on Bigpatent-a dataset"
],
"file": [
"16-Figure2.1-1.png",
"16-Figure2.2-1.png",
"18-Table2.1-1.png",
"20-Figure2.3-1.png",
"21-Figure2.4-1.png",
"23-Figure2.5-1.png",
"24-Figure2.6-1.png",
"26-Figure3.1-1.png",
"27-Figure3.2-1.png",
"34-Table4.1-1.png",
"35-Table4.2-1.png",
"36-Table4.3-1.png",
"37-Figure4.1-1.png",
"38-Figure4.2-1.png",
"39-Table4.4-1.png",
"39-Table4.5-1.png",
"40-Table4.6-1.png",
"41-Figure4.3-1.png"
]
} | [
"How much does their model outperform existing models?"
] | [
[
"1909.08089-34-Table4.1-1.png",
"1909.08089-35-Table4.2-1.png",
"1909.08089-Results and Analysis-1"
]
] | [
"On arXiv dataset, the proposed model outperforms baselie model by (ROUGE-1,2,L) 0.67 0.72 0.77 respectively and by Meteor 0.31.\n"
] | 54 |
1910.09982 | Findings of the NLP4IF-2019 Shared Task on Fine-Grained Propaganda Detection | We present the shared task on Fine-Grained Propaganda Detection, which was organized as part of the NLP4IF workshop at EMNLP-IJCNLP 2019. There were two subtasks. FLC is a fragment-level task that asks for the identification of propagandist text fragments in a news article and also for the prediction of the specific propaganda technique used in each such fragment (18-way classification task). SLC is a sentence-level binary classification task asking to detect the sentences that contain propaganda. A total of 12 teams submitted systems for the FLC task, 25 teams did so for the SLC task, and 14 teams eventually submitted a system description paper. For both subtasks, most systems managed to beat the baseline by a sizable margin. The leaderboard and the data from the competition are available at this http URL. | {
"paragraphs": [
[
"Propaganda aims at influencing people's mindset with the purpose of advancing a specific agenda. In the Internet era, thanks to the mechanism of sharing in social networks, propaganda campaigns have the potential of reaching very large audiences BIBREF0, BIBREF1, BIBREF2.",
"Propagandist news articles use specific techniques to convey their message, such as whataboutism, red Herring, and name calling, among many others (cf. Section SECREF3). Whereas proving intent is not easy, we can analyse the language of a claim/article and look for the use of specific propaganda techniques. Going at this fine-grained level can yield more reliable systems and it also makes it possible to explain to the user why an article was judged as propagandist by an automatic system.",
"With this in mind, we organised the shared task on fine-grained propaganda detection at the NLP4IF@EMNLP-IJCNLP 2019 workshop. The task is based on a corpus of news articles annotated with an inventory of 18 propagandist techniques at the fragment level. We hope that the corpus would raise interest outside of the community of researchers studying propaganda. For example, the techniques related to fallacies and the ones relying on emotions might provide a novel setting for researchers interested in Argumentation and Sentiment Analysis."
],
[
"Propaganda has been tackled mostly at the article level. BIBREF3 created a corpus of news articles labelled as propaganda, trusted, hoax, or satire. BIBREF4 experimented with a binarized version of that corpus: propaganda vs. the other three categories. BIBREF5 annotated a large binary corpus of propagandist vs. non-propagandist articles and proposed a feature-based system for discriminating between them. In all these cases, the labels were obtained using distant supervision, assuming that all articles from a given news outlet share the label of that outlet, which inevitably introduces noise BIBREF6.",
"A related field is that of computational argumentation which, among others, deals with some logical fallacies related to propaganda. BIBREF7 presented a corpus of Web forum discussions with instances of ad hominem fallacy. BIBREF8, BIBREF9 introduced Argotario, a game to educate people to recognize and create fallacies, a by-product of which is a corpus with $1.3k$ arguments annotated with five fallacies such as ad hominem, red herring and irrelevant authority, which directly relate to propaganda.",
"Unlike BIBREF8, BIBREF9, BIBREF7, our corpus uses 18 techniques annotated on the same set of news articles. Moreover, our annotations aim at identifying the minimal fragments related to a technique instead of flagging entire arguments.",
"The most relevant related work is our own, which is published in parallel to this paper at EMNLP-IJCNLP 2019 BIBREF10 and describes a corpus that is a subset of the one used for this shared task."
],
[
"Propaganda uses psychological and rhetorical techniques to achieve its objective. Such techniques include the use of logical fallacies and appeal to emotions. For the shared task, we use 18 techniques that can be found in news articles and can be judged intrinsically, without the need to retrieve supporting information from external resources. We refer the reader to BIBREF10 for more details on the propaganda techniques; below we report the list of techniques:"
],
[
"Using words/phrases with strong emotional implications (positive or negative) to influence an audience BIBREF11."
],
[
"Labeling the object of the propaganda as something the target audience fears, hates, finds undesirable or otherwise loves or praises BIBREF12."
],
[
"Repeating the same message over and over again, so that the audience will eventually accept it BIBREF13, BIBREF12."
],
[
"Either representing something in an excessive manner: making things larger, better, worse, or making something seem less important or smaller than it actually is BIBREF14, e.g., saying that an insult was just a joke."
],
[
"Questioning the credibility of someone or something."
],
[
"Seeking to build support for an idea by instilling anxiety and/or panic in the population towards an alternative, possibly based on preconceived judgments."
],
[
"Playing on strong national feeling (or with respect to a group, e.g., race, gender, political preference) to justify or promote an action or idea BIBREF15."
],
[
"Assuming one cause when there are multiple causes behind an issue. We include scapegoating as well: the transfer of the blame to one person or group of people without investigating the complexities of an issue."
],
[
"A brief and striking phrase that may include labeling and stereotyping. Slogans tend to act as emotional appeals BIBREF16."
],
[
"Stating that a claim is true simply because a valid authority/expert on the issue supports it, without any other supporting evidence BIBREF17. We include the special case where the reference is not an authority/expert, although it is referred to as testimonial in the literature BIBREF14."
],
[
"Presenting two alternative options as the only possibilities, when in fact more possibilities exist BIBREF13. As an extreme case, telling the audience exactly what actions to take, eliminating any other possible choice (dictatorship)."
],
[
"Words or phrases that discourage critical thought and meaningful discussion about a given topic. They are typically short and generic sentences that offer seemingly simple answers to complex questions or that distract attention away from other lines of thought BIBREF18."
],
[
"Discredit an opponent's position by charging them with hypocrisy without directly disproving their argument BIBREF19."
],
[
"Persuading an audience to disapprove an action or idea by suggesting that the idea is popular with groups hated in contempt by the target audience. It can refer to any person or concept with a negative connotation BIBREF20."
],
[
"Introducing irrelevant material to the issue being discussed, so that everyone's attention is diverted away from the points made BIBREF11. Those subjected to a red herring argument are led away from the issue that had been the focus of the discussion and urged to follow an observation or claim that may be associated with the original claim, but is not highly relevant to the issue in dispute BIBREF20."
],
[
"Attempting to persuade the target audience to join in and take the course of action because “everyone else is taking the same action” BIBREF15."
],
[
"Using deliberately unclear words, to let the audience have its own interpretation BIBREF21, BIBREF11. For instance, when an unclear phrase with multiple possible meanings is used within the argument and, therefore, it does not really support the conclusion."
],
[
"When an opponent's proposition is substituted with a similar one which is then refuted in place of the original BIBREF22."
],
[
"The shared task features two subtasks:"
],
[
"Given a news article, detect all spans of the text in which a propaganda technique is used. In addition, for each span the propaganda technique applied must be identified."
],
[
"A sentence is considered propagandist if it contains at least one propagandist fragment. We then define a binary classification task in which, given a sentence, the correct label, either propaganda or non-propaganda, is to be predicted."
],
[
"The input for both tasks consists of news articles in free-text format, collected from 36 propagandist and 12 non-propagandist news outlets and then annotated by professional annotators. More details about the data collection and the annotation, as well as statistics about the corpus can be found in BIBREF10, where an earlier version of the corpus is described, which includes 450 news articles. We further annotated 47 additional articles for the purpose of the shared task using the same protocol and the same annotators.",
"The training, the development, and the test partitions of the corpus used for the shared task consist of 350, 61, and 86 articles and of 16,965, 2,235, and 3,526 sentences, respectively. Figure FIGREF15 shows an annotated example, which contains several propaganda techniques. For example, the fragment babies on line 1 is an instance of both Name_Calling and Labeling. Note that the fragment not looking as though Trump killed his grandma on line 4 is an instance of Exaggeration_or_Minimisation and it overlaps with the fragment killed his grandma, which is an instance of Loaded_Language.",
"Table TABREF23 reports the total number of instances per technique and the percentage with respect to the total number of annotations, for the training and for the development sets."
],
[
"The shared task had two phases: In the development phase, the participants were provided labeled training and development datasets; in the testing phase, testing input was further provided.",
"The participants tried to achieve the best performance on the development set. A live leaderboard kept track of the submissions.",
"The test set was released and the participants had few days to make final predictions.",
"In phase 2, no immediate feedback on the submissions was provided. The winner was determined based on the performance on the test set."
],
[
"FLC is a composition of two subtasks: the identification of the propagandist text fragments and the identification of the techniques used (18-way classification task). While F$_1$ measure is appropriate for a multi-class classification task, we modified it to account for partial matching between the spans; see BIBREF10 for more details. We further computed an F$_1$ value for each propaganda technique (not shown below for the sake of saving space, but available on the leaderboard)."
],
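One possible way to score partial matches between labelled spans, in the spirit of the modified F$_1$ described above; the official definition is given in BIBREF10, so the normalization used here is only an assumption for illustration.

```python
def _overlap(a, b):
    """Number of character positions shared by two (start, end) intervals."""
    return max(0, min(a[1], b[1]) - max(a[0], b[0]))

def partial_match_f1(predicted, gold):
    """Precision/recall/F1 over labelled spans with fractional credit for partial overlaps.

    `predicted` and `gold` are lists of (start, end, technique) triples. Each span is
    credited with its best overlap against spans of the same technique on the other side,
    normalised by its own length (an assumption; the official scorer follows BIBREF10).
    """
    def credit(spans, references):
        total = 0.0
        for start, end, tech in spans:
            length = max(end - start, 1)
            best = max((_overlap((start, end), (s, e)) for s, e, t in references if t == tech),
                       default=0)
            total += best / length
        return total / max(len(spans), 1)

    precision = credit(predicted, gold)
    recall = credit(gold, predicted)
    return 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)
```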
[
"SLC is a binary classification task with imbalanced data. Therefore, the official evaluation measure for the task is the standard F$_1$ measure. We further report Precision and Recall."
],
[
"The baseline system for the SLC task is a very simple logistic regression classifier with default parameters, where we represent the input instances with a single feature: the length of the sentence. The performance of this baseline on the SLC task is shown in Tables TABREF33 and TABREF34.",
"The baseline for the FLC task generates spans and selects one of the 18 techniques randomly. The inefficacy of such a simple random baseline is illustrated in Tables TABREF36 and TABREF41."
],
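A minimal sketch of the SLC baseline described above: scikit-learn logistic regression with default parameters over a single sentence-length feature. Whether length is measured in characters or tokens is not stated, so character length is an assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def slc_length_baseline(train_sentences, train_labels, test_sentences):
    """Logistic regression with default parameters over a single feature: sentence length."""
    # Assumption: length is measured in characters; a token count would work the same way.
    x_train = np.array([[len(s)] for s in train_sentences], dtype=float)
    x_test = np.array([[len(s)] for s in test_sentences], dtype=float)
    clf = LogisticRegression()          # scikit-learn defaults, as stated above
    clf.fit(x_train, train_labels)
    return clf.predict(x_test)          # 1 = propaganda, 0 = non-propaganda
```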
[
"A total of 90 teams registered for the shared task, and 39 of them submitted predictions for a total of 3,065 submissions. For the FLC task, 21 teams made a total of 527 submissions, and for the SLC task, 35 teams made a total of 2,538 submissions.",
"Below, we give an overview of the approaches as described in the participants' papers. Tables TABREF28 and TABREF29 offer a high-level summary."
],
[
"Team newspeak BIBREF23 achieved the best results on the test set for the FLC task using 20-way word-level classification based on BERT BIBREF24: a word could belong to one of the 18 propaganda techniques, to none of them, or to an auxiliary (token-derived) class. The team fed one sentence at a time in order to reduce the workload. In addition to experimenting with an out-of-the-box BERT, they also tried unsupervised fine-tuning both on the 1M news dataset and on Wikipedia. Their best model was based on the uncased base model of BERT, with 12 Transformer layers BIBREF25, and 110 million parameters. Moreover, oversampling of the least represented classes proved to be crucial for the final performance. Finally, careful analysis has shown that the model pays special attention to adjectives and adverbs.",
"Team Stalin BIBREF26 focused on data augmentation to address the relatively small size of the data for fine-tuning contextual embedding representations based on ELMo BIBREF27, BERT, and Grover BIBREF28. The balancing of the embedding space was carried out by means of synthetic minority class over-sampling. Then, the learned representations were fed into an LSTM."
],
[
"Team CAUnLP BIBREF29 used two context-aware representations based on BERT. In the first representation, the target sentence is followed by the title of the article. In the second representation, the previous sentence is also added. They performed subsampling in order to deal with class imbalance, and experimented with BERT$_{BASE}$ and BERT$_{LARGE}$",
"Team LIACC BIBREF30 used hand-crafted features and pre-trained ELMo embeddings. They also observed a boost in performance when balancing the dataset by dropping some negative examples.",
"Team JUSTDeep BIBREF31 used a combination of models and features, including word embeddings based on GloVe BIBREF32 concatenated with vectors representing affection and lexical features. These were combined in an ensemble of supervised models: bi-LSTM, XGBoost, and variations of BERT.",
"Team YMJA BIBREF33 also based their approach on fine-tuned BERT. Inspired by kaggle competitions on sentiment analysis, they created an ensemble of models via cross-validation.",
"Team jinfen BIBREF34 used a logistic regression model fed with a manifold of representations, including TF.IDF and BERT vectors, as well as vocabularies and readability measures.",
"Team Tha3aroon BIBREF35 implemented an ensemble of three classifiers: two based on BERT and one based on a universal sentence encoder BIBREF36.",
"Team NSIT BIBREF37 explored three of the most popular transfer learning models: various versions of ELMo, BERT, and RoBERTa BIBREF38.",
"Team Mindcoders BIBREF39 combined BERT, Bi-LSTM and Capsule networks BIBREF40 into a single deep neural network and pre-trained the resulting network on corpora used for related tasks, e.g., emotion classification.",
"Finally, team ltuorp BIBREF41 used an attention transformer using BERT trained on Wikipedia and BookCorpus."
],
[
"Team MIC-CIS BIBREF42 participated in both tasks. For the sentence-level classification, they used a voting ensemble including logistic regression, convolutional neural networks, and BERT, in all cases using FastText embeddings BIBREF43 and pre-trained BERT models. Beside these representations, multiple features of readability, sentiment and emotions were considered. For the fragment-level task, they used a multi-task neural sequence tagger, based on LSTM-CRF BIBREF44, in conjunction with linguistic features. Finally, they applied sentence- and fragment-level models jointly.",
"Team CUNLP BIBREF45 considered two approaches for the sentence-level task. The first approach was based on fine-tuning BERT. The second approach complemented the fine-tuned BERT approach by feeding its decision into a logistic regressor, together with features from the Linguistic Inquiry and Word Count (LIWC) lexicon and punctuation-derived features. Similarly to BIBREF42, for the fragment-level problem they used a Bi-LSTM-CRF architecture, combining both character- and word-level embeddings.",
"Team ProperGander BIBREF46 also used BERT, but they paid special attention to the imbalance of the data, as well as to the differences between training and testing. They showed that augmenting the training data by oversampling yielded improvements when testing on data that is temporally far from the training (by increasing recall). In order to deal with the imbalance, they performed cost-sensitive classification, i.e., the errors on the smaller positive class were more costly. For the fragment-level classification, inspired by named entity recognition, they used a model based on BERT using Continuous Random Field stacked on top of an LSTM."
],
[
"The results on the test set for the SLC task are shown in Table TABREF33, while Table TABREF34 presents the results on the development set at the end of phase 1 (cf. Section SECREF6). The general decrease of the F$_1$ values between the development and the test set could indicate that systems tend to overfit on the development set. Indeed, the winning team ltuorp chose the parameters of their system both on the development set and on a subset of the training set in order to improve the robustness of their system.",
"Tables TABREF36 and TABREF41 report the results on the test and on the development sets for the FLC task. For this task, the results tend to be more stable across the two sets. Indeed, team newspeak managed to almost keep the same difference in performance with respect to team Antiganda. Note that team MIC-CIS managed to reach the third position despite never having submitted a run on the development set."
],
[
"We have described the NLP4IF@EMNLP-IJCNLP 2019 shared task on fine-grained propaganda identification. We received 25 and 12 submissions on the test set for the sentence-level classification and the fragment-level classification tasks, respectively. Overall, the sentence-level task was easier and most submitted systems managed to outperform the baseline. The fragment-level task proved to be much more challenging, with lower absolute scores, but most teams still managed to outperform the baseline.",
"We plan to make the schema and the dataset publicly available to be used beyond NLP4IF. We hope that the corpus would raise interest outside of the community of researchers studying propaganda: the techniques related to fallacies and the ones relying on emotions might provide a novel setting for researchers interested in Argumentation and Sentiment Analysis.",
"As a kind of advertisement, Task 11 at SemEval 2020 is a follow up of this shared task. It features two complimentary tasks:",
"Given a free-text article, identify the propagandist text spans.",
"Given a text span already flagged as propagandist and its context, identify the specific propaganda technique it contains.",
"This setting would allow participants to focus their efforts on binary sequence labeling for Task 1 and on multi-class classification for Task 2."
],
[
"This research is part of the Propaganda Analysis Project, which is framed within the Tanbih project. The Tanbih project aims to limit the effect of “fake news”, propaganda, and media bias by making users aware of what they are reading, thus promoting media literacy and critical thinking, which is arguably the best way to address disinformation and “fake news.” The project is developed in collaboration between the Qatar Computing Research Institute (QCRI), HBKU and the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).",
"The corpus for the task was annotated by A Data Pro, a company that performs high-quality manual annotations."
]
],
"section_name": [
"Introduction",
"Related Work",
"Propaganda Techniques",
"Propaganda Techniques ::: 1. Loaded language.",
"Propaganda Techniques ::: 2. Name calling or labeling.",
"Propaganda Techniques ::: 3. Repetition.",
"Propaganda Techniques ::: 4. Exaggeration or minimization.",
"Propaganda Techniques ::: 5. Doubt.",
"Propaganda Techniques ::: 6. Appeal to fear/prejudice.",
"Propaganda Techniques ::: 7. Flag-waving.",
"Propaganda Techniques ::: 8. Causal oversimplification.",
"Propaganda Techniques ::: 9. Slogans.",
"Propaganda Techniques ::: 10. Appeal to authority.",
"Propaganda Techniques ::: 11. Black-and-white fallacy, dictatorship.",
"Propaganda Techniques ::: 12. Thought-terminating cliché.",
"Propaganda Techniques ::: 13. Whataboutism.",
"Propaganda Techniques ::: 14. Reductio ad Hitlerum.",
"Propaganda Techniques ::: 15. Red herring.",
"Propaganda Techniques ::: 16. Bandwagon.",
"Propaganda Techniques ::: 17. Obfuscation, intentional vagueness, confusion.",
"Propaganda Techniques ::: 18. Straw man.",
"Tasks",
"Tasks ::: Fragment-Level Classification task (FLC).",
"Tasks ::: Sentence-Level Classification task (SLC).",
"Data",
"Setup",
"Evaluation ::: FLC task.",
"Evaluation ::: SLC task.",
"Baselines",
"Participants and Approaches",
"Participants and Approaches ::: Teams Participating in the Fragment-Level Classification Only",
"Participants and Approaches ::: Teams Participating in the Sentence-Level Classification Only",
"Participants and Approaches ::: Teams Participating in Both Tasks",
"Evaluation Results",
"Conclusion and Further Work",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"79fbb9f389ffe48f714fadb1ce7d1e9f675449d0",
"84572b16f9dfa7768225c0b385aef091f02de729"
],
"answer": [
{
"evidence": [
"Propaganda uses psychological and rhetorical techniques to achieve its objective. Such techniques include the use of logical fallacies and appeal to emotions. For the shared task, we use 18 techniques that can be found in news articles and can be judged intrinsically, without the need to retrieve supporting information from external resources. We refer the reader to BIBREF10 for more details on the propaganda techniques; below we report the list of techniques:",
"Propaganda Techniques ::: 1. Loaded language.",
"Using words/phrases with strong emotional implications (positive or negative) to influence an audience BIBREF11.",
"Propaganda Techniques ::: 2. Name calling or labeling.",
"Labeling the object of the propaganda as something the target audience fears, hates, finds undesirable or otherwise loves or praises BIBREF12.",
"Propaganda Techniques ::: 3. Repetition.",
"Repeating the same message over and over again, so that the audience will eventually accept it BIBREF13, BIBREF12.",
"Propaganda Techniques ::: 4. Exaggeration or minimization.",
"Either representing something in an excessive manner: making things larger, better, worse, or making something seem less important or smaller than it actually is BIBREF14, e.g., saying that an insult was just a joke.",
"Propaganda Techniques ::: 5. Doubt.",
"Questioning the credibility of someone or something.",
"Propaganda Techniques ::: 6. Appeal to fear/prejudice.",
"Seeking to build support for an idea by instilling anxiety and/or panic in the population towards an alternative, possibly based on preconceived judgments.",
"Propaganda Techniques ::: 7. Flag-waving.",
"Playing on strong national feeling (or with respect to a group, e.g., race, gender, political preference) to justify or promote an action or idea BIBREF15.",
"Propaganda Techniques ::: 8. Causal oversimplification.",
"Assuming one cause when there are multiple causes behind an issue. We include scapegoating as well: the transfer of the blame to one person or group of people without investigating the complexities of an issue.",
"Propaganda Techniques ::: 9. Slogans.",
"A brief and striking phrase that may include labeling and stereotyping. Slogans tend to act as emotional appeals BIBREF16.",
"Propaganda Techniques ::: 10. Appeal to authority.",
"Stating that a claim is true simply because a valid authority/expert on the issue supports it, without any other supporting evidence BIBREF17. We include the special case where the reference is not an authority/expert, although it is referred to as testimonial in the literature BIBREF14.",
"Propaganda Techniques ::: 11. Black-and-white fallacy, dictatorship.",
"Presenting two alternative options as the only possibilities, when in fact more possibilities exist BIBREF13. As an extreme case, telling the audience exactly what actions to take, eliminating any other possible choice (dictatorship).",
"Propaganda Techniques ::: 12. Thought-terminating cliché.",
"Words or phrases that discourage critical thought and meaningful discussion about a given topic. They are typically short and generic sentences that offer seemingly simple answers to complex questions or that distract attention away from other lines of thought BIBREF18.",
"Propaganda Techniques ::: 13. Whataboutism.",
"Discredit an opponent's position by charging them with hypocrisy without directly disproving their argument BIBREF19.",
"Propaganda Techniques ::: 14. Reductio ad Hitlerum.",
"Persuading an audience to disapprove an action or idea by suggesting that the idea is popular with groups hated in contempt by the target audience. It can refer to any person or concept with a negative connotation BIBREF20.",
"Propaganda Techniques ::: 15. Red herring.",
"Introducing irrelevant material to the issue being discussed, so that everyone's attention is diverted away from the points made BIBREF11. Those subjected to a red herring argument are led away from the issue that had been the focus of the discussion and urged to follow an observation or claim that may be associated with the original claim, but is not highly relevant to the issue in dispute BIBREF20.",
"Propaganda Techniques ::: 16. Bandwagon.",
"Attempting to persuade the target audience to join in and take the course of action because “everyone else is taking the same action” BIBREF15.",
"Propaganda Techniques ::: 17. Obfuscation, intentional vagueness, confusion.",
"Using deliberately unclear words, to let the audience have its own interpretation BIBREF21, BIBREF11. For instance, when an unclear phrase with multiple possible meanings is used within the argument and, therefore, it does not really support the conclusion.",
"Propaganda Techniques ::: 18. Straw man.",
"When an opponent's proposition is substituted with a similar one which is then refuted in place of the original BIBREF22."
],
"extractive_spans": [
"Loaded language",
"Name calling or labeling",
"Repetition",
"Exaggeration or minimization",
"Doubt",
"Appeal to fear/prejudice",
"Flag-waving",
"Causal oversimplification",
"Slogans",
" Appeal to authority",
"Black-and-white fallacy, dictatorship",
"Thought-terminating cliché",
"Whataboutism",
"Reductio ad Hitlerum",
"Red herring",
"Bandwagon",
"Obfuscation, intentional vagueness, confusion",
"Straw man"
],
"free_form_answer": "",
"highlighted_evidence": [
" We refer the reader to BIBREF10 for more details on the propaganda techniques; below we report the list of techniques:\n\nPropaganda Techniques ::: 1. Loaded language.\nUsing words/phrases with strong emotional implications (positive or negative) to influence an audience BIBREF11.\n\nPropaganda Techniques ::: 2. Name calling or labeling.\nLabeling the object of the propaganda as something the target audience fears, hates, finds undesirable or otherwise loves or praises BIBREF12.\n\nPropaganda Techniques ::: 3. Repetition.\nRepeating the same message over and over again, so that the audience will eventually accept it BIBREF13, BIBREF12.\n\nPropaganda Techniques ::: 4. Exaggeration or minimization.\nEither representing something in an excessive manner: making things larger, better, worse, or making something seem less important or smaller than it actually is BIBREF14, e.g., saying that an insult was just a joke.\n\nPropaganda Techniques ::: 5. Doubt.\nQuestioning the credibility of someone or something.\n\nPropaganda Techniques ::: 6. Appeal to fear/prejudice.\nSeeking to build support for an idea by instilling anxiety and/or panic in the population towards an alternative, possibly based on preconceived judgments.\n\nPropaganda Techniques ::: 7. Flag-waving.\nPlaying on strong national feeling (or with respect to a group, e.g., race, gender, political preference) to justify or promote an action or idea BIBREF15.\n\nPropaganda Techniques ::: 8. Causal oversimplification.\nAssuming one cause when there are multiple causes behind an issue. We include scapegoating as well: the transfer of the blame to one person or group of people without investigating the complexities of an issue.\n\nPropaganda Techniques ::: 9. Slogans.\nA brief and striking phrase that may include labeling and stereotyping. Slogans tend to act as emotional appeals BIBREF16.\n\nPropaganda Techniques ::: 10. Appeal to authority.\nStating that a claim is true simply because a valid authority/expert on the issue supports it, without any other supporting evidence BIBREF17. We include the special case where the reference is not an authority/expert, although it is referred to as testimonial in the literature BIBREF14.\n\nPropaganda Techniques ::: 11. Black-and-white fallacy, dictatorship.\nPresenting two alternative options as the only possibilities, when in fact more possibilities exist BIBREF13. As an extreme case, telling the audience exactly what actions to take, eliminating any other possible choice (dictatorship).\n\nPropaganda Techniques ::: 12. Thought-terminating cliché.\nWords or phrases that discourage critical thought and meaningful discussion about a given topic. They are typically short and generic sentences that offer seemingly simple answers to complex questions or that distract attention away from other lines of thought BIBREF18.\n\nPropaganda Techniques ::: 13. Whataboutism.\nDiscredit an opponent's position by charging them with hypocrisy without directly disproving their argument BIBREF19.\n\nPropaganda Techniques ::: 14. Reductio ad Hitlerum.\nPersuading an audience to disapprove an action or idea by suggesting that the idea is popular with groups hated in contempt by the target audience. It can refer to any person or concept with a negative connotation BIBREF20.\n\nPropaganda Techniques ::: 15. Red herring.\nIntroducing irrelevant material to the issue being discussed, so that everyone's attention is diverted away from the points made BIBREF11. 
Those subjected to a red herring argument are led away from the issue that had been the focus of the discussion and urged to follow an observation or claim that may be associated with the original claim, but is not highly relevant to the issue in dispute BIBREF20.\n\nPropaganda Techniques ::: 16. Bandwagon.\nAttempting to persuade the target audience to join in and take the course of action because “everyone else is taking the same action” BIBREF15.\n\nPropaganda Techniques ::: 17. Obfuscation, intentional vagueness, confusion.\nUsing deliberately unclear words, to let the audience have its own interpretation BIBREF21, BIBREF11. For instance, when an unclear phrase with multiple possible meanings is used within the argument and, therefore, it does not really support the conclusion.\n\nPropaganda Techniques ::: 18. Straw man.\nWhen an opponent's proposition is substituted with a similar one which is then refuted in place of the original BIBREF22."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Propaganda uses psychological and rhetorical techniques to achieve its objective. Such techniques include the use of logical fallacies and appeal to emotions. For the shared task, we use 18 techniques that can be found in news articles and can be judged intrinsically, without the need to retrieve supporting information from external resources. We refer the reader to BIBREF10 for more details on the propaganda techniques; below we report the list of techniques:",
"Propaganda Techniques ::: 1. Loaded language.",
"Using words/phrases with strong emotional implications (positive or negative) to influence an audience BIBREF11.",
"Propaganda Techniques ::: 2. Name calling or labeling.",
"Labeling the object of the propaganda as something the target audience fears, hates, finds undesirable or otherwise loves or praises BIBREF12.",
"Propaganda Techniques ::: 3. Repetition.",
"Repeating the same message over and over again, so that the audience will eventually accept it BIBREF13, BIBREF12.",
"Propaganda Techniques ::: 4. Exaggeration or minimization.",
"Either representing something in an excessive manner: making things larger, better, worse, or making something seem less important or smaller than it actually is BIBREF14, e.g., saying that an insult was just a joke.",
"Propaganda Techniques ::: 5. Doubt.",
"Questioning the credibility of someone or something.",
"Propaganda Techniques ::: 6. Appeal to fear/prejudice.",
"Seeking to build support for an idea by instilling anxiety and/or panic in the population towards an alternative, possibly based on preconceived judgments.",
"Propaganda Techniques ::: 7. Flag-waving.",
"Playing on strong national feeling (or with respect to a group, e.g., race, gender, political preference) to justify or promote an action or idea BIBREF15.",
"Propaganda Techniques ::: 8. Causal oversimplification.",
"Assuming one cause when there are multiple causes behind an issue. We include scapegoating as well: the transfer of the blame to one person or group of people without investigating the complexities of an issue.",
"Propaganda Techniques ::: 9. Slogans.",
"A brief and striking phrase that may include labeling and stereotyping. Slogans tend to act as emotional appeals BIBREF16.",
"Propaganda Techniques ::: 10. Appeal to authority.",
"Stating that a claim is true simply because a valid authority/expert on the issue supports it, without any other supporting evidence BIBREF17. We include the special case where the reference is not an authority/expert, although it is referred to as testimonial in the literature BIBREF14.",
"Propaganda Techniques ::: 11. Black-and-white fallacy, dictatorship.",
"Presenting two alternative options as the only possibilities, when in fact more possibilities exist BIBREF13. As an extreme case, telling the audience exactly what actions to take, eliminating any other possible choice (dictatorship).",
"Propaganda Techniques ::: 12. Thought-terminating cliché.",
"Words or phrases that discourage critical thought and meaningful discussion about a given topic. They are typically short and generic sentences that offer seemingly simple answers to complex questions or that distract attention away from other lines of thought BIBREF18.",
"Propaganda Techniques ::: 13. Whataboutism.",
"Discredit an opponent's position by charging them with hypocrisy without directly disproving their argument BIBREF19.",
"Propaganda Techniques ::: 14. Reductio ad Hitlerum.",
"Persuading an audience to disapprove an action or idea by suggesting that the idea is popular with groups hated in contempt by the target audience. It can refer to any person or concept with a negative connotation BIBREF20.",
"Propaganda Techniques ::: 15. Red herring.",
"Introducing irrelevant material to the issue being discussed, so that everyone's attention is diverted away from the points made BIBREF11. Those subjected to a red herring argument are led away from the issue that had been the focus of the discussion and urged to follow an observation or claim that may be associated with the original claim, but is not highly relevant to the issue in dispute BIBREF20.",
"Propaganda Techniques ::: 16. Bandwagon.",
"Attempting to persuade the target audience to join in and take the course of action because “everyone else is taking the same action” BIBREF15.",
"Propaganda Techniques ::: 17. Obfuscation, intentional vagueness, confusion.",
"Using deliberately unclear words, to let the audience have its own interpretation BIBREF21, BIBREF11. For instance, when an unclear phrase with multiple possible meanings is used within the argument and, therefore, it does not really support the conclusion.",
"Propaganda Techniques ::: 18. Straw man.",
"When an opponent's proposition is substituted with a similar one which is then refuted in place of the original BIBREF22."
],
"extractive_spans": [
"1. Loaded language",
"2. Name calling or labeling",
"3. Repetition",
"4. Exaggeration or minimization",
"5. Doubt",
"6. Appeal to fear/prejudice",
"7. Flag-waving",
"8. Causal oversimplification",
"9. Slogans",
"10. Appeal to authority",
"11. Black-and-white fallacy, dictatorship",
"12. Thought-terminating cliché",
"13. Whataboutism",
"14. Reductio ad Hitlerum",
"15. Red herring",
"16. Bandwagon",
"17. Obfuscation, intentional vagueness, confusion",
"18. Straw man"
],
"free_form_answer": "",
"highlighted_evidence": [
"We refer the reader to BIBREF10 for more details on the propaganda techniques; below we report the list of techniques:\n\nPropaganda Techniques ::: 1. Loaded language.\nUsing words/phrases with strong emotional implications (positive or negative) to influence an audience BIBREF11.\n\nPropaganda Techniques ::: 2. Name calling or labeling.\nLabeling the object of the propaganda as something the target audience fears, hates, finds undesirable or otherwise loves or praises BIBREF12.\n\nPropaganda Techniques ::: 3. Repetition.\nRepeating the same message over and over again, so that the audience will eventually accept it BIBREF13, BIBREF12.\n\nPropaganda Techniques ::: 4. Exaggeration or minimization.\nEither representing something in an excessive manner: making things larger, better, worse, or making something seem less important or smaller than it actually is BIBREF14, e.g., saying that an insult was just a joke.\n\nPropaganda Techniques ::: 5. Doubt.\nQuestioning the credibility of someone or something.\n\nPropaganda Techniques ::: 6. Appeal to fear/prejudice.\nSeeking to build support for an idea by instilling anxiety and/or panic in the population towards an alternative, possibly based on preconceived judgments.\n\nPropaganda Techniques ::: 7. Flag-waving.\nPlaying on strong national feeling (or with respect to a group, e.g., race, gender, political preference) to justify or promote an action or idea BIBREF15.\n\nPropaganda Techniques ::: 8. Causal oversimplification.\nAssuming one cause when there are multiple causes behind an issue. We include scapegoating as well: the transfer of the blame to one person or group of people without investigating the complexities of an issue.\n\nPropaganda Techniques ::: 9. Slogans.\nA brief and striking phrase that may include labeling and stereotyping. Slogans tend to act as emotional appeals BIBREF16.\n\nPropaganda Techniques ::: 10. Appeal to authority.\nStating that a claim is true simply because a valid authority/expert on the issue supports it, without any other supporting evidence BIBREF17. We include the special case where the reference is not an authority/expert, although it is referred to as testimonial in the literature BIBREF14.\n\nPropaganda Techniques ::: 11. Black-and-white fallacy, dictatorship.\nPresenting two alternative options as the only possibilities, when in fact more possibilities exist BIBREF13. As an extreme case, telling the audience exactly what actions to take, eliminating any other possible choice (dictatorship).\n\nPropaganda Techniques ::: 12. Thought-terminating cliché.\nWords or phrases that discourage critical thought and meaningful discussion about a given topic. They are typically short and generic sentences that offer seemingly simple answers to complex questions or that distract attention away from other lines of thought BIBREF18.\n\nPropaganda Techniques ::: 13. Whataboutism.\nDiscredit an opponent's position by charging them with hypocrisy without directly disproving their argument BIBREF19.\n\nPropaganda Techniques ::: 14. Reductio ad Hitlerum.\nPersuading an audience to disapprove an action or idea by suggesting that the idea is popular with groups hated in contempt by the target audience. It can refer to any person or concept with a negative connotation BIBREF20.\n\nPropaganda Techniques ::: 15. Red herring.\nIntroducing irrelevant material to the issue being discussed, so that everyone's attention is diverted away from the points made BIBREF11. 
Those subjected to a red herring argument are led away from the issue that had been the focus of the discussion and urged to follow an observation or claim that may be associated with the original claim, but is not highly relevant to the issue in dispute BIBREF20.\n\nPropaganda Techniques ::: 16. Bandwagon.\nAttempting to persuade the target audience to join in and take the course of action because “everyone else is taking the same action” BIBREF15.\n\nPropaganda Techniques ::: 17. Obfuscation, intentional vagueness, confusion.\nUsing deliberately unclear words, to let the audience have its own interpretation BIBREF21, BIBREF11. For instance, when an unclear phrase with multiple possible meanings is used within the argument and, therefore, it does not really support the conclusion.\n\nPropaganda Techniques ::: 18. Straw man.\nWhen an opponent's proposition is substituted with a similar one which is then refuted in place of the original BIBREF22."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"6ed433750947b42f45eaeeda571baa62ba951be5",
"cf2a3b5ff7939aaa631cf3f26a5ee21939c6a863"
],
"answer": [
{
"evidence": [
"The input for both tasks consists of news articles in free-text format, collected from 36 propagandist and 12 non-propagandist news outlets and then annotated by professional annotators. More details about the data collection and the annotation, as well as statistics about the corpus can be found in BIBREF10, where an earlier version of the corpus is described, which includes 450 news articles. We further annotated 47 additional articles for the purpose of the shared task using the same protocol and the same annotators."
],
"extractive_spans": [
" news articles in free-text format"
],
"free_form_answer": "",
"highlighted_evidence": [
"The input for both tasks consists of news articles in free-text format, collected from 36 propagandist and 12 non-propagandist news outlets and then annotated by professional annotators. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The input for both tasks consists of news articles in free-text format, collected from 36 propagandist and 12 non-propagandist news outlets and then annotated by professional annotators. More details about the data collection and the annotation, as well as statistics about the corpus can be found in BIBREF10, where an earlier version of the corpus is described, which includes 450 news articles. We further annotated 47 additional articles for the purpose of the shared task using the same protocol and the same annotators.",
"The training, the development, and the test partitions of the corpus used for the shared task consist of 350, 61, and 86 articles and of 16,965, 2,235, and 3,526 sentences, respectively. Figure FIGREF15 shows an annotated example, which contains several propaganda techniques. For example, the fragment babies on line 1 is an instance of both Name_Calling and Labeling. Note that the fragment not looking as though Trump killed his grandma on line 4 is an instance of Exaggeration_or_Minimisation and it overlaps with the fragment killed his grandma, which is an instance of Loaded_Language."
],
"extractive_spans": [
"collected from 36 propagandist and 12 non-propagandist news outlets and then annotated by professional annotators"
],
"free_form_answer": "",
"highlighted_evidence": [
"The input for both tasks consists of news articles in free-text format, collected from 36 propagandist and 12 non-propagandist news outlets and then annotated by professional annotators. More details about the data collection and the annotation, as well as statistics about the corpus can be found in BIBREF10, where an earlier version of the corpus is described, which includes 450 news articles. We further annotated 47 additional articles for the purpose of the shared task using the same protocol and the same annotators.",
"The training, the development, and the test partitions of the corpus used for the shared task consist of 350, 61, and 86 articles and of 16,965, 2,235, and 3,526 sentences, respectively."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"4873fd1c1dc00308586cf686eccb90bb27645b6f",
"4e38068bb7005fddcb03c705c4f56de3e5aa122e"
],
"answer": [
{
"evidence": [
"The baseline system for the SLC task is a very simple logistic regression classifier with default parameters, where we represent the input instances with a single feature: the length of the sentence. The performance of this baseline on the SLC task is shown in Tables TABREF33 and TABREF34.",
"The baseline for the FLC task generates spans and selects one of the 18 techniques randomly. The inefficacy of such a simple random baseline is illustrated in Tables TABREF36 and TABREF41."
],
"extractive_spans": [],
"free_form_answer": "The baseline system for the SLC task is a very simple logistic regression classifier with default parameters. The baseline for the FLC task generates spans and selects one of the 18 techniques randomly.",
"highlighted_evidence": [
"The baseline system for the SLC task is a very simple logistic regression classifier with default parameters, where we represent the input instances with a single feature: the length of the sentence. ",
"The baseline for the FLC task generates spans and selects one of the 18 techniques randomly. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The baseline system for the SLC task is a very simple logistic regression classifier with default parameters, where we represent the input instances with a single feature: the length of the sentence. The performance of this baseline on the SLC task is shown in Tables TABREF33 and TABREF34.",
"The baseline for the FLC task generates spans and selects one of the 18 techniques randomly. The inefficacy of such a simple random baseline is illustrated in Tables TABREF36 and TABREF41."
],
"extractive_spans": [
"SLC task is a very simple logistic regression classifier",
"FLC task generates spans and selects one of the 18 techniques randomly"
],
"free_form_answer": "",
"highlighted_evidence": [
"The baseline system for the SLC task is a very simple logistic regression classifier with default parameters, where we represent the input instances with a single feature: the length of the sentence.",
"The baseline for the FLC task generates spans and selects one of the 18 techniques randomly."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"What are the 18 propaganda techniques?",
"What dataset was used?",
"What was the baseline for this task?"
],
"question_id": [
"6bfba3ddca5101ed15256fca75fcdc95a53cece7",
"df5a4505edccc0ee11349ed6e7958cf6b84c9ed4",
"fd753ab5177d7bd27db0e0afc12411876ee607df"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: The beginning of an article with annotations.",
"Table 1: Statistics about the gold annotations for the training and the development sets.",
"Table 2: Overview of the approaches for the fragment-level classification task.",
"Table 3: Overview of the approaches used for the sentence-level classification task.",
"Table 4: Official test results for the SLC task.",
"Table 5: Results for the SLC task on the development set at the end of phase 1 (see Section 6).",
"Table 6: Official test results for the FLC task.",
"Table 7: Results for FLC tasl on the development set. The values refer to the end of phase 1 (see section 6)"
],
"file": [
"3-Figure1-1.png",
"4-Table1-1.png",
"5-Table2-1.png",
"5-Table3-1.png",
"6-Table4-1.png",
"6-Table5-1.png",
"7-Table6-1.png",
"7-Table7-1.png"
]
} | [
"What was the baseline for this task?"
] | [
[
"1910.09982-Baselines-0",
"1910.09982-Baselines-1"
]
] | [
"The baseline system for the SLC task is a very simple logistic regression classifier with default parameters. The baseline for the FLC task generates spans and selects one of the 18 techniques randomly."
] | 55 |
1609.00559 | Improving Correlation with Human Judgments by Integrating Semantic Similarity with Second--Order Vectors | Vector space methods that measure semantic similarity and relatedness often rely on distributional information such as co--occurrence frequencies or statistical measures of association to weight the importance of particular co--occurrences. In this paper, we extend these methods by incorporating a measure of semantic similarity based on a human curated taxonomy into a second--order vector representation. This results in a measure of semantic relatedness that combines both the contextual information available in a corpus--based vector space representation with the semantic knowledge found in a biomedical ontology. Our results show that incorporating semantic similarity into a second order co--occurrence matrices improves correlation with human judgments for both similarity and relatedness, and that our method compares favorably to various different word embedding methods that have recently been evaluated on the same reference standards we have used. | {
"paragraphs": [
[
"Measures of semantic similarity and relatedness quantify the degree to which two concepts are similar (e.g., INLINEFORM0 – INLINEFORM1 ) or related (e.g., INLINEFORM2 – INLINEFORM3 ). Semantic similarity can be viewed as a special case of semantic relatedness – to be similar is one of many ways that a pair of concepts may be related. The automated discovery of groups of semantically similar or related terms is critical to improving the retrieval BIBREF0 and clustering BIBREF1 of biomedical and clinical documents, and the development of biomedical terminologies and ontologies BIBREF2 .",
"There is a long history in using distributional methods to discover semantic similarity and relatedness (e.g., BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 ). These methods are all based on the distributional hypothesis, which holds that two terms that are distributionally similar (i.e., used in the same context) will also be semantically similar BIBREF7 , BIBREF8 . Recently word embedding techniques such as word2vec BIBREF9 have become very popular. Despite the prominent role that neural networks play in many of these approaches, at their core they remain distributional techniques that typically start with a word by word co–occurrence matrix, much like many of the more traditional approaches.",
"However, despite these successes distributional methods do not perform well when data is very sparse (which is common). One possible solution is to use second–order co–occurrence vectors BIBREF10 , BIBREF11 . In this approach the similarity between two words is not strictly based on their co–occurrence frequencies, but rather on the frequencies of the other words which occur with both of them (i.e., second order co–occurrences). This approach has been shown to be successful in quantifying semantic relatedness BIBREF12 , BIBREF13 . However, while more robust in the face of sparsity, second–order methods can result in significant amounts of noise, where contextual information that is overly general is included and does not contribute to quantifying the semantic relatedness between the two concepts.",
"Our goal then is to discover methods that automatically reduce the amount of noise in a second–order co–occurrence vector. We achieve this by incorporating pairwise semantic similarity scores derived from a taxonomy into our second–order vectors, and then using these scores to select only the most semantically similar co–occurrences (thereby reducing noise).",
"We evaluate our method on two datasets that have been annotated in multiple ways. One has been annotated for both similarity and relatedness, and the other has been annotated for relatedness by two different types of experts (medical doctors and medical coders). Our results show that integrating second order co–occurrences with measures of semantic similarity increases correlation with our human reference standards. We also compare our result to a number of other studies which have applied various word embedding methods to the same reference standards we have used. We find that our method often performs at a comparable or higher level than these approaches. These results suggest that our methods of integrating semantic similarity and relatedness values have the potential to improve performance of purely distributional methods."
],
[
"This section describes the similarity and relatedness measures we integrate in our second–order co–occurrence vectors. We use two taxonomies in this study, SNOMED–CT and MeSH. SNOMED–CT (Systematized Nomenclature of Medicine Clinical Terms) is a comprehensive clinical terminology created for the electronic representation of clinical health information. MeSH (Medical Subject Headings) is a taxonomy of biomedical terms developed for indexing biomedical journal articles.",
"We obtain SNOMED–CT and MeSH via the Unified Medical Language System (UMLS) Metathesaurus (version 2016AA). The Metathesaurus contains approximately 2 million biomedical and clinical concepts from over 150 different terminologies that have been semi–automatically integrated into a single source. Concepts in the Metathesaurus are connected largely by two types of hierarchical relations: INLINEFORM0 / INLINEFORM1 (PAR/CHD) and INLINEFORM2 / INLINEFORM3 (RB/RN)."
],
[
"Measures of semantic similarity can be classified into three broad categories : path–based, feature–based and information content (IC). Path–based similarity measures use the structure of a taxonomy to measure similarity – concepts positioned close to each other are more similar than those further apart. Feature–based methods rely on set theoretic measures of overlap between features (union and intersection). The information content measures quantify the amount of information that a concept provides – more specific concepts have a higher amount of information content.",
"RadaMBB89 introduce the Conceptual Distance measure. This measure is simply the length of the shortest path between two concepts ( INLINEFORM0 and INLINEFORM1 ) in the MeSH hierarchy. Paths are based on broader than (RB) and narrower than (RN) relations. CaviedesC04 extends this measure to use parent (PAR) and child (CHD) relations. Our INLINEFORM2 measure is simply the reciprocal of this shortest path value (Equation EQREF3 ), so that larger values (approaching 1) indicate a high degree of similarity. DISPLAYFORM0 ",
"While the simplicity of INLINEFORM0 is appealing, it can be misleading when concepts are at different levels of specificity. Two very general concepts may have the same path length as two very specific concepts. WuP94 introduce a correction to INLINEFORM1 that incorporates the depth of the concepts, and the depth of their Least Common Subsumer (LCS). This is the most specific ancestor two concepts share. In this measure, similarity is twice the depth of the two concept's LCS divided by the product of the depths of the individual concepts (Equation EQREF4 ). Note that if there are multiple LCSs for a pair of concepts, the deepest of them is used in this measure. DISPLAYFORM0 ",
"ZhongZLY02 take a very similar approach and again scale the depth of the LCS by the sum of the depths of the two concepts (Equation EQREF5 ), where INLINEFORM0 . The value of INLINEFORM1 was set to 2 based on their recommendations. DISPLAYFORM0 ",
"PekarS02 offer another variation on INLINEFORM0 , where the shortest path of the two concepts to the LCS is used, in addition to the shortest bath between the LCS and the root of the taxonomy (Equation EQREF6 ). DISPLAYFORM0 ",
"Feature–based methods represent each concept as a set of features and then measure the overlap or sharing of features to measure similarity. In particular, each concept is represented as the set of their ancestors, and similarity is a ratio of the intersection and union of these features.",
"MaedcheS01 quantify the similarity between two concepts as the ratio of the intersection over their union as shown in Equation EQREF8 . DISPLAYFORM0 ",
"BatetSV11 extend this by excluding any shared features (in the numerator) as shown in Equation EQREF9 . DISPLAYFORM0 ",
"Information content is formally defined as the negative log of the probability of a concept. The effect of this is to assign rare (low probability) concepts a high measure of information content, since the underlying assumption is that more specific concepts are less frequently used than more common ones.",
"Resnik95 modified this notion of information content in order to use it as a similarity measure. He defines the similarity of two concepts to be the information content of their LCS (Equation EQREF11 ). DISPLAYFORM0 ",
"JiangC97, Lin98, and PirroE10 extend INLINEFORM0 by incorporating the information content of the individual concepts in various different ways. Lin98 defines the similarity between two concepts as the ratio of information content of the LCS with the sum of the individual concept's information content (Equation EQREF12 ). Note that INLINEFORM1 has the same form as INLINEFORM2 and INLINEFORM3 , and is in effect using information content as a measure of specificity (rather than depth). If there is more than one possible LCS, the LCS with the greatest IC is chosen. DISPLAYFORM0 ",
"JiangC97 define the distance between two concepts to be the sum of the information content of the two concepts minus twice the information content of the concepts' LCS. We modify this from a distance to a similarity measure by taking the reciprocal of the distance (Equation EQREF13 ). Note that the denominator of INLINEFORM0 is very similar to the numerator of INLINEFORM1 . DISPLAYFORM0 ",
"PirroE10 define the similarity between two concepts as the information content of the two concept's LCS divided by the sum of their individual information content values minus the information content of their LCS (Equation EQREF14 ). Note that INLINEFORM0 can be viewed as a set–theoretic version of INLINEFORM1 . DISPLAYFORM0 "
],
[
"The information content of a concept may be derived from a corpus (corpus–based) or directly from a taxonomy (intrinsic–based). In this work we focus on corpus–based techniques.",
"For corpus–based information content, we estimate the probability of a concept INLINEFORM0 by taking the sum of the probability of the concept INLINEFORM1 and the probability its descendants INLINEFORM2 (Equation EQREF16 ). DISPLAYFORM0 ",
"The initial probabilities of a concept ( INLINEFORM0 ) and its descendants ( INLINEFORM1 ) are obtained by dividing the number of times each concept and descendant occurs in the corpus, and dividing that by the total numbers of concepts ( INLINEFORM2 ).",
"Ideally the corpus from which we are estimating the probabilities of concepts will be sense–tagged. However, sense–tagging is a challenging problem in its own right, and it is not always possible to carry out reliably on larger amounts of text. In fact in this paper we did not use any sense–tagging of the corpus we derived information content from.",
"Instead, we estimated the probability of a concept by using the UMLSonMedline dataset. This was created by the National Library of Medicine and consists of concepts from the 2009AB UMLS and the counts of the number of times they occurred in a snapshot of Medline taken on 12 January, 2009. These counts were obtained by using the Essie Search Engine BIBREF14 which queried Medline with normalized strings from the 2009AB MRCONSO table in the UMLS. The frequency of a CUI was obtained by aggregating the frequency counts of the terms associated with the CUI to provide a rough estimate of its frequency. The information content measures then use this information to calculate the probability of a concept.",
"Another alternative is the use of Intrinsic Information Content. It assess the informativeness of concept based on its placement within a taxonomy by considering the number of incoming (ancestors) relative to outgoing (descendant) links BIBREF15 (Equation EQREF17 ). DISPLAYFORM0 ",
"where INLINEFORM0 are the number of descendants of concept INLINEFORM1 that are leaf nodes, INLINEFORM2 are the number of concept INLINEFORM3 's ancestors and INLINEFORM4 are the total number of leaf nodes in the taxonomy."
],
[
"Lesk86 observed that concepts that are related should share more words in their respective definitions than concepts that are less connected. He was able to perform word sense disambiguation by identifying the senses of words in a sentence with the largest number of overlaps between their definitions. An overlap is the longest sequence of one or more consecutive words that occur in both definitions. BanerjeeP03 extended this idea to WordNet, but observed that WordNet glosses are often very short, and did not contain enough information to distinguish between multiple concepts. Therefore, they created a super–gloss for each concept by adding the glosses of related concepts to the gloss of the concept itself (and then finding overlaps).",
"PatwardhanP06 adapted this measure to second–order co–occurrence vectors. In this approach, a vector is created for each word in a concept's definition that shows which words co–occur with it in a corpus. These word vectors are averaged to create a single co-occurrence vector for the concept. The similarity between the concepts is calculated by taking the cosine between the concepts second–order vectors. LiuMPMP12 modified and extended this measure to be used to quantify the relatedness between biomedical and clinical terms in the UMLS. The work in this paper can be seen as a further extension of PatwardhanP06 and LiuMPMP12."
],
[
"In this section, we describe our second–order similarity vector measure. This incorporates both contextual information using the term pair's definition and their pairwise semantic similarity scores derived from a taxonomy. There are two stages to our approach. First, a co–occurrence matrix must be constructed. Second, this matrix is used to construct a second–order co–occurrence vector for each concept in a pair of concepts to be measured for relatedness."
],
[
"We build an INLINEFORM0 similarity matrix using an external corpus where the rows and columns represent words within the corpus and the element contains the similarity score between the row word and column word using the similarity measures discussed above. If a word maps to more than one possible sense, we use the sense that returns the highest similarity score.",
"For this paper our external corpus was the NLM 2015 Medline baseline. Medline is a bibliographic database containing over 23 million citations to journal articles in the biomedical domain and is maintained by National Library of Medicine. The 2015 Medline Baseline encompasses approximately 5,600 journals starting from 1948 and contains 23,343,329 citations, of which 2,579,239 contain abstracts. In this work, we use Medline titles and abstracts from 1975 to present day. Prior to 1975, only 2% of the citations contained an abstract. We then calculate the similarity for each bigram in this dataset and include those that have a similarity score greater than a specified threshold on these experiments."
],
[
"We obtain definitions for each of the two terms we wish to measure. Due to the sparsity and inconsistencies of the definitions in the UMLS, we not only use the definition of the term (CUI) but also include the definition of its related concepts. This follows the method proposed by PatwardhanP06 for general English and WordNet, and which was adapted for the UMLS and the medical domain by LiuMPMP12. In particular we add the definitions of any concepts connected via a parent (PAR), child (CHD), RB (broader than), RN (narrower than) or TERM (terms associated with CUI) relation. All of the definitions for a term are combined into a single super–gloss. At the end of this process we should have two super–glosses, one for each term to be measured for relatedness.",
"Next, we process each super–gloss as follows:",
"We extract a first–order co–occurrence vector for each term in the super–gloss from the co–occurrence matrix created previously.",
"We take the average of the first order co–occurrence vectors associated with the terms in a super–gloss and use that to represent the meaning of the term. This is a second–order co–occurrence vector.",
"After a second–order co–occurrence vector has been constructed for each term, then we calculate the cosine between these two vectors to measure the relatedness of the terms."
],
[
"We use two reference standards to evaluate the semantic similarity and relatedness measures . UMNSRS was annotated for both similarity and relatedness by medical residents. MiniMayoSRS was annotated for relatedness by medical doctors (MD) and medical coders (coder). In this section, we describe these data sets and describe a few of their differences.",
"MiniMayoSRS: The MayoSRS, developed by PakhomovPMMRC10, consists of 101 clinical term pairs whose relatedness was determined by nine medical coders and three physicians from the Mayo Clinic. The relatedness of each term pair was assessed based on a four point scale: (4.0) practically synonymous, (3.0) related, (2.0) marginally related and (1.0) unrelated. MiniMayoSRS is a subset of the MayoSRS and consists of 30 term pairs on which a higher inter–annotator agreement was achieved. The average correlation between physicians is 0.68. The average correlation between medical coders is 0.78. We evaluate our method on the mean of the physician scores, and the mean of the coders scores in this subset in the same manner as reported by PedersenPPC07.",
"UMNSRS: The University of Minnesota Semantic Relatedness Set (UMNSRS) was developed by PakhomovMALPM10, and consists of 725 clinical term pairs whose semantic similarity and relatedness was determined independently by four medical residents from the University of Minnesota Medical School. The similarity and relatedness of each term pair was annotated based on a continuous scale by having the resident touch a bar on a touch sensitive computer screen to indicate the degree of similarity or relatedness. The Intraclass Correlation Coefficient (ICC) for the reference standard tagged for similarity was 0.47, and 0.50 for relatedness. Therefore, as suggested by Pakhomov and colleagues,we use a subset of the ratings consisting of 401 pairs for the similarity set and 430 pairs for the relatedness set which each have an ICC of 0.73."
],
[
"We conducted our experiments using the freely available open source software package UMLS::Similarity BIBREF16 version 1.47. This package takes as input two terms (or UMLS concepts) and returns their similarity or relatedness using the measures discussed in Section SECREF2 .",
"Correlation between the similarity measures and human judgments were estimated using Spearman's Rank Correlation ( INLINEFORM0 ). Spearman's measures the statistical dependence between two variables to assess how well the relationship between the rankings of the variables can be described using a monotonic function. We used Fisher's r-to-z transformation BIBREF17 to calculate the significance between the correlation results."
],
[
"Table TABREF26 shows the Spearman's Rank Correlation between the human scores from the four reference standards and the scores from the various measures of similarity introduced in Section SECREF2 . Each class of measure is followed by the scores obtained when integrating our second order vector approach with these measures of semantic similarity."
],
[
"The results for UMNSRS tagged for similarity ( INLINEFORM0 ) and MiniMayoSRS tagged by coders show that all of the second-order similarity vector measures ( INLINEFORM1 ) except for INLINEFORM2 - INLINEFORM3 obtain a higher correlation than the original measures. We found that INLINEFORM4 - INLINEFORM5 and INLINEFORM6 - INLINEFORM7 obtain the highest correlations of all these results with human judgments.",
"For the UMNSRS dataset tagged for relatedness and MiniMayoSRS tagged by physicians (MD), the original INLINEFORM0 measure obtains a higher correlation than our measure ( INLINEFORM1 ) although the difference is not statistically significant ( INLINEFORM2 ).",
"In order to analyze and better understand these results, we filtered the bigram pairs used to create the initial similarity matrix based on the strength of their similarity using the INLINEFORM0 and the INLINEFORM1 measures. Note that the INLINEFORM2 measure holds to a 0 to 1 scale, while INLINEFORM3 ranges from 0 to an unspecified upper bound that is dependent on the size of the corpus from which information content is estimated. As such we use a different range of threshold values for each measure. We discuss the results of this filtering below."
],
[
"Table TABREF29 shows the results of applying the threshold parameter on each of the reference standards using the INLINEFORM0 measure. For example, a threshold of 0 indicates that all of the bigrams were included in the similarity matrix; and a threshold of 1 indicates that only the bigram pairs with a similarity score greater than one were included.",
"These results show that using a threshold cutoff of 2 obtains the highest correlation for the UMNSRS dataset, and that a threshold cutoff of 4 obtains the highest correlation for the MiniMayoSRS dataset. All of the results show an increase in correlation with human judgments when incorporating a threshold cutoff over all of the original measures. The increase in the correlation for the UMNSRS tagged for similarity is statistically significant ( INLINEFORM0 ), however this is not the case for the UMNSRS tagged for relatedness nor for the MiniMayoSRS data.",
"Similarly, Table TABREF30 shows the results of applying the threshold parameter (T) on each of the reference standards using the INLINEFORM0 measure. Although, unlike INLINEFORM1 whose scores are greater than or equal to 0 without an upper limit, the INLINEFORM2 measure returns scores between 0 and 1 (inclusive). Therefore, here a threshold of 0 indicates that all of the bigrams were included in the similarity matrix; and a threshold of INLINEFORM3 indicates that only the bigram pairs with a similarity score greater than INLINEFORM4 were included. The results show an increase in accuracy for all of the datasets except for the MiniMayoSRS tagged for physicians. The increase in the results for the UMNSRS tagged for similarity and the MayoSRS is statistically significant ( INLINEFORM5 ). This is not the case for the UMNSRS tagged for relatedness nor the MiniMayoSRS.",
"Overall, these results indicate that including only those bigrams that have a sufficiently high similarity score increases the correlation results with human judgments, but what quantifies as sufficiently high varies depending on the dataset and measure."
],
[
"Recently, word embeddings BIBREF9 have become a popular method for measuring semantic relatedness in the biomedical domain. This is a neural network based approach that learns a representation of a word by word co–occurrence matrix. The basic idea is that the neural network learns a series of weights (the hidden layer within the neural network) that either maximizes the probability of a word given its context, referred to as the continuous bag of words (CBOW) approach, or that maximizes the probability of the context given a word, referred to as the Skip–gram approach. These approaches have been used in numerous recent papers.",
"muneeb2015evalutating trained both the Skip–gram and CBOW models over the PubMed Central Open Access (PMC) corpus of approximately 1.25 million articles. They evaluated the models on a subset of the UMNSRS data, removing word pairs that did not occur in their training corpus more than ten times. chiu2016how evaluated both the the Skip–gram and CBOW models over the PMC corpus and PubMed. They also evaluated the models on a subset of the UMNSRS ignoring those words that did not appear in their training corpus. Pakhomov2016corpus trained CBOW model over three different types of corpora: clinical (clinical notes from the Fairview Health System), biomedical (PMC corpus), and general English (Wikipedia). They evaluated their method using a subset of the UMNSRS restricting to single word term pairs and removing those not found within their training corpus. sajad2015domain trained the Skip–gram model over CUIs identified by MetaMap on the OHSUMED corpus, a collection of 348,566 biomedical research articles. They evaluated the method on the complete UMNSRS, MiniMayoSRS and the MayoSRS datasets; any subset information about the dataset was not explicitly stated therefore we believe a direct comparison may be possible.",
"In addition, a previous work very closely related to ours is a retrofitting vector method proposed by YuCBJW16 that incorporates ontological information into a vector representation by including semantically related words. In their measure, they first map a biomedical term to MeSH terms, and second build a word vector based on the documents assigned to the respective MeSH term. They then retrofit the vector by including semantically related words found in the Unified Medical Language System. They evaluate their method on the MiniMayoSRS dataset.",
"Table TABREF31 shows a comparison to the top correlation scores reported by each of these works on the respective datasets (or subsets) they evaluated their methods on. N refers to the number of term pairs in the dataset the authors report they evaluated their method. The table also includes our top scoring results: the integrated vector-res and vector-faith. The results show that integrating semantic similarity measures into second–order co–occurrence vectors obtains a higher or on–par correlation with human judgments as the previous works reported results with the exception of the UMNSRS rel dataset. The results reported by Pakhomov2016corpus and chiu2016how obtain a higher correlation although the results can not be directly compared because both works used different subsets of the term pairs from the UMNSRS dataset."
],
[
"We have presented a method for quantifying the similarity and relatedness between two terms that integrates pair–wise similarity scores into second–order vectors. The goal of this approach is two–fold. First, we restrict the context used by the vector measure to words that exist in the biomedical domain, and second, we apply larger weights to those word pairs that are more similar to each other. Our hypothesis was that this combination would reduce the amount of noise in the vectors and therefore increase their correlation with human judgments. We evaluated our method on datasets that have been manually annotated for relatedness and similarity and found evidence to support this hypothesis. In particular we discovered that guiding the creation of a second–order context vector by selecting term pairs from biomedical text based on their semantic similarity led to improved levels of correlation with human judgment.",
"We also explored using a threshold cutoff to include only those term pairs that obtained a sufficiently large level of similarity. We found that eliminating less similar pairs improved the overall results (to a point). In the future, we plan to explore metrics to automatically determine the threshold cutoff appropriate for a given dataset and measure. We also plan to explore additional features that can be integrated with a second–order vector measure that will reduce the noise but still provide sufficient information to quantify relatedness. We are particularly interested in approaches that learn word, phrase, and sentence embeddings from structured corpora such as literature BIBREF23 and dictionary entries BIBREF24 . Such embeddings could be integrated into a second–order vector or be used on their own.",
"Finally, we compared our proposed method to other distributional approaches, focusing on those that used word embeddings. Our results showed that integrating semantic similarity measures into second–order co–occurrence vectors obtains the same or higher correlation with human judgments as do various different word embedding approaches. However, a direct comparison was not possible due to variations in the subsets of the UMNSRS evaluation dataset used. In the future, we would not only like to conduct a direct comparison but also explore integrating semantic similarity into various kinds of word embeddings by training on pair–wise values of semantic similarity as well as co–occurrence statistics."
]
],
"section_name": [
"Introduction",
"Similarity and Relatedness Measures",
"Similarity Measures",
"Information Content",
"Relatedness Measures",
"Method",
"Co–occurrence Matrix Construction",
"Measure Term Pairs for Relatedness",
"Data",
"Experimental Framework",
"Results and Discussion",
"Results Comparison",
"Thresholding Experiments",
"Comparison with Previous Work",
"Conclusion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"afe1a87fa1d1aa599e0fb80a7c153a8ac2a27fe8",
"b244bb5ed6c040738a07bcd315ed35fc13201131"
],
"answer": [
{
"evidence": [
"However, despite these successes distributional methods do not perform well when data is very sparse (which is common). One possible solution is to use second–order co–occurrence vectors BIBREF10 , BIBREF11 . In this approach the similarity between two words is not strictly based on their co–occurrence frequencies, but rather on the frequencies of the other words which occur with both of them (i.e., second order co–occurrences). This approach has been shown to be successful in quantifying semantic relatedness BIBREF12 , BIBREF13 . However, while more robust in the face of sparsity, second–order methods can result in significant amounts of noise, where contextual information that is overly general is included and does not contribute to quantifying the semantic relatedness between the two concepts."
],
"extractive_spans": [
"frequencies of the other words which occur with both of them (i.e., second order co–occurrences)"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this approach the similarity between two words is not strictly based on their co–occurrence frequencies, but rather on the frequencies of the other words which occur with both of them (i.e., second order co–occurrences). This approach has been shown to be successful in quantifying semantic relatedness BIBREF12 , BIBREF13 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"However, despite these successes distributional methods do not perform well when data is very sparse (which is common). One possible solution is to use second–order co–occurrence vectors BIBREF10 , BIBREF11 . In this approach the similarity between two words is not strictly based on their co–occurrence frequencies, but rather on the frequencies of the other words which occur with both of them (i.e., second order co–occurrences). This approach has been shown to be successful in quantifying semantic relatedness BIBREF12 , BIBREF13 . However, while more robust in the face of sparsity, second–order methods can result in significant amounts of noise, where contextual information that is overly general is included and does not contribute to quantifying the semantic relatedness between the two concepts."
],
"extractive_spans": [],
"free_form_answer": "The matrix containing co-occurrences of the words which occur with the both words of every given pair of words.",
"highlighted_evidence": [
"In this approach the similarity between two words is not strictly based on their co–occurrence frequencies, but rather on the frequencies of the other words which occur with both of them (i.e., second order co–occurrences)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"48cf8cf8129db5781878b143358ad473a498fbb0",
"c107047e2a6f47cd4096df8c629aa52b788179b0"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"MiniMayoSRS: The MayoSRS, developed by PakhomovPMMRC10, consists of 101 clinical term pairs whose relatedness was determined by nine medical coders and three physicians from the Mayo Clinic. The relatedness of each term pair was assessed based on a four point scale: (4.0) practically synonymous, (3.0) related, (2.0) marginally related and (1.0) unrelated. MiniMayoSRS is a subset of the MayoSRS and consists of 30 term pairs on which a higher inter–annotator agreement was achieved. The average correlation between physicians is 0.68. The average correlation between medical coders is 0.78. We evaluate our method on the mean of the physician scores, and the mean of the coders scores in this subset in the same manner as reported by PedersenPPC07.",
"UMNSRS: The University of Minnesota Semantic Relatedness Set (UMNSRS) was developed by PakhomovMALPM10, and consists of 725 clinical term pairs whose semantic similarity and relatedness was determined independently by four medical residents from the University of Minnesota Medical School. The similarity and relatedness of each term pair was annotated based on a continuous scale by having the resident touch a bar on a touch sensitive computer screen to indicate the degree of similarity or relatedness. The Intraclass Correlation Coefficient (ICC) for the reference standard tagged for similarity was 0.47, and 0.50 for relatedness. Therefore, as suggested by Pakhomov and colleagues,we use a subset of the ratings consisting of 401 pairs for the similarity set and 430 pairs for the relatedness set which each have an ICC of 0.73."
],
"extractive_spans": [],
"free_form_answer": "16",
"highlighted_evidence": [
"MiniMayoSRS: The MayoSRS, developed by PakhomovPMMRC10, consists of 101 clinical term pairs whose relatedness was determined by nine medical coders and three physicians from the Mayo Clinic. ",
"UMNSRS: The University of Minnesota Semantic Relatedness Set (UMNSRS) was developed by PakhomovMALPM10, and consists of 725 clinical term pairs whose semantic similarity and relatedness was determined independently by four medical residents from the University of Minnesota Medical School. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"b117eed796cee45e35b7f79008e8720df36a3f02",
"c9efcb988a30314b1f52fd6ca3e3a74b8f85e867"
],
"answer": [
{
"evidence": [
"muneeb2015evalutating trained both the Skip–gram and CBOW models over the PubMed Central Open Access (PMC) corpus of approximately 1.25 million articles. They evaluated the models on a subset of the UMNSRS data, removing word pairs that did not occur in their training corpus more than ten times. chiu2016how evaluated both the the Skip–gram and CBOW models over the PMC corpus and PubMed. They also evaluated the models on a subset of the UMNSRS ignoring those words that did not appear in their training corpus. Pakhomov2016corpus trained CBOW model over three different types of corpora: clinical (clinical notes from the Fairview Health System), biomedical (PMC corpus), and general English (Wikipedia). They evaluated their method using a subset of the UMNSRS restricting to single word term pairs and removing those not found within their training corpus. sajad2015domain trained the Skip–gram model over CUIs identified by MetaMap on the OHSUMED corpus, a collection of 348,566 biomedical research articles. They evaluated the method on the complete UMNSRS, MiniMayoSRS and the MayoSRS datasets; any subset information about the dataset was not explicitly stated therefore we believe a direct comparison may be possible.",
"FLOAT SELECTED: Table 4: Comparison with Previous Work"
],
"extractive_spans": [
"Skip–gram",
"CBOW"
],
"free_form_answer": "",
"highlighted_evidence": [
"chiu2016how evaluated both the the Skip–gram and CBOW models over the PMC corpus and PubMed.",
"FLOAT SELECTED: Table 4: Comparison with Previous Work"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Table TABREF31 shows a comparison to the top correlation scores reported by each of these works on the respective datasets (or subsets) they evaluated their methods on. N refers to the number of term pairs in the dataset the authors report they evaluated their method. The table also includes our top scoring results: the integrated vector-res and vector-faith. The results show that integrating semantic similarity measures into second–order co–occurrence vectors obtains a higher or on–par correlation with human judgments as the previous works reported results with the exception of the UMNSRS rel dataset. The results reported by Pakhomov2016corpus and chiu2016how obtain a higher correlation although the results can not be directly compared because both works used different subsets of the term pairs from the UMNSRS dataset.",
"muneeb2015evalutating trained both the Skip–gram and CBOW models over the PubMed Central Open Access (PMC) corpus of approximately 1.25 million articles. They evaluated the models on a subset of the UMNSRS data, removing word pairs that did not occur in their training corpus more than ten times. chiu2016how evaluated both the the Skip–gram and CBOW models over the PMC corpus and PubMed. They also evaluated the models on a subset of the UMNSRS ignoring those words that did not appear in their training corpus. Pakhomov2016corpus trained CBOW model over three different types of corpora: clinical (clinical notes from the Fairview Health System), biomedical (PMC corpus), and general English (Wikipedia). They evaluated their method using a subset of the UMNSRS restricting to single word term pairs and removing those not found within their training corpus. sajad2015domain trained the Skip–gram model over CUIs identified by MetaMap on the OHSUMED corpus, a collection of 348,566 biomedical research articles. They evaluated the method on the complete UMNSRS, MiniMayoSRS and the MayoSRS datasets; any subset information about the dataset was not explicitly stated therefore we believe a direct comparison may be possible."
],
"extractive_spans": [
"integrated vector-res",
"vector-faith",
"Skip–gram",
"CBOW"
],
"free_form_answer": "",
"highlighted_evidence": [
"Table TABREF31 shows a comparison to the top correlation scores reported by each of these works on the respective datasets (or subsets) they evaluated their methods on. N refers to the number of term pairs in the dataset the authors report they evaluated their method. The table also includes our top scoring results: the integrated vector-res and vector-faith.",
"chiu2016how evaluated both the the Skip–gram and CBOW models over the PMC corpus and PubMed. They also evaluated the models on a subset of the UMNSRS ignoring those words that did not appear in their training corpus. Pakhomov2016corpus trained CBOW model over three different types of corpora: clinical (clinical notes from the Fairview Health System), biomedical (PMC corpus), and general English (Wikipedia)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"What is a second order co-ocurrence matrix?",
"How many humans participated?",
"What embedding techniques are explored in the paper?"
],
"question_id": [
"88e62ea7a4d1d2921624b8480b5c6b50cfa5ad42",
"4dcf67b5e7bd1422e7e70c657f6eacccd8de06d3",
"8b3d3953454c88bde88181897a7a2c0c8dd87e23"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Spearman’s Correlation Results",
"Table 2: Threshold Correlation with vector-res",
"Table 3: Threshold Correlation with vector-faith",
"Table 4: Comparison with Previous Work"
],
"file": [
"6-Table1-1.png",
"7-Table2-1.png",
"7-Table3-1.png",
"8-Table4-1.png"
]
} | [
"What is a second order co-ocurrence matrix?",
"How many humans participated?"
] | [
[
"1609.00559-Introduction-2"
],
[
"1609.00559-Data-2",
"1609.00559-Data-1"
]
] | [
"The matrix containing co-occurrences of the words which occur with the both words of every given pair of words.",
"16"
] | 56 |
1604.00727 | Character-Level Question Answering with Attention | We show that a character-level encoder-decoder framework can be successfully applied to question answering with a structured knowledge base. We use our model for single-relation question answering and demonstrate the effectiveness of our approach on the SimpleQuestions dataset (Bordes et al., 2015), where we improve state-of-the-art accuracy from 63.9% to 70.9%, without use of ensembles. Importantly, our character-level model has 16x fewer parameters than an equivalent word-level model, can be learned with significantly less data compared to previous work, which relies on data augmentation, and is robust to new entities in testing. | {
"paragraphs": [
[
"Single-relation factoid questions are the most common form of questions found in search query logs and community question answering websites BIBREF1 , BIBREF2 . A knowledge-base (KB) such as Freebase, DBpedia, or Wikidata can help answer such questions after users reformulate them as queries. For instance, the question Where was Barack Obama born? can be answered by issuing the following KB query: $\n\\lambda (x).place\\_of\\_birth(Barack\\_Obama, x)\n$ ",
" However, automatically mapping a natural language question such as Where was Barack Obama born? to its corresponding KB query remains a challenging task.",
"There are three key issues that make learning this mapping non-trivial. First, there are many paraphrases of the same question. Second, many of the KB entries are unseen during training time; however, we still need to correctly predict them at test time. Third, a KB such as Freebase typically contains millions of entities and thousands of predicates, making it difficult for a system to predict these entities at scale BIBREF1 , BIBREF3 , BIBREF0 . In this paper, we address all three of these issues with a character-level encoder-decoder framework that significantly improves performance over state-of-the-art word-level neural models, while also providing a much more compact model that can be learned from less data.",
"First, we use a long short-term memory (LSTM) BIBREF4 encoder to embed the question. Second, to make our model robust to unseen KB entries, we extract embeddings for questions, predicates and entities purely from their character-level representations. Character-level modeling has been previously shown to generalize well to new words not seen during training BIBREF5 , BIBREF6 , which makes it ideal for this task. Third, to scale our model to handle the millions of entities and thousands of predicates in the KB, instead of using a large output layer in the decoder to directly predict the entity and predicate, we use a general interaction function between the question embeddings and KB embeddings that measures their semantic relevance to determine the output. The combined use of character-level modeling and a semantic relevance function allows us to successfully produce likelihood scores for the KB entries that are not present in our vocabulary, a challenging task for standard encoder-decoder frameworks.",
"Our novel, character-level encoder-decoder model is compact, requires significantly less data to train than previous work, and is able to generalize well to unseen entities in test time. In particular, without use of ensembles, we achieve 70.9% accuracy in the Freebase2M setting and 70.3% accuracy in the Freebase5M setting on the SimpleQuestions dataset, outperforming the previous state-of-arts of 62.7% and 63.9% BIBREF0 by 8.2% and 6.4% respectively. Moreover, we only use the training questions provided in SimpleQuestions to train our model, which cover about 24% of words in entity aliases on the test set. This demonstrates the robustness of the character-level model to unseen entities. In contrast, data augmentation is usually necessary to provide more coverage for unseen entities and predicates, as done in previous work BIBREF0 , BIBREF1 ."
],
[
"Our work is motivated by three major threads of research in machine learning and natural language processing: semantic-parsing for open-domain question answering, character-level language modeling, and encoder-decoder methods.",
"Semantic parsing for open-domain question answering, which translates a question into a structured KB query, is a key component in question answering with a KB. While early approaches relied on building high-quality lexicons for domain-specific databases such as GeoQuery BIBREF7 , recent work has focused on building semantic parsing frameworks for general knowledge bases such as Freebase BIBREF1 , BIBREF8 , BIBREF0 , BIBREF9 , BIBREF2 .",
"Semantic parsing frameworks for large-scale knowledge bases have to be able to successfully generate queries for the millions of entities and thousands of predicates in the KB, many of which are unseen during training. To address this issue, recent work relies on producing embeddings for predicates and entities in a KB based on their textual descriptions BIBREF8 , BIBREF0 , BIBREF1 , BIBREF10 . A general interaction function can then be used to measure the semantic relevance of these embedded KB entries to the question and determine the most likely KB query.",
"Most of these approaches use word-level embeddings to encode entities and predicates, and therefore might suffer from the out-of-vocabulary (OOV) problem when they encounter unseen words during test time. Consequently, they often rely on significant data augmentation from sources such as Paralex BIBREF2 , which contains 18 million question-paraphrase pairs scraped from WikiAnswers, to have sufficient examples for each word they encounter BIBREF11 , BIBREF1 , BIBREF0 .",
"As opposed to word-level modeling, character-level modeling can be used to handle the OOV issue. While character-level modeling has not been applied to factoid question answering before, it has been successfully applied to information retrieval, machine translation, sentiment analysis, classification, and named entity recognition BIBREF12 , BIBREF13 , BIBREF6 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 . Moreover, gflstm demonstrate that gated-feedback LSTMs on top of character-level embeddings can capture long-term dependencies in language modeling.",
"Lastly, encoder-decoder networks have been applied to many structured machine learning tasks. First introduced in sutskever2014sequence, in an encoder-decoder network, a source sequence is first encoded with a recurrent neural network (RNN) into a fixed-length vector which intuitively captures its meaning, and then decoded into a desired target sequence. This approach and related memory-based or attention-based approaches have been successfully applied in diverse domains such as speech recognition, machine translation, image captioning, parsing, executing programs, and conversational dialogues BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 .",
"Unlike previous work, we formulate question answering as a problem of decoding the KB query given the question and KB entries which are encoded in embedding spaces. We therefore integrate the learning of question and KB embeddings in a unified encoder-decoder framework, where the whole system is optimized end-to-end."
],
[
"Since we focus on single-relation question answering in this work, our model decodes every question into a KB query that consists of exactly two elements–the topic entity, and the predicate. More formally, our model is a function $f(q, \\lbrace e\\rbrace , \\lbrace p\\rbrace )$ that takes as input a question $q$ , a set of candidate entities $\\lbrace e\\rbrace =e_1, ...,e_n$ , a set of candidate predicates $\\lbrace p\\rbrace =p_1,..., p_m$ , and produces a likelihood score $p(e_i, p_j|q)$ of generating entity $e_i$ and predicate $p_j$ given question $q$ for all $i\\in {1...n}, j\\in {1...m}$ .",
"As illustrated in Figure 1, our model consists of three components:",
"The details of each component are described in the following sections."
],
[
"To encode the question, we take two steps:",
"We first extract one-hot encoding vectors for characters in the question, $x_1,...,x_n$ , where $x_i$ represents the one-hot encoding vector for the $i^{th}$ character in the question. We keep the space, punctuation and original cases without tokenization. ",
"We feed $x_1,...,x_n$ from left to right into a two-layer gated-feedback LSTM, and keep the outputs at all time steps as the embeddings for the question, i.e., these are the vectors $s_1,...,s_n$ ."
],
[
"To encode an entity or predicate in the KB, we take two steps:",
"We first extract one-hot encoding vectors for characters in its English alias, $x_1,...,x_n$ , where $x_i$ represents the one-hot encoding vector for the $i^{th}$ character in the alias.",
"We then feed $x_1,...,x_n$ into a temporal CNN with two alternating convolutional and fully-connected layers, followed by one fully-connected layer: $\nf(x_1,...,x_n) = tanh(W_{3} \\times max(tanh (W_{2} \\times \\\\\nconv(tanh({W_{1} \\times conv(x_1,...,x_n)})))))\n$ ",
" where $f(x_{1...n}) $ is an embedding vector of size $N$ , $W_{3}$ has size $R^{N \\times h}$ , $conv$ represents a temporal convolutional neural network, and $max$ represents a max pooling layer in the temporal direction.",
"We use a CNN as opposed to an LSTM to embed KB entries primarily for computational efficiency. Also, we use two different CNNs to encode entities and predicates because they typically have significantly different styles (e.g., Barack Obama vs. /people/person/place_of_birth)."
],
[
"To generate the single topic entity and predicate to form the KB query, we use a decoder with two key components:",
"An LSTM-based decoder with attention. Its hidden states at each time step $i$ , $h_{i}$ , have the same dimensionality $N$ as the embeddings of entities/predicates. The initial hidden state $h_0$ is set to the zero vector: $\\vec{0}$ .",
"A pairwise semantic relevance function that measures the similarity between the hidden units of the LSTM and the embedding of an entity or predicate candidate. It then returns the mostly likely entity or predicate based on the similarity score.",
"In the following two sections, we will first describe the LSTM decoder with attention, followed by the semantic relevance function.",
"The attention-based LSTM decoder uses a similar architecture as the one described in aligntranslate. At each time step $i$ , we feed in a context vector $c_i$ and an input vector $v_i$ into the LSTM. At time $i=1$ we feed a special input vector $v_{<{S}>}=\\vec{0}$ into the LSTM. At time $i=2$ , during training, the input vector is the embedding of the true entity, while during testing, it is the embedding of the most likely entity as determined at the previous time step.",
"We now describe how we produce the context vector $c_i$ . Let $h_{i-1}$ be the hidden state of the LSTM at time $i-1$ , $s_j$ be the $j^{th}$ question character embedding, $n$ be the number of characters in the question, $r$ be the size of $s_j$ , and $m$ be a hyperparameter. Then the context vector $c_i$ , which represents the attention-weighted content of the question, is recomputed at each time step $h_{i-1}$0 as follows: $h_{i-1}$1 $h_{i-1}$2 ",
"where $\\lbrace \\alpha \\rbrace $ is the attention distribution that is applied over each hidden unit $s_j$ , $W_a \\in R^{m \\times N}, U_a \\in R^{m \\times r},$ and $v_a \\in {R}^{1 \\times m}$ .",
"Unlike machine translation and language modeling where the vocabulary is relatively small, there are millions of entries in the KB. If we try to directly predict the KB entries, the decoder will need an output layer with millions of nodes, which is computationally prohibitive. Therefore, we resort to a relevance function that measures the semantic similarity between the decoder's hidden state and the embeddings of KB entries. Our semantic relevance function takes two vectors $x_1$ , $x_2$ and returns a distance measure of how similar they are to each other. In current experiments we use a simple cosine-similarity metric: $cos(x_1, x_2)$ .",
"Using this similarity metric, the likelihoods of generating entity $e_j$ and predicate $p_k$ are: $\n\\hspace*{0.0pt}\nP(e_j) = \\frac{exp(\\lambda cos(h_1,e_{j}))}{\\sum _{i=1}^{n} exp(\\lambda cos(h_1,e_i))}\n\\\\\nP(p_k) = \\frac{exp(\\lambda cos(h_2,p_{k}))}{\\sum _{i=1}^{m} exp(\\lambda cos(h_2,p_{i}))}\n$ ",
" where $\\lambda $ is a constant, $h_1, h_2$ are the hidden states of the LSTM at times $t=1$ and $t=2$ , $e_1,...,e_n$ are the entity embeddings, and $p_1,...,p_m$ are the predicate embeddings. A similar likelihood function was used to train the semantic similarity modules proposed in qaacl and Yih2015SemanticPV.",
"During inference, $e_1,...,e_n$ and $p_1,...,p_m$ are the embeddings of candidate entities and predicates. During training $e_1,...,e_n$ , $p_1,...,p_m$ are the embeddings of the true entity and 50 randomly-sampled entities, and the true predicate and 50 randomly-sampled predicates, respectively."
],
[
"For each question $q$ , we generate a candidate set of entities and predicates, $\\lbrace e\\rbrace $ and $\\lbrace p\\rbrace $ , and feed it through the model $f(q, \\lbrace e\\rbrace , \\lbrace p\\rbrace )$ . We then decode the most likely (entity, predicate) pair: $\n(e^*, p^*) = argmax_{e_i, p_j} (P(e_i)*P(p_j))\n$ ",
" which becomes our semantic parse.",
"We use a similar procedure as the one described in babidataset to generate candidate entities $\\lbrace e\\rbrace $ and predicates $\\lbrace p\\rbrace $ . Namely, we take all entities whose English alias is a substring of the question, and remove all entities whose alias is a substring of another entity. For each English alias, we sort each entity with this alias by the number of facts that it has in the KB, and append the top 10 entities from this list to our set of candidate entities. All predicates ${p_j}$ for each entity in our candidate entity set become the set of candidate predicates."
],
[
"Our goal in learning is to maximize the joint likelihood $P(e_c) \\cdot P(p_c)$ of predicting the correct entity $e_c$ and predicate $p_c$ pair from a set of randomly sampled entities and predicates. We use back-propagation to learn all of the weights in our model.",
"All the parameters of our model are learned jointly without pre-training. These parameters include the weights of the character-level embeddings, CNNs, and LSTMs. Weights are randomly initialized before training. For the $i^{th}$ layer in our network, each weight is sampled from a uniform distribution between $-\\frac{1}{|l^i|}$ and $\\frac{1}{|l^i|}$ , where $|l^i|$ is the number of weights in layer $i$ ."
],
[
"We evaluate the proposed model on the SimpleQuestions dataset BIBREF0 . The dataset consists of 108,442 single-relation questions and their corresponding (topic entity, predicate, answer entity) triples from Freebase. It is split into 75,910 train, 10,845 validation, and 21,687 test questions. Only 10,843 of the 45,335 unique words in entity aliases and 886 out of 1,034 unique predicates in the test set were present in the train set. For the proposed dataset, there are two evaluation settings, called FB2M and FB5M, respectively. The former uses a KB for candidate generation which is a subset of Freebase and contains 2M entities, while the latter uses subset of Freebase with 5M entities.",
"In our experiments, the Memory Neural Networks (MemNNs) proposed in babidataset serve as the baselines. For training, in addition to the 76K questions in the training set, the MemNNs use 3K training questions from WebQuestions BIBREF27 , 15M paraphrases from WikiAnswers BIBREF2 , and 11M and 12M automatically generated questions from the KB for the FB2M and FB5M settings, respectively. In contrast, our models are trained only on the 76K questions in the training set.",
"For our model, both layers of the LSTM-based question encoder have size 200. The hidden layers of the LSTM-based decoder have size 100, and the CNNs for entity and predicate embeddings have a hidden layer of size 200 and an output layer of size 100. The CNNs for entity and predicate embeddings use a receptive field of size 4, $\\lambda =5$ , and $m=100$ . We train the models using RMSProp with a learning rate of $1e^{-4}$ .",
"In order to make the input character sequence long enough to fill up the receptive fields of multiple CNN layers, we pad each predicate or entity using three padding symbols $P$ , a special start symbol, and a special end symbol. For instance, $Obama$ would become $S_{start}PPP ObamaPPPS_{end}$ . For consistency, we apply the same padding to the questions."
],
[
"Following babidataset, we report results on the SimpleQuestions dataset in terms of SQ accuracy, for both FB2M and FB5M settings in Table 1. SQ accuracy is defined as the percentage of questions for which the model generates a correct KB query (i.e., both the topic entity and predicate are correct). Our single character-level model achieves SQ accuracies of 70.9% and 70.3% on the FB2M and FB5M settings, outperforming the previous state-of-art results by 8.2% and 6.4%, respectively. Compared to the character-level model, which only has 1.2M parameters, our word-level model has 19.9M parameters, and only achieves a best SQ accuracy of 53.9%. In addition, in contrast to previous work, the OOV issue is much more severe for our word-level model, since we use no data augmentation to cover entities unseen in the train set."
],
[
"We carry out ablation studies in Sections 5.2.1 and 5.2.2 through a set of random-sampling experiments. In these experiments, for each question, we randomly sample 200 entities and predicates from the test set as noise samples. We then mix the gold entity and predicate into these negative samples, and evaluate the accuracy of our model in predicting the gold predicate or entity from this mixed set.",
"We first explore using word-level models as an alternative to character-level models to construct embeddings for questions, entities and predicates.",
"Both word-level and character-level models perform comparably well when predicting the predicate, reaching an accuracy of around 80% (Table 3). However, the word-level model has considerable difficulty generalizing to unseen entities, and is only able to predict 45% of the entities accurately from the mixed set. These results clearly demonstrate that the OOV issue is much more severe for entities than predicates, and the difficulty word-level models have when generalizing to new entities.",
"In contrast, character-level models have no such issues, and achieve a 96.6% accuracy in predicting the correct entity on the mixed set. This demonstrates that character-level models encode the semantic representation of entities and can match entity aliases in a KB with their mentions in natural language questions.",
"We also study the impact of the depth of neural networks in our model. The results are presented in Table 2. In the ablation experiments we compare the performance of a single-layer LSTM to a two-layer LSTM to encode the question, and a single-layer vs. two-layer CNN to encode the KB entries. We find that a two-layer LSTM boosts joint accuracy by over 6%. The majority of accuracy gains are a result of improved predicate predictions, possibly because entity accuracy is already saturated in this experimental setup."
],
[
"In order to further understand how the model performs question answering, we visualize the attention distribution over question characters in the decoding process. In each sub-figure of Figure 2, the x-axis is the character sequence of the question, and the y-axis is the attention weight distribution $\\lbrace \\alpha _i\\rbrace $ . The blue curve is the attention distribution when generating the entity, and green curve is the attention distribution when generating the predicate.",
"Interestingly, as the examples show, the attention distribution typically peaks at empty spaces. This indicates that the character-level model learns that a space defines an ending point of a complete linguistic unit. That is, the hidden state of the LSTM encoder at a space likely summarizes content about the character sequence before that space, and therefore contains important semantic information that the decoder needs to attend to.",
"Also, we observe that entity attention distributions are usually less sharp and span longer portions of words, such as john or rutters, than predicate attention distributions (e.g., Figure 2a). For entities, semantic information may accumulate gradually when seeing more and more characters, while for predicates, semantic information will become clear only after seeing the complete word. For example, it may only be clear that characters such as song by refer to a predicate after a space, as opposed to the name of a song such as song bye bye love (Figures 2a, 2b). In contrast, a sequence of characters starts to become a likely entity after seeing an incomplete name such as joh or rutt.",
"In addition, a character-level model can identify entities whose English aliases were never seen in training, such as phrenology (Figure 2d). The model apparently learns that words ending with the suffix nology are likely entity mentions, which is interesting because it reads in the input one character at a time.",
"Furthermore, as observed in Figure 2d, the attention model is capable of attending disjoint regions of the question and capture the mention of a predicate that is interrupted by entity mentions. We also note that predicate attention often peaks at the padding symbols after the last character of the question, possibly because sentence endings carry extra information that further help disambiguate predicate mentions. In certain scenarios, the network may only have sufficient information to build a semantic representation of the predicate after being ensured that it reached the end of a sentence. Finally, certain words in the question help identify both the entity and the predicate. For example, consider the word university in the question What type of educational institution is eastern new mexico university (Figure 2c). Although it is a part of the entity mention, it also helps disambiguate the predicate. However, previous semantic parsing-based QA approaches BIBREF10 , BIBREF1 assume that there is a clear separation between the predicate and entity mentions in the question. In contrast, the proposed model does not need to make this hard categorization, and attends the word university when predicting both the entity and predicate."
],
[
"We randomly sampled 50 questions where the best-performing model generated the wrong KB query and categorized the errors. For 46 out of the 50 examples, the model predicted a predicate with a very similar alias to the true predicate, i.e. /music/release/track vs. /music/release/track_list. For 21 out of the 50 examples, the model predicted the wrong entity, e.g., Album vs. Still Here for the question What type of album is still here?. Finally, for 18 of the 50 examples, the model predicted the wrong entity and predicate, i.e. (Play, /freebase/equivalent_topic/equivalent_type) for the question which instrument does amapola cabase play? Training on more data, augmenting the negative sample set with words from the question that are not an entity mention, and having more examples that disambiguate between similar predicates may ameliorate many of these errors."
],
[
"In this paper, we proposed a new character-level, attention-based encoder-decoder model for question answering. In our approach, embeddings of questions, entities, and predicates are all jointly learned to directly optimize the likelihood of generating the correct KB query. Our approach improved the state-of-the-art accuracy on the SimpleQuestions benchmark significantly, using much less data than previous work. Furthermore, thanks to character-level modeling, we have a compact model that is robust to unseen entities. Visualizations of the attention distribution reveal that our model, although built on character-level inputs, can learn higher-level semantic concepts required to answer a natural language question with a structured KB. In the future we would like to extend our system to handle multi-relation questions."
]
],
"section_name": [
"Introduction",
"Related Work",
"Model",
"Encoding the Question",
"Encoding Entities and Predicates in the KB",
"Decoding the KB Query",
"Inference",
"Learning",
"Dataset and Experimental Settings",
"End-to-end Results on SimpleQuestions",
"Ablation and Embedding Experiments",
"Attention Mechanisms",
"Error Analysis",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"48fa32dbb48425f17b6fd16fff8f32e56670c3f0",
"dc82600f312ebc3fd8cc7aa57c16d9bd67887bf2"
],
"answer": [
{
"evidence": [
"In our experiments, the Memory Neural Networks (MemNNs) proposed in babidataset serve as the baselines. For training, in addition to the 76K questions in the training set, the MemNNs use 3K training questions from WebQuestions BIBREF27 , 15M paraphrases from WikiAnswers BIBREF2 , and 11M and 12M automatically generated questions from the KB for the FB2M and FB5M settings, respectively. In contrast, our models are trained only on the 76K questions in the training set."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"In contrast, our models are trained only on the 76K questions in the training set."
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b",
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287"
]
},
{
"annotation_id": [
"9bb7a9f8a02cc7c9c060bc34badfc72324005ee0",
"ec9b74e0ec3a5e6c900f7e333abccc02d61a83f0"
],
"answer": [
{
"evidence": [
"In our experiments, the Memory Neural Networks (MemNNs) proposed in babidataset serve as the baselines. For training, in addition to the 76K questions in the training set, the MemNNs use 3K training questions from WebQuestions BIBREF27 , 15M paraphrases from WikiAnswers BIBREF2 , and 11M and 12M automatically generated questions from the KB for the FB2M and FB5M settings, respectively. In contrast, our models are trained only on the 76K questions in the training set."
],
"extractive_spans": [],
"free_form_answer": "None",
"highlighted_evidence": [
"In our experiments, the Memory Neural Networks (MemNNs) proposed in babidataset serve as the baselines."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In our experiments, the Memory Neural Networks (MemNNs) proposed in babidataset serve as the baselines. For training, in addition to the 76K questions in the training set, the MemNNs use 3K training questions from WebQuestions BIBREF27 , 15M paraphrases from WikiAnswers BIBREF2 , and 11M and 12M automatically generated questions from the KB for the FB2M and FB5M settings, respectively. In contrast, our models are trained only on the 76K questions in the training set.",
"We evaluate the proposed model on the SimpleQuestions dataset BIBREF0 . The dataset consists of 108,442 single-relation questions and their corresponding (topic entity, predicate, answer entity) triples from Freebase. It is split into 75,910 train, 10,845 validation, and 21,687 test questions. Only 10,843 of the 45,335 unique words in entity aliases and 886 out of 1,034 unique predicates in the test set were present in the train set. For the proposed dataset, there are two evaluation settings, called FB2M and FB5M, respectively. The former uses a KB for candidate generation which is a subset of Freebase and contains 2M entities, while the latter uses subset of Freebase with 5M entities."
],
"extractive_spans": [],
"free_form_answer": "Word-level Memory Neural Networks (MemNNs) proposed in Bordes et al. (2015)",
"highlighted_evidence": [
"In our experiments, the Memory Neural Networks (MemNNs) proposed in babidataset serve as the baselines. For training, in addition to the 76K questions in the training set, the MemNNs use 3K training questions from WebQuestions BIBREF27 , 15M paraphrases from WikiAnswers BIBREF2 , and 11M and 12M automatically generated questions from the KB for the FB2M and FB5M settings, respectively. ",
"For the proposed dataset, there are two evaluation settings, called FB2M and FB5M, respectively. The former uses a KB for candidate generation which is a subset of Freebase and contains 2M entities, while the latter uses subset of Freebase with 5M entities."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"057bf5a20e4406f1f05cf82ecd49cf4f227dd287",
"f840a836eee0180d2c976457f8b3052d8e78050c"
]
}
],
"nlp_background": [
"five",
"five"
],
"paper_read": [
"somewhat",
"somewhat"
],
"question": [
"Do the authors also try the model on other datasets?",
"What word level and character level model baselines are used?"
],
"question_id": [
"784ce5a983c5f2cc95a2c60ce66f2a8a50f3636f",
"7705dd04acedaefee30d8b2c9978537afb2040dc"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"question answering",
"question answering"
],
"topic_background": [
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Our encoder-decoder architecture that generates a query against a structured knowledge base. We encode our question via a long short-term memory (LSTM) network and an attention mechanism to produce our context vector. During decoding, at each time step, we feed the current context vector and an embedding of the English alias of the previously generated knowledge base entry into an attention-based decoding LSTM to generate the new candidate entity or predicate.",
"Table 1: Experimental results on the SimpleQuestions dataset. MemNN results are from Bordes et al. (2015). WQ, SIQ and PRP stand for WebQuestions, SimpleQuestions and paraphrases from WikiAnswers.",
"Table 2: Results for a random sampling experiment where we varied the number of layers used for convolutions and the question-encoding LSTM. We terminated training models after 14 epochs and 3 days on a GPU.",
"Table 3: Results for a random sampling experiment where we varied the embedding type (word vs. character-level). We used 2 layered-LSTMs and CNNs for all our experiments. Our models were trained for 14 epochs and 3 days.",
"Figure 2: Attention distribution over outputs of a left-to-right LSTM on question characters."
],
"file": [
"2-Figure1-1.png",
"5-Table1-1.png",
"7-Table2-1.png",
"7-Table3-1.png",
"8-Figure2-1.png"
]
} | [
"What word level and character level model baselines are used?"
] | [
[
"1604.00727-Dataset and Experimental Settings-0",
"1604.00727-Dataset and Experimental Settings-1"
]
] | [
"Word-level Memory Neural Networks (MemNNs) proposed in Bordes et al. (2015)"
] | 57 |
1612.02482 | Improving the Performance of Neural Machine Translation Involving Morphologically Rich Languages | The advent of the attention mechanism in neural machine translation models has improved the performance of machine translation systems by enabling selective lookup into the source sentence. In this paper, the efficiencies of translation using bidirectional encoder attention decoder models were studied with respect to translation involving morphologically rich languages. The English - Tamil language pair was selected for this analysis. First, the use of Word2Vec embedding for both the English and Tamil words improved the translation results by 0.73 BLEU points over the baseline RNNSearch model with 4.84 BLEU score. The use of morphological segmentation before word vectorization to split the morphologically rich Tamil words into their respective morphemes before the translation, caused a reduction in the target vocabulary size by a factor of 8. Also, this model (RNNMorph) improved the performance of neural machine translation by 7.05 BLEU points over the RNNSearch model used over the same corpus. Since the BLEU evaluation of the RNNMorph model might be unreliable due to an increase in the number of matching tokens per sentence, the performances of the translations were also compared by means of human evaluation metrics of adequacy, fluency and relative ranking. Further, the use of morphological segmentation also improved the efficacy of the attention mechanism. | {
"paragraphs": [
[
"The use of RNNs in the field of Statistical Machine Translation (SMT) has revolutionised the approaches to automated translation. As opposed to traditional shallow SMT models, which require a lot of memory to run, these neural translation models require only a small fraction of memory used, about 5% BIBREF0 . Also, neural translation models are optimized such that every module is trained to jointly improve translation quality. With that being said, one of the main downsides of neural translation models is the heavy corpus requirement in order to ensure learning of deeper contexts. This is where the application of these encoder decoder architectures in translation to and/or from morphologically rich languages takes a severe hit.",
"For any language pair, the efficiency of an MT system depends on two major factors: the availability and size of parallel corpus used for training and the syntactic divergence between the two languages i.e morphological richness, word order differences, grammatical structure etc. BIBREF0 . The main differences between the languages stem from the fact that languages similar to English are predominantly fusional languages whereas many of the morphologically rich languages are agglutinative in nature. The nature of morphologically rich languages being structurally and semantically discordant from languages like English adds to the difficulty of SMT involving such languages.",
"In morphologically rich languages, any suffix can be added to any verb or noun to simply mean one specific thing about that particular word that the suffix commonly represents (agglutination). This means that there exists a lot of inflectional forms of the same noun and verb base words, conveying similar notions. For example, in Tamil, there are at least 30,000 inflectional forms of any given verb and about 5,000 forms of inflectional forms for any noun. The merged words carry information about part of speech (POS) tags, tense, plurality and so forth that are important for analyzing text for Machine Translation (MT). Not only are these hidden meanings not captured, the corresponding root words are trained as different units, thereby increasing the complexity of developing such MT systems BIBREF1 .",
"To add to the complexities of being a morphologically rich language, there are several factors unique to Tamil that make translation very difficult. The availability of parallel corpus for Tamil is very scarce. Most of the other models in the field of English–Tamil MT have made use of their own translation corpora that were manually created for the purposes of research. Most of these corpora are not available online for use.",
"Another issue specific to Tamil is the addition of suffix characters included to the words in the language for smoothness in pronunciation. These characters are of so many different types; there is a unique suffix for each and every consonant in the language. These suffixes degrade performance of MT because the same words with different such pronounciation-based suffixes will be taken as different words in training.",
"Also to take into consideration is the existence of two different forms of the language being used. Traditionally defined Tamil and its pronunciations aren't acoustically pleasing to use. There's no linguistic flow between syllables and its usage in verbal communication is time consuming. Therefore, there exists two forms of the language, the written form, rigid in structure and syntax, and the spoken form, in which the flow and pace of the language is given priority over syntax and correctness of spelling. This divide leads to the corpus having 2 different versions of the language that increase the vocabulary even with the same words. This can be evidently seen in the corpus between the sentences used in the Bible, which is in traditional Tamil and sentences from movie subtitles, being in spoken Tamil format.",
"To account for such difficulties, a trade-off between domain specificity and size of the corpus is integral in building an English–Tamil neural MT system."
],
[
"The corpus selected for this experiment was a combination of different corpora from various domains. The major part of the corpus was made up by the EnTam v2 corpus BIBREF2 . This corpus contained sentences taken from parallel news articles, English and Tamil bible corpus and movie subtitles. It also comprised of a tourism corpus that was obtained from TDIL (Technology Development for Indian Languages) and a corpus created from Tamil novels and short stories from AU-KBC, Anna university. The complete corpus consisted of 197,792 sentences. Fig. FIGREF20 shows the skinny shift and heatmap representations of the relativity between the sentences in terms of their sentence lengths.",
"An extra monolingual Tamil corpus, collated from various online sources was used for the word2vec embedding of the Tamil target language to enhance the richness of context of the word vectors. It was also used to create the language model for the phrase-based SMT model. This corpus contained 567,772 sentences and was self-collected by combining hundreds of ancient Tamil scriptures, novels and poems by accessing the websites of popular online ebook libraries in Python using the urllib package. Since the sources had Tamil text in different encodings, the encoding scheme was standardized to be UTF-8 for the entirety of the monolingual and parallel corpora using the chardet package. The corpora were cleaned for any stray special characters, unnecessary html tags and website URLs."
],
[
"The word embeddings of the source and target language sentences are used as initial vectors of the model to improve contextualization. The skip gram model of the word2vec algorithm optimizes the vectors by accounting for the average log probability of context words given a source word. DISPLAYFORM0 ",
"where k is the context window taken for the vectorization, INLINEFORM0 refers to the INLINEFORM1 word of the corpus and INLINEFORM2 is the size of the training corpus in terms of the number of words. Here, the probabily INLINEFORM3 is computed as a hierarchical softmax of the product of the transpose of the output vector of INLINEFORM4 and the input vector of INLINEFORM5 for each and every pair over the entire vocabulary. The processes of negative sampling and subsampling of frequent words that were used in the original model aren't used in this experiment BIBREF3 .",
"For the process of creating semantically meaningful word embeddings, a monolingual corpus of 569,772 Tamil sentences was used. This gave the vectors more contextual richness due to the increased size of the corpus as opposed to using just the bilingual corpus' target side sentences BIBREF3 .",
"In the experiment, the word2vec model was trained using a vector size of 100 to ensure that the bulk of the limited memory of the GPU will be used for the neural attention translation model. It has been shown that any size over that of 150 used for word vectorization gives similar results and that a size of 100 performs close to the model with 150-sized word vectors BIBREF7 . A standard size of 5 was used as window size and the model was trained over 7 worker threads simultaneously. A batch size of 50 words was used for training. The negative sampling was set at 1 as it is the nature of morphologically rich languages to have a lot of important words that don't occur more than once in the corpus. The gensim word2vec toolkit was used to implement this word embedding process BIBREF8 ."
],
[
"The model used for translation is the one implemented by Bahdanau et al. Bahdanau2014. A bidirectional LSTM encoder first takes the source sentence and encodes it into a context vector which acts as input for the decoder. The decoder is attention-based where the hidden states of the decoder get as input the weighted sum of all the hidden layer outputs of the encoder alongwith the output of the previous hidden layer and the previously decoded word. This provides a contextual reference into the source language sentence BIBREF4 .",
"Neural Machine Translation models directly compute the probability of the target language sentence given the source language sentence, word by word for every time step. The model with a basic decoder without the attention module computes the log probability of target sentence given source sentence as the sum of log probabilities of every word given every word before that. The attention-based model, on the other hand, calculates: DISPLAYFORM0 ",
"where INLINEFORM0 is the number of words in the target sentence, INLINEFORM1 is the target sentence, INLINEFORM2 is the source sentence, INLINEFORM3 is the fixed length output vector of the encoder and INLINEFORM4 is the weighted sum of all the hidden layer outputs of the encoder at every time step. Both the encoder's output context vector and the weighted sum (known as attention vector) help to improve the quality of translation by enabling selective source sentence lookup.",
"The decoder LSTM computes: DISPLAYFORM0 ",
"where the probability is computed as a function of the decoder's output in the previous time step INLINEFORM0 , the hidden layer vector of the decoder in the current timestep INLINEFORM1 and the context vector from the attention mechanism INLINEFORM2 . The context vector INLINEFORM3 for time step INLINEFORM4 is computed as a weighted sum of the output of the entire sentence using a weight parameter INLINEFORM5 : DISPLAYFORM0 ",
"where INLINEFORM0 is the number of tokens in the source sentence, INLINEFORM1 refers to the value of the hidden layer of the encoder at time step INLINEFORM2 , and INLINEFORM3 is the alignment parameter. This parameter is calculated by means of a feed forward neural network to ensure that the alignment model is free from the difficulties of contextualization of long sentences into a single vector. The feed forward network is trained along with the neural translation model to jointly improve the performance of the translation. Mathematically, DISPLAYFORM0 DISPLAYFORM1 ",
"where INLINEFORM0 is the softmax output of the result of the feedforward network, INLINEFORM1 is the hidden state value of the decoder at timestep INLINEFORM2 and INLINEFORM3 is the encoder's hidden layer annotation at timestep INLINEFORM4 . A concatenation of the forward and the reverse hidden layer parameters of the encoder is used at each step to compute the weights INLINEFORM5 for the attention mechanism. This is done to enable an overall context of the sentence, as opposed to a context of only all the previous words of the sentence for every word in consideration. Fig. FIGREF12 is the general architecture of the neural translation model without the Bidirectional LSTM encoder.",
"A global attention mechanism is preferred over local attention because the differences in the structures of the languages cannot be mapped efficiently to enable lookup into the right parts of the source sentence. Using local attention mechanism with a monotonic context lookup, where the region around INLINEFORM0 source word is looked up for the prediction of the INLINEFORM1 target word, is impractical because of the structural discordance between the English and Tamil sentences (see Figs. FIGREF37 and FIGREF44 ). The use of gaussian and other such distributions to facilitate local attention would also be inefficient because the existence of various forms of translations for the same source sentence involving morphological and structural variations that don't stay uniform through the entire corpus BIBREF5 .",
"The No Peepholes (NP) variant of the LSTM cell, formulated in Greff et al. greff2015lstm is used in this experiment as it proved to give the best results amongst all the variants of an LSTM cell. It is specified by means of a gated mechanism designed to ensure that the vanishing gradient problem is prevented. LSTM maintains its hidden layer in two components, the cell vector INLINEFORM0 and the actual hidden layer output vector INLINEFORM1 . The cell vector is ensured to never reach zero by means of a weighted sum of the previous layer's cell vector INLINEFORM2 regulated by the forget gate INLINEFORM3 and an activation of the weighted sum of the input INLINEFORM4 in the current timestep INLINEFORM5 and the previous timestep's hidden layer output vector INLINEFORM6 . The combination is similarly regulated by the input gate INLINEFORM7 . The hidden layer output is determined as an activation of the cell gate, regulated by the output gate INLINEFORM8 . The interplay between these two vectors ( INLINEFORM9 and INLINEFORM10 ) at every timestep ensures that the problem of vanishing gradients doesn't occur. The three gates are also formed as a sigmoid of the weighted sum of the previous hidden layer output INLINEFORM11 and the input in the current timestep INLINEFORM12 . The output generated out of the LSTM's hidden layer is specified as a weighted softmax over the hidden layer output INLINEFORM13 . The learnable parameters of an LSTM cell are all the weights INLINEFORM14 and the biases INLINEFORM15 . DISPLAYFORM0 ",
"The LSTM specified by equations 7 through 11 is the one used for the decoder of the model. The encoder uses a bidirectional RNN LSTM cell in which there are two hidden layer components INLINEFORM0 and INLINEFORM1 that contribute to the output INLINEFORM2 of each time step INLINEFORM3 . Both the components have their own sets of LSTM equations in such a way that INLINEFORM4 for every timestep is computed from the first timestep till the INLINEFORM5 token is reached and INLINEFORM6 is computed from the INLINEFORM7 timestep backwards until the first token is reached. All the five vectors of the two components are all exactly the same as the LSTM equations specified with one variation in the computation of the result. DISPLAYFORM0 "
],
[
"The morphological segmentation used is a semi-supervised extension to the generative probabilistic model of maximizing the probability of a INLINEFORM0 prefix,root,postfix INLINEFORM1 recursive split up of words based on an exhaustive combination of all possible morphemes. The details of this model are specified and extensively studied in Kohonen et al. kohonen2010semi. The model parameters INLINEFORM2 include the morph type count, morph token count of training data, the morph strings and their counts. The model is trained by maximizing the Maximum A Posteriori (MAP) probability using Bayes' rule: DISPLAYFORM0 ",
"where INLINEFORM0 refers to every word in the training lexicon. The prior INLINEFORM1 is estimated using the Minimum Description Length(MDL) principle. The likelihood INLINEFORM2 is estimated as: DISPLAYFORM0 ",
"where INLINEFORM0 refers to the intermediate analyses and INLINEFORM1 refers to the INLINEFORM2 morpheme of word INLINEFORM3 .",
"An extension to the Viterbi algorithm is used for the decoding step based on exhaustive mapping of morphemes. To account for over-segmentation and under-segmentation issues associated with unsupervised morphological segmentation, extra parameters ( INLINEFORM0 ) and ( INLINEFORM1 ) are used with the cost function INLINEFORM2 DISPLAYFORM0 ",
" where INLINEFORM0 is the likelihood of the cost function, INLINEFORM1 describes the likelihood of contribution of the annotated dataset to the cost function and INLINEFORM2 is the likelihood of the labeled data. A decrease in the value of INLINEFORM3 will cause smaller segments and vice versa. INLINEFORM4 takes care of size discrepancies due to reduced availability of annotated corpus as compared to the training corpus BIBREF2 , BIBREF6 .",
"The Python extension to the morphological segmentation tool morfessor 2.0 was used for this experiment to perform the segmentation. The annotation data for Tamil language collated and released by Anoop Kunchukkutan in the Indic NLP Library was used as the semi-supervised input to the model BIBREF9 , BIBREF6 ."
],
[
"The complexities of neural machine translation of morphologically rich languages were studied with respect to English to Tamil machine translation using the RNN LSTM Bi-directional encoder attention decoder architecture. To compare with a baseline system, a phrase based SMT system was implemented using the same corpus. The Factored SMT model with source-side preprocessing by Kumar et al. kumar2014improving was used as a reference for the translation between these language pairs. Also, an additional 569,772 monolingual Tamil sentences were used for the language model of the SMT system. The model used could be split up into various modules as expanded in Fig. FIGREF17 ."
],
[
"The input source and target language sentences used for training were taken and divided into bucketed pairs of sentences of a fixed number of sizes. This relationship was determined by examining the distribution of words in the corpus primarily to minimize the number of PAD tokens in the sentence. The heat map of the number of words in the English–Tamil sentence pairs of the corpus revealed that the distribution is centered around the 10–20 words region. Therefore, more buckets in that region were applied as there would be enough number of examples in each of these bucket pairs for the model to learn about the sentences in each and every bucket. The exact scheme used for the RNNSearch models is specified by Fig. FIGREF21 . The bucketing scheme for the RNNMorph model, involving morphs instead of words, was a simple shifted scheme of the one used in Fig. FIGREF21 , where every target sentence bucket count was increased uniformly by 5."
],
[
"Due to various computational constraints and lack of availability of comprehensive corpora, the vocabularies for English and Tamil languages for the RNNSearch model were restricted to 60,000 out of 67,768 and 150,000 out of 340,325 respectively. The vocabulary of the languages for the RNNMorph didn't have to be restricted and the actual number of words in the corpus i.e. 67,768 words for English and 41,906 words for Tamil could be accommodated into the training. Words not in the vocabulary from the test set input and output were replaced with the universal INLINEFORM0 UNK INLINEFORM1 token, symbolizing an unknown word. The LSTM hidden layer size, the training batch size, and the vocabulary sizes of the languages, together, acted as a bottleneck. The model was run on a 2GB NVIDIA GeForce GT 650M card with 384 cores and the memory allotment was constrained to the limits of the GPU. Therefore, after repeated experimentation, it was determined that with a batch size of 16, the maximum hidden layer size possible was 500, which was the size used. Attempts to reduce the batch size resulted in poor convergence, and so the parameters were set to center around the batch size of 16. The models used were of 4 layers of LSTM hidden units in the bidirectional encoder and attention decoder.",
"The model used a Stochastic Gradient Descent (SGD) optimization algorithm with a sampled softmax loss of 512 per sample to handle large vocabulary size of the target language BIBREF10 . The model was trained with a learning rate 1.0 and a decay of rate 0.5 enforced manually. Gradient clipping based on the global norm of 5.0 was carried out to prevent gradients exploding and going to unrecoverable values tending towards infinity. The model described is the one used in the Tensorflow BIBREF11 seq2seq library."
],
[
"The BLEU metric parameters (modified 1-gram, 2-gram, 3-gram and 4-gram precision values) and human evaluation metrics of adequacy, fluency and relative ranking values were used to evaluate the performance of the models."
],
[
"The BLEU scores obtained using the various models used in the experiment are tabulated in Table TABREF25 .",
"The BLEU metric computes the BLEU unigram, bigram, trigram and BLEU-4 modified precision values, each micro-averaged over the test set sentences BIBREF7 . It was observed, as expected, that the performance of the phrase-based SMT model was inferior to that of the RNNSearch model. The baseline RNNSearch system was further refined by using word2vec vectors to embed semantic understanding, as observed with the slight increase in the BLEU scores. Fig. FIGREF26 plots the BLEU scores as a line graph for visualization of the improvement in performance. Also, the 4-gram BLEU scores for the various models were plotted as a bar graph in Fig. FIGREF26 ",
"Due to the agglutinative and morphologically rich nature of the target language i.e. Tamil, the use of morphological segmentation to split the words into morphemes further improved the BLEU precision values in the RNNMorph model. One of the reasons for the large extent of increase in the BLEU score could be attributed to the overall increase in the number of word units per sentence. Since the BLEU score computes micro-average precision scores, an increase in both the numerator and denominator of the precision scores is apparent with an increase in the number of tokens due to morphological segmentation of the target language. Thus, the numeric extent of the increase of accuracy might not efficiently describe the improvement in performance of the translation."
],
[
"To ensure that the increase in BLEU score correlated to actual increase in performance of translation, human evaluation metrics like adequacy, precision and ranking values (between RNNSearch and RNNMorph outputs) were estimated in Table TABREF30 . A group of 50 native people who were well-versed in both English and Tamil languages acted as annotators for the evaluation. A collection of samples of about 100 sentences were taken from the test set results for comparison. This set included a randomized selection of the translation results to ensure the objectivity of evaluation. Fluency and adequacy results for the RNNMorph results are tabulated. Adequacy rating was calculated on a 5-point scale of how much of the meaning is conveyed by the translation (All, Most, Much, Little, None). The fluency rating was calculated based on grammatical correctness on a 5-point scale of (Flawless, Good, Non-native, Disfluent, Incomprehensive). For the comparison process, the RNNMorph and the RNNSearch + Word2Vec models’ sentence level translations were individually ranked between each other, permitting the two translations to have ties in the ranking. The intra-annotator values were computed for these metrics and the scores are shown in Table TABREF32 BIBREF12 , BIBREF13 .",
"The human evaluation Kappa co-efficient results are calculated with respect to: DISPLAYFORM0 ",
"It was observed that the ranking Kappa co-efficient for intra-annotator ranking of the RNNMorph model was at 0.573, higher that the 0.410 of the RNNSearch+Word2Vec model, implying that the annotators found the RNNMorph model to produce better results when compared to the RNNSearch + Word2Vec model."
],
[
"The learning rate decay through the training process of the RNNMorph model is showcased in the graph in Fig. FIGREF34 . This process was done manually where the learning rate was decayed after the end of specific epochs based on an observed stagnation in perplexity.The RNNMorph model achieved saturation of perplexities much earlier through the epochs than the RNNSearch + Word2Vec model. This conforms to the expected outcome as the morphological segmentation has reduced the vocabulary size of the target language from 340,325 words to a mere 41,906 morphs.",
"The error function used was the sampled SoftMax loss to ensure a large target vocabulary could be accommodated BIBREF10 . A zoomed inset graph (Fig. FIGREF35 ) has been used to visualize the values of the error function for the RNNSearch + Word2Vec and RNNMorph models with 4 hidden layers. It can be seen that the RNNMorph model is consistently better in terms of the perplexity values through the time steps."
],
[
"In order to further demonstrate the quality of the RNNMorph model, the attention vectors of both the RNNSearch with Word2Vec embedding and RNNMorph models are compared for several good translations in Figs. FIGREF37 and FIGREF44 . It is observed that the reduction in vocabulary size has improved the source sentence lookup by quite an extent. Each cell in the heatmap displays the magnitude of the attention layer weight INLINEFORM0 for the INLINEFORM1 Tamil word and the INLINEFORM2 English word in the respective sentences. The intensity of black corresponds to the magnitude of the cell INLINEFORM3 . Also, the attention vectors of the RNNSearch model with Word2Vec embeddings tend to attend to INLINEFORM4 EOS INLINEFORM5 token in the middle of the sentence leading to incomplete translations. This could be due to the fact that only 44% of the Tamil vocabulary and 74% of the English vocabulary is taken for training in this model, as opposed to 100% of English and Tamil words in the RNNMorph model."
],
[
"A very large target vocabulary is an inadvertent consequence of the morphological richness of the Tamil language. This creates a potential restriction on the accuracy of the model as many inflectional forms of the same word are trained as independent units. One of the advantages of morphological segmentation of Tamil text is that the target vocabulary size decreased from 340,325 to a mere 41,906. This reduction helps improve the performance of the translation as the occurrence of unknown tokens was reduced compared to the RNNSearch model. This morphologically segmented vocabulary is divided into a collection of morphological roots and inflections as individual units."
],
[
"Some of the translations of the RNNMorph model have repetitions of the same phrases (Fig. FIGREF53 ), whereas such repetitions occur much less frequently in the RNNSearch predictions. Such translations would make for good results if the repetitions weren't present and all parts of the sentence occur just once. These repetitions might be due to the increase in the general sequence length of the target sentences because of the morphological segmentation. While it is true the target vocabulary size has decreased due to morphological segmentation, the RNNMorph has more input units (morphs) per sentence, which makes it more demanding of the LSTM's memory units and the feed forward network of the attention model. Additionally, this behavior could also be attributed to the errors in the semi-supervised morphological segmentation due to the complexities of the Tamil language and the extent of the corpus."
],
[
"The translation outputs of the RNNSearch + Word2Vec and Morph2Vec models for the same input sentences from the test set demonstrate the effectiveness of using a morphological segmentation tool and how the morphemes have changed the sentence to be more grammatically sound. It is also observed (from Fig. FIGREF55 ) that most of the translation sentences of the Morph2Vec model have no INLINEFORM0 UNK INLINEFORM1 tokens. They exist in the predictions mostly only due to a word in the English test sentence not present in the source vocabulary."
],
[
"Professors CN Krishnan, Sobha et al developed a machine-aided-translation (MAT) system similar to the Anusaakara English Hindi MT system, using a small corpus and very few transfer rules, available at AU-KBC website BIBREF14 . Balajapally et al. balajapally2006multilingual developed an example based machine translation (EBMT) system with 700000 sentences for English to INLINEFORM0 Tamil, Kannada, Hindi INLINEFORM1 transliterated text BIBREF15 , BIBREF16 . Renganathan renganathan2002interactive developed a rule based MT system for English and Tamil using grammar rules for the language pair. Vetrivel et al. vetrivel2010english used HMMs to align and translate English and Tamil parallel sentences to build an SMT system. Irvine et al. irvine2013combining tried to combine parallel and similar corpora to improve the performance of English to Tamil SMT amongst other languages. Kasthuri et al. kasthuri2014rule used a rule based MT system using transfer lexicon and morphological analysis tools. Anglabharathi was developed at IIT Kanpur, a system translating English to a collection of Indian languages including Tamil using CFG like structures to create a pseudo target to convert to Indian languages BIBREF17 , BIBREF18 . A variety of hybrid approaches have also been used for English–Tamil MT in combinations of rule based (transfer methods), interlingua representations BIBREF19 , BIBREF20 , BIBREF21 . The use of Statistical Machine Translation took over the English–Tamil MT system research because of its desirable properties of language independence, better generalization features and a reduced requirement of linguistic expertise BIBREF1 , BIBREF22 , BIBREF23 . Various enhancement techniques external to the MT system have also been proposed to improve the performance of translation using morphological pre and post processing techniques BIBREF24 , BIBREF25 , BIBREF26 .",
"The use of RNN Encoder Decoder models in machine translation has shown good results in languages with similar grammatical structure. Deep MT systems have been performing better than the other shallow SMT models recently, with the availability of computational resources and hardware making it feasible to train such models. The first of these models came in 2014, with Cho et al SecondOneByCho. The model used was the RNN LSTM encoder decoder model with the context vector output of the encoder (run for every word in the sentence) is fed to every decoder unit along with the previous word output until INLINEFORM0 EOS INLINEFORM1 is reached. This model was used to score translation results of another MT system. Sutskever et al. sutskever2014sequence created a similar encoder decoder model with the decoder getting the context vector only for the first word of the target language sentence. After that, only the decoded target outputs act as inputs to the various time steps of the decoder. One major drawback of these models is the size of the context vector of the encoder being static in nature. The same sized vector was expected to to represent sentences of arbitrary length, which was impractical when it came to very long sentences.",
"The next breakthrough came from Bahdanau et al. Bahdanau2014 where variable length word vectors were used and instead of just the context vector, a weighted sum of the inputs is given for the decoder. This enabled selective lookup to the source sentence during decoding and is known as the attention mechanism BIBREF27 . The attention mechanism was further analysed by Luong et al. luong2015effective where they made a distinction between global and local attention by means of AER scores of the attention vectors. A Gaussian distribution and a monotonic lookup were used to facilitate the corresponding local source sentence look-up."
],
[
"Thus, it is seen that the use of morphological segmentation on a morphologically rich language before translation helps with the performance of the translation in multiple ways. Thus, machine translation involving morphologically rich languages should ideally be carried out only after morphological segmentation. If the translation has to be carried out between two morphologically rich languages, then both the languages' sentences should be individually segmented based on morphology. This is because while it is true that they are both morphologically rich languages, the schemes that the languages use for the process of agglutination might be different, in which case a mapping between the units would be difficult without the segmentation.",
"One drawback of morphological segmentation is the increase in complexity of the model due to an increase in the average sentence lengths. This cannot be avoided as it is essential to enable a correspondence between the sentences of the two languages when one of them is a simple fusional language. Even with the increase in the average sentence length, the attention models that have been developed to ensure correctness of translation of long sequences can be put to good use when involving morphologically rich languages. Another point to note here is that morphologically rich languages like Tamil generally have lesser number of words per sentence than languages like English due to the inherent property of agglutination."
],
[
"The model implemented in this paper only includes source-side morphological segmentation and does not include a target side morphological agglutination to give back the output in words rather than morphemes. In order to implement an end-to-end translation system for morphologically rich languages, a morphological generator is essential because the output units of the translation cannot be morphemes.",
"The same model implemented can be further enhanced by means of a better corpus that can generalize over more than just domain specific source sentences. Also, the use of a better GPU would result in a better allocation of the hidden layer sizes and the batch sizes thereby possibly increasing the scope and accuracy of learning of the translation model.",
"Although not directly related to Machine Translation, the novel encoder– decoder architecture proposed in by Rocktaschel et al. rocktaschel2015reasoning for Natural Language Inference (NLI) can be used for the same. Their model fuses inferences from each and every individual word, summarizing information at each step, thereby linking the hidden state of the encoder with that of the decoder by means of a weighted sum, trained for optimization."
],
[
"I would like to thank Dr. M. Anand Kumar, Assistant Professor, Amrita Vishwa Vidyapeetham for his continuous support and guidance. I would also like to thank Dr. Arvindan, Professor, SSN College Of Engineering for his inputs and suggestions. "
]
],
"section_name": [
"Introduction",
"Corpus",
"Word2Vec",
"Neural Translation Model",
"Morphological Segmentation",
"Experiment",
"Bucketing",
"Model Details",
"Results and Discussion",
"BLEU Evaluation",
"Human Evaluation",
"Model Parameters",
"Attention Vectors",
"Target vocabulary size",
"Repetitions",
"Model Outputs",
"Related Work",
"Conclusion",
"Future Work",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"4b3fcee528d9f3756a11b9260219468955c08563",
"9338c02595f8d5c25944069fe08f13b19279ab58"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"8e3abf2a41a81f90c7cc1c23ff8bd3c504eaff1e",
"fe98e062f294135147956cfc430c648387e78d16"
],
"answer": [
{
"evidence": [
"To ensure that the increase in BLEU score correlated to actual increase in performance of translation, human evaluation metrics like adequacy, precision and ranking values (between RNNSearch and RNNMorph outputs) were estimated in Table TABREF30 . A group of 50 native people who were well-versed in both English and Tamil languages acted as annotators for the evaluation. A collection of samples of about 100 sentences were taken from the test set results for comparison. This set included a randomized selection of the translation results to ensure the objectivity of evaluation. Fluency and adequacy results for the RNNMorph results are tabulated. Adequacy rating was calculated on a 5-point scale of how much of the meaning is conveyed by the translation (All, Most, Much, Little, None). The fluency rating was calculated based on grammatical correctness on a 5-point scale of (Flawless, Good, Non-native, Disfluent, Incomprehensive). For the comparison process, the RNNMorph and the RNNSearch + Word2Vec models’ sentence level translations were individually ranked between each other, permitting the two translations to have ties in the ranking. The intra-annotator values were computed for these metrics and the scores are shown in Table TABREF32 BIBREF12 , BIBREF13 ."
],
"extractive_spans": [],
"free_form_answer": "50 human annotators ranked a random sample of 100 translations by Adequacy, Fluency and overall ranking on a 5-point scale.",
"highlighted_evidence": [
"A group of 50 native people who were well-versed in both English and Tamil languages acted as annotators for the evaluation. A collection of samples of about 100 sentences were taken from the test set results for comparison. This set included a randomized selection of the translation results to ensure the objectivity of evaluation. Fluency and adequacy results for the RNNMorph results are tabulated. Adequacy rating was calculated on a 5-point scale of how much of the meaning is conveyed by the translation (All, Most, Much, Little, None). The fluency rating was calculated based on grammatical correctness on a 5-point scale of (Flawless, Good, Non-native, Disfluent, Incomprehensive). For the comparison process, the RNNMorph and the RNNSearch + Word2Vec models’ sentence level translations were individually ranked between each other, permitting the two translations to have ties in the ranking."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To ensure that the increase in BLEU score correlated to actual increase in performance of translation, human evaluation metrics like adequacy, precision and ranking values (between RNNSearch and RNNMorph outputs) were estimated in Table TABREF30 . A group of 50 native people who were well-versed in both English and Tamil languages acted as annotators for the evaluation. A collection of samples of about 100 sentences were taken from the test set results for comparison. This set included a randomized selection of the translation results to ensure the objectivity of evaluation. Fluency and adequacy results for the RNNMorph results are tabulated. Adequacy rating was calculated on a 5-point scale of how much of the meaning is conveyed by the translation (All, Most, Much, Little, None). The fluency rating was calculated based on grammatical correctness on a 5-point scale of (Flawless, Good, Non-native, Disfluent, Incomprehensive). For the comparison process, the RNNMorph and the RNNSearch + Word2Vec models’ sentence level translations were individually ranked between each other, permitting the two translations to have ties in the ranking. The intra-annotator values were computed for these metrics and the scores are shown in Table TABREF32 BIBREF12 , BIBREF13 ."
],
"extractive_spans": [
"adequacy, precision and ranking values"
],
"free_form_answer": "",
"highlighted_evidence": [
"To ensure that the increase in BLEU score correlated to actual increase in performance of translation, human evaluation metrics like adequacy, precision and ranking values (between RNNSearch and RNNMorph outputs) were estimated in Table TABREF30 . A group of 50 native people who were well-versed in both English and Tamil languages acted as annotators for the evaluation. A collection of samples of about 100 sentences were taken from the test set results for comparison. This set included a randomized selection of the translation results to ensure the objectivity of evaluation. Fluency and adequacy results for the RNNMorph results are tabulated. Adequacy rating was calculated on a 5-point scale of how much of the meaning is conveyed by the translation (All, Most, Much, Little, None). The fluency rating was calculated based on grammatical correctness on a 5-point scale of (Flawless, Good, Non-native, Disfluent, Incomprehensive)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"infinity",
"infinity"
],
"paper_read": [
"no",
"no"
],
"question": [
"By how much do they improve the efficacy of the attention mechanism?",
"How were the human judgements assembled?"
],
"question_id": [
"44497509fdf5e87cff05cdcbe254fbd288d857ad",
"0ee73909ac638903da4a0e5565c8571fc794ab96"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Neural Translation Model.",
"Figure 2: RNNMorph in Training.",
"Figure 3: Corpus Analysis.",
"Figure 4: Bucketing.",
"Table 1: BLEU Scores for different models",
"Figure 5: BLEU Evaluation",
"Table 2: RNNMorph Intra-Annotator Agreement",
"Table 3: Intra-Annotator Ranking",
"Figure 6: Learning Rate Decay.",
"Figure 7: Perplexity Function.",
"Figure 8: Comparison of Attention Vectors - 1",
"Figure 9: Comparison of Attention Vectors - 2",
"Figure 10: Repetitions in RNNMorph model.",
"Figure 11: Translation Results."
],
"file": [
"6-Figure1-1.png",
"8-Figure2-1.png",
"8-Figure3-1.png",
"9-Figure4-1.png",
"10-Table1-1.png",
"11-Figure5-1.png",
"11-Table2-1.png",
"12-Table3-1.png",
"12-Figure6-1.png",
"13-Figure7-1.png",
"14-Figure8-1.png",
"15-Figure9-1.png",
"16-Figure10-1.png",
"17-Figure11-1.png"
]
} | [
"How were the human judgements assembled?"
] | [
[
"1612.02482-Human Evaluation-0"
]
] | [
"50 human annotators ranked a random sample of 100 translations by Adequacy, Fluency and overall ranking on a 5-point scale."
] | 58 |
1904.10503 | Fine-Grained Named Entity Recognition using ELMo and Wikidata | Fine-grained Named Entity Recognition is a task whereby we detect and classify entity mentions to a large set of types. These types can span diverse domains such as finance, healthcare, and politics. We observe that when the type set spans several domains the accuracy of the entity detection becomes a limitation for supervised learning models. The primary reason being the lack of datasets where entity boundaries are properly annotated, whilst covering a large spectrum of entity types. Furthermore, many named entity systems suffer when considering the categorization of fine grained entity types. Our work attempts to address these issues, in part, by combining state-of-the-art deep learning models (ELMo) with an expansive knowledge base (Wikidata). Using our framework, we cross-validate our model on the 112 fine-grained entity types based on the hierarchy given from the Wiki(gold) dataset. | {
"paragraphs": [
[
"Named entity recognition (NER) BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 is the process by which we identify text spans which mention named entities, and to classify them into predefined categories such as person, location, organization etc. NER serves as the basis for a variety of natural language processing (NLP) applications such as relation extraction BIBREF4 , machine translation BIBREF5 , question answering BIBREF6 and knowledge base construction BIBREF7 . Although early NER systems have been successful in producing adequate recognition accuracy, they often require significant human effort in carefully designing rules or features.",
"In recent years, deep learning methods been employed in NER systems, yielding state-of-the-art performance. However, the number of types detected are still not sufficient for certain domain-specific applications. For relation extraction, identifying fine-grained types has been shown to significantly increase the performance of the extractor BIBREF8 , BIBREF9 since this helps in filtering out candidate relation types which do not follow this type constraint. Furthermore, for question answering fine-grained Named Entity Recognition (FgNER) can provide additional information helping to match questions to its potential answers thus improving performance BIBREF10 . For example, Li and Roth BIBREF11 rank questions based on their expected answer types (i.e. will the answer be food, vehicle or disease).",
"Typically, FgNER systems use over a hundred labels, arranged in a hierarchical structure. We find that available training data for FgNER typically contain noisy labels, and creating manually annotated training data for FgNER is a time-consuming process. Furthermore, human annotators will have to assign a subset of correct labels from hundreds of possible labels making this a somewhat arduous task. Currently, FgNER systems use distant supervision BIBREF12 to automatically generate training data. Distant supervision is a technique which maps each entity in the corpus to knowledge bases such as Freebase BIBREF13 , DBpedia BIBREF14 , YAGO BIBREF15 and helps with the generation of labeled data. This method will assign the same set of labels to all mentions of a particular entity in the corpus. For example, “Barack Obama” is a person, politician, lawyer, and author. If a knowledge base has these four matching labels, the distant supervision technique will assign all of them to every mention of “Barack Obama”. Therefore, the training data will also fail to distinguish between mentions of “Barack Obama” in all subsequent utterances.",
"Ling et al. ling2012fine proposed the first system for FgNER, where they used 112 overlapping labels with a linear classifier perceptron for multi-label classification. Yosef et al. spaniol2012hyena used multiple binary SVM classifiers to assign entities to a set of 505 types. Gillick et al. gillick2014context introduced context dependent FgNER and proposed a set of heuristics for pruning labels that might not be relevant given the local context of the entity. Yogatama et al. yogatama2015embedding proposed an embedding based model where user-defined features and labels were embedded into a low dimensional feature space to facilitate information sharing among labels.",
"Shimaoka et al. shimaoka2016attentive proposed an attentive neural network model which used long short-term memory (LSTMs) to encode the context of the entity, then used an attention mechanism to allow the model to focus on relevant expressions in the entity mention's context. To learn entity representations, we propose a scheme which is potentially more generalizable."
],
[
"We evaluate our model on two publicly available datasets. The statistics for both are shown in Table TABREF3 . The details of these datasets are as follows:",
"OntoNotes: OntoNotes 5.0 BIBREF16 includes texts from five different text genres: broadcast conversation (200k), broadcast news (200k), magazine (120k), newswire (625k), and web data (300k). This dataset is annotated with 18 categories.",
"Wiki(gold): The training data consists of Wikipedia sentences and was automatically generated using a distant supervision method, mapping hyperlinks in Wikipedia articles to Freebase, which we do not use in this study. The test data, mainly consisting of sentences from news reports, was manually annotated as described in BIBREF8 . The class hierarchy is shown in Figure FIGREF2 . This dataset is annotated with 7 main categories (bold text in Figure FIGREF2 ), which maps directly to OntoNotes. The miscellaneous category in Figure FIGREF2 does not have direct mappings, so future work may include redefining these categories so the mappings are more meaningful."
],
[
"NER involves identifying both entity boundaries and entity types. With “exact-match evaluation”, a named entity is considered correctly recognized only if both the boundaries and type match the ground truth BIBREF8 , BIBREF17 , BIBREF18 . Precision, Recall, and F-1 scores are computed on the number of true positives (TP), false positives (FP), and false negatives (FN). Their formal definitions are as follows:",
"True Positive (TP): entities that are recognized by NER and match the ground truth.",
"False Positive (FP): entities that are recognized by NER but do not match the ground truth.",
"False Negative (FN): entities annotated in the ground which that are not recognized by NER.",
"Precision measures the ability of a NER system to present only correct entities, and Recall measures the ability of a NER system to recognize all entities in a corpus. DISPLAYFORM0 ",
"The F-1 score is the harmonic mean of precision and recall, and the balanced F-1 score is the variant which is most commonly used. This is defined as: DISPLAYFORM0 ",
"Since most NER systems involve multiple entity types, it is often required to assess the performance across all entity classes. Two measures are commonly used for this purpose: the macro-averaged F-1 score and the micro-averaged F-1 score. The macro-averaged F-1 score computes the F-1 score independently for each entity type, then takes the average (hence treating all entity types equally). The micro-averaged F-1 score aggregates the contributions of entities from all classes to compute the average (treating all entities equally). We use the micro-averaged F-1 in our study since this accounts for label imbalances in the evaluation data and therefore a more meaningful statistic."
],
[
"Over the few past years, the emergence of deep neural networks has fundamentally changed the design of entity detection systems. Consequently, recurrent neural networks (RNN) have found popularity in the field since they are able to learn long term dependencies of sequential data. The recent success of neural network based architectures principally comes from its deep structure. Training a deep neural network, however, is a difficult problem due to vanishing or exploding gradients. In order to solve this, LSTMs were proposed. An LSTM is an internal memory cell controlled by forget gate and input gate networks. A forget gate in an LSTM layer which determines how much prior memory should be passed into the next time increment. Similarly, an input gate scales new input to memory cells. Depending on the states of both gates, LSTM is able to capture long-term or short-term dependencies for sequential data. This is an ideal property for many NLP tasks."
],
[
"Recently, Peters et al. BIBREF19 proposed ELMo word representations. ELMo extends a traditional word embedding model with features produced bidirectionally with character convolutions. It has been shown that the utilization of ELMo for different NLP tasks result in improved performance compared to other types of word embedding models such as Word2Vec BIBREF20 , GloVe BIBREF21 , and fastText BIBREF22 .",
"The architecture of our proposed model is shown in Figure FIGREF12 . The input is a list of tokens and the output are the predicted entity types. The ELMo embeddings are then used with a residual LSTM to learn informative morphological representations from the character sequence of each token. We then pass this to a softmax layer as a tag decoder to predict the entity types.",
"Hyperparameter settings: The hidden-layer size of each LSTM within the model is set 512. We use a dropout with the probability of 0.2 on the output of the LSTM encoders. The embedding dimension from ELMo is 1024. The optimization method we use is Adam BIBREF23 . We train with a batch size of 32 for 30 epochs. The model was implemented using the TensorFlow framework."
],
[
"Entity linking (EL) BIBREF24 , also known as named entity disambiguation or normalization, is the task to determine the identity of entities mentioned in a piece of text with reference to a knowledge base. There are a number of knowledge bases that provide a background repository for entity classification of this type. For this study, we use Wikidata, which can be seen diagrammatically in Figure FIGREF12 . Systems such as DeepType BIBREF25 integrate symbolic information into the reasoning process of a neural network with a type system and show state-of-the-art performances for EL. They do not, however, quote results on Wiki(gold) so a direct comparison is difficult.",
"While these knowledge bases provide semantically rich and fine-granular classes and relationship types, the task of entity classification often requires associating coarse-grained classes with discovered surface forms of entities. Most existing studies consider NER and entity linking as two separate tasks, whereas we try to combine the two. It has been shown that one can significantly increase the semantic information carried by a NER system when we successfully linking entities from a deep learning method to the related entities from a knowledge base BIBREF26 , BIBREF27 .",
"Redirection: For the Wikidata linking element, we recognize that the lookup will be constrained by the most common lookup name for each entity. Consider the utterance (referring to the NBA basketball player) from Figure FIGREF12 “Michael Jeffrey Jordan in San Jose” as an example. The lookup for this entity in Wikidata is “Michael Jordan” and consequently will not be picked up if we were to use an exact string match. A simple method to circumvent such a problem is the usage of a redirection list. Such a list is provided on an entity by entity basis in the “Also known as” section in Wikidata. Using this redirection list, when we do not find an exact string match improves the recall of our model by 5-10%. Moreover, with the example of Michael Jordan (person), using our current framework, we will always refer to the retired basketball player (Q41421). We will never, for instance, pick up Michael Jordan (Q27069141) the American football cornerback. Or in fact any other Michael Jordan, famous or otherwise. One possible method to overcome this is to add a disambiguation layer, which seeks to use context from earlier parts of the text. This is, however, work for future improvement and we only consider the most common version of that entity.",
"Clustering: The Wikidata taxonomy provides thousands of possible instance of, and subclass of types for our entities. Consequently, in order to perform a meaningful validation of our model, we must find a way to cluster these onto the 112 types provided by Wiki(gold). Our clustering is performed as follows:",
"If the entity type is either person, location, organization we use the NECKAr BIBREF28 tool to narrow down our list of searchable entities.",
"We then look at either the occupation for person, or instance of for location/organization categories to map to the available subtypes.",
"If the entity type is not person, location, or organization we search all of Wikidata.",
"The clustering we perform in part 1 or 2 is from a cosine similarity of the entity description to the list of possible subtypes for that entity. For this we use Word2Vec word embeddings trained on Wikipedia. We set the minimum threshold of the average cosine similarity to be 0.1.",
"As an example, consider the test sentence: “The device will be available on sale on 20th April 2011 on amazon uk Apple's iPad” from Figure FIGREF18 . First, we tag iPad as product using the context encoder described in Section 2.1. We then search Wikidata and return the most common variant for that entity in this case Q2796 (the most referenced variant is the one with the lowest Q-id). We then calculate a cosine similarity of the description, in this case “line of tablet computers”, with the possible subtypes of product. The possible subtypes, in this case, are engine, airplane, car, ship, spacecraft, train, camera, mobile phone, computer, software, game, instrument, ship, weapon. We return the highest result above 0.1, which in this case is computer (0.54)."
],
[
"The results for each class type are shown in Table TABREF19 , with some specific examples shown in Figure FIGREF18 . For the Wiki(gold) we quote the micro-averaged F-1 scores for the entire top level entity category. The total F-1 score on the OntoNotes dataset is 88%, and the total F-1 cross-validation score on the 112 class Wiki(gold) dataset is 53%. It is worth noting that one could improve Wiki(gold) results by training directly using this dataset. However, the aim is not to tune our model specifically on this class hierarchy. We instead aim to present a framework which can be modified easily to any domain hierarchy and has acceptable out-of-the-box performances to any fine-grained dataset. The results in Table TABREF19 (OntoNotes) only show the main 7 categories in OntoNotes which map to Wiki(gold) for clarity. The other categories (date, time, norp, language, ordinal, cardinal, quantity, percent, money, law) have F-1 scores between 80-90%, with the exception of time (65%)"
],
[
"In this paper, we present a deep neural network model for the task of fine-grained named entity classification using ELMo embeddings and Wikidata. The proposed model learns representations for entity mentions based on its context and incorporates the rich structure of Wikidata to augment these labels into finer-grained subtypes. We can see comparisons of our model made on Wiki(gold) in Table TABREF20 . We note that the model performs similarly to existing systems without being trained or tuned on that particular dataset. Future work may include refining the clustering method described in Section 2.2 to extend to types other than person, location, organization, and also to include disambiguation of entity types."
]
],
"section_name": [
"Introduction",
"Datasets",
"Evaluation Metrics",
"Method",
"NER using ELMo",
"Entity Linking using Wikidata",
"Results",
"Conclusion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"53186f2cad76fa7a42df1d8c21d79ec1ddb59da5",
"dd9156a85e765761c2c3a9bb694b18bad07c054c"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 3: Comparison with existing models."
],
"extractive_spans": [],
"free_form_answer": "Akbik et al. (2018), Link et al. (2012)",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: Comparison with existing models."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this paper, we present a deep neural network model for the task of fine-grained named entity classification using ELMo embeddings and Wikidata. The proposed model learns representations for entity mentions based on its context and incorporates the rich structure of Wikidata to augment these labels into finer-grained subtypes. We can see comparisons of our model made on Wiki(gold) in Table TABREF20 . We note that the model performs similarly to existing systems without being trained or tuned on that particular dataset. Future work may include refining the clustering method described in Section 2.2 to extend to types other than person, location, organization, and also to include disambiguation of entity types.",
"FLOAT SELECTED: Table 3: Comparison with existing models."
],
"extractive_spans": [],
"free_form_answer": "They compare to Akbik et al. (2018) and Link et al. (2012).",
"highlighted_evidence": [
"We can see comparisons of our model made on Wiki(gold) in Table TABREF20 .",
"FLOAT SELECTED: Table 3: Comparison with existing models."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"4cbb4c87c1ba12f24c101941969835ae60d88c5b",
"f4eb7f67fb41888ab6b1fa018211c89994b6e8fe"
],
"answer": [
{
"evidence": [
"The results for each class type are shown in Table TABREF19 , with some specific examples shown in Figure FIGREF18 . For the Wiki(gold) we quote the micro-averaged F-1 scores for the entire top level entity category. The total F-1 score on the OntoNotes dataset is 88%, and the total F-1 cross-validation score on the 112 class Wiki(gold) dataset is 53%. It is worth noting that one could improve Wiki(gold) results by training directly using this dataset. However, the aim is not to tune our model specifically on this class hierarchy. We instead aim to present a framework which can be modified easily to any domain hierarchy and has acceptable out-of-the-box performances to any fine-grained dataset. The results in Table TABREF19 (OntoNotes) only show the main 7 categories in OntoNotes which map to Wiki(gold) for clarity. The other categories (date, time, norp, language, ordinal, cardinal, quantity, percent, money, law) have F-1 scores between 80-90%, with the exception of time (65%)"
],
"extractive_spans": [],
"free_form_answer": "F-1 score on the OntoNotes is 88%, and it is 53% on Wiki (gold).",
"highlighted_evidence": [
"The total F-1 score on the OntoNotes dataset is 88%, and the total F-1 cross-validation score on the 112 class Wiki(gold) dataset is 53%. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The results for each class type are shown in Table TABREF19 , with some specific examples shown in Figure FIGREF18 . For the Wiki(gold) we quote the micro-averaged F-1 scores for the entire top level entity category. The total F-1 score on the OntoNotes dataset is 88%, and the total F-1 cross-validation score on the 112 class Wiki(gold) dataset is 53%. It is worth noting that one could improve Wiki(gold) results by training directly using this dataset. However, the aim is not to tune our model specifically on this class hierarchy. We instead aim to present a framework which can be modified easily to any domain hierarchy and has acceptable out-of-the-box performances to any fine-grained dataset. The results in Table TABREF19 (OntoNotes) only show the main 7 categories in OntoNotes which map to Wiki(gold) for clarity. The other categories (date, time, norp, language, ordinal, cardinal, quantity, percent, money, law) have F-1 scores between 80-90%, with the exception of time (65%)"
],
"extractive_spans": [
" total F-1 score on the OntoNotes dataset is 88%",
"total F-1 cross-validation score on the 112 class Wiki(gold) dataset is 53%"
],
"free_form_answer": "",
"highlighted_evidence": [
"The total F-1 score on the OntoNotes dataset is 88%, and the total F-1 cross-validation score on the 112 class Wiki(gold) dataset is 53%."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"5573af4ed6908677bb3e911b0699fecd657a3e38",
"840ad086e4d17de937b07c8cce677a91e42ecaf6"
],
"answer": [
{
"evidence": [
"While these knowledge bases provide semantically rich and fine-granular classes and relationship types, the task of entity classification often requires associating coarse-grained classes with discovered surface forms of entities. Most existing studies consider NER and entity linking as two separate tasks, whereas we try to combine the two. It has been shown that one can significantly increase the semantic information carried by a NER system when we successfully linking entities from a deep learning method to the related entities from a knowledge base BIBREF26 , BIBREF27 .",
"Redirection: For the Wikidata linking element, we recognize that the lookup will be constrained by the most common lookup name for each entity. Consider the utterance (referring to the NBA basketball player) from Figure FIGREF12 “Michael Jeffrey Jordan in San Jose” as an example. The lookup for this entity in Wikidata is “Michael Jordan” and consequently will not be picked up if we were to use an exact string match. A simple method to circumvent such a problem is the usage of a redirection list. Such a list is provided on an entity by entity basis in the “Also known as” section in Wikidata. Using this redirection list, when we do not find an exact string match improves the recall of our model by 5-10%. Moreover, with the example of Michael Jordan (person), using our current framework, we will always refer to the retired basketball player (Q41421). We will never, for instance, pick up Michael Jordan (Q27069141) the American football cornerback. Or in fact any other Michael Jordan, famous or otherwise. One possible method to overcome this is to add a disambiguation layer, which seeks to use context from earlier parts of the text. This is, however, work for future improvement and we only consider the most common version of that entity."
],
"extractive_spans": [],
"free_form_answer": "Entities from a deep learning model are linked to the related entities from a knowledge base by a lookup.",
"highlighted_evidence": [
"It has been shown that one can significantly increase the semantic information carried by a NER system when we successfully linking entities from a deep learning method to the related entities from a knowledge base BIBREF26 , BIBREF27 .",
"Redirection: For the Wikidata linking element, we recognize that the lookup will be constrained by the most common lookup name for each entity. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The architecture of our proposed model is shown in Figure FIGREF12 . The input is a list of tokens and the output are the predicted entity types. The ELMo embeddings are then used with a residual LSTM to learn informative morphological representations from the character sequence of each token. We then pass this to a softmax layer as a tag decoder to predict the entity types."
],
"extractive_spans": [
"ELMo embeddings are then used with a residual LSTM to learn informative morphological representations from the character sequence of each token"
],
"free_form_answer": "",
"highlighted_evidence": [
"The input is a list of tokens and the output are the predicted entity types. The ELMo embeddings are then used with a residual LSTM to learn informative morphological representations from the character sequence of each token. We then pass this to a softmax layer as a tag decoder to predict the entity types."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Which other approaches do they compare their model with?",
"What results do they achieve using their proposed approach?",
"How do they combine a deep learning model with a knowledge base?"
],
"question_id": [
"5a65ad10ff954d0f27bb3ccd9027e3d8f7f6bb76",
"729694a9fe1e05d329b7a4078a596fe606bc5a95",
"1c997c268c68149ae6fb43d83ffcd53f0e7fe57e"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: Statistics of the datasets used in this work.",
"Figure 1: The 112 tags used in Wiki(GOLD). The tags in bold are extracted in the step described in Section 2.1. The finer grained tags are extracted as a final step described in Section 2.2.",
"Figure 2: The full model pipeline. The first level involves token embeddings from ELMo which are fed into a residual LSTM module. The final layer involves passing the detected entities into a knowledge base, which in our case is Wikidata.",
"Figure 3: Some example outputs from the full model pipeline on the Wiki(GOLD) evaluation set.",
"Table 2: Performance of our model from the NER classifier evaluated on OntoNotes, and the 112 subclass Wikidata linking step evaluated on Wiki(GOLD). The first column denotes the percentage breakdown per class type. The precision, recall, and F-1 scores are shown for Wiki(GOLD). For OntoNotes the precision and recall are identical for each category, therefore we only quote F-1. All values are quoted as a percentage and rounded to the nearest whole number. Since the table only shows 7 categories, the percentages will not sum to 100.",
"Table 3: Comparison with existing models."
],
"file": [
"2-Table1-1.png",
"2-Figure1-1.png",
"4-Figure2-1.png",
"5-Figure3-1.png",
"5-Table2-1.png",
"6-Table3-1.png"
]
} | [
"Which other approaches do they compare their model with?",
"What results do they achieve using their proposed approach?",
"How do they combine a deep learning model with a knowledge base?"
] | [
[
"1904.10503-Conclusion and Future Work-0",
"1904.10503-6-Table3-1.png"
],
[
"1904.10503-Results-0"
],
[
"1904.10503-NER using ELMo-1",
"1904.10503-Entity Linking using Wikidata-1",
"1904.10503-Entity Linking using Wikidata-2"
]
] | [
"They compare to Akbik et al. (2018) and Link et al. (2012).",
"F-1 score on the OntoNotes is 88%, and it is 53% on Wiki (gold).",
"Entities from a deep learning model are linked to the related entities from a knowledge base by a lookup."
] | 60 |
1912.01772 | A Resource for Computational Experiments on Mapudungun | We present a resource for computational experiments on Mapudungun, a polysynthetic indigenous language spoken in Chile with upwards of 200 thousand speakers. We provide 142 hours of culturally significant conversations in the domain of medical treatment. The conversations are fully transcribed and translated into Spanish. The transcriptions also include annotations for code-switching and non-standard pronunciations. We also provide baseline results on three core NLP tasks: speech recognition, speech synthesis, and machine translation between Spanish and Mapudungun. We further explore other applications for which the corpus will be suitable, including the study of code-switching, historical orthography change, linguistic structure, and sociological and anthropological studies. | {
"paragraphs": [
[
"Recent years have seen unprecedented progress for Natural Language Processing (NLP) on almost every NLP subtask. Even though low-resource settings have also been explored, this progress has overwhelmingly been observed in languages with significant data resources that can be leveraged to train deep neural networks. Low-resource languages still lag behind.",
"Endangered languages pose an additional challenge. The process of documenting an endangered language typically includes the creation of word lists, audio and video recordings, notes, or grammar fragments, with the created resources then stored in large online linguistics archives. This process is often hindered by the Transcription Bottleneck: the linguistic fieldworker and the language community may not have time to transcribe all of the recordings and may only transcribe segments that are linguistically salient for publication or culturally significant for the creation of community resources.",
"With this work we make publicly available a large corpus in Mapudungun, a language of the indigenous Mapuche people of southern Chile and western Argentina. We hope to ameliorate the resource gap and the transcription bottleneck in two ways. First, we are providing a larger data set than has previously been available, and second, we are providing baselines for NLP tasks (speech recognition, speech synthesis, and machine translation). In providing baselines and datasets splits, we hope to further facilitate research on low-resource NLP for this language through our data set. Research on low-resource speech recognition is particularly important in relieving the transcription bottleneck, while tackling the research challenges that speech synthesis and machine translation pose for such languages could lead to such systems being deployed to serve more under-represented communities."
],
[
"Mapudungun (iso 639-3: arn) is an indigenous language of the Americas spoken natively in Chile and Argentina, with an estimated 100 to 200 thousand speakers in Chile and 27 to 60 thousand speakers in Argentina BIBREF0. It is an isolate language and is classified as threatened by Ethnologue, hence the critical importance of all documentary efforts. Although the morphology of nouns is relatively simple, Mapudungun verb morphology is highly agglutinative and complex. Some analyses provide as many as 36 verb suffix slots BIBREF1. A typical complex verb form occurring in our corpus of spoken Mapudungun consists of five or six morphemes.",
"Mapudungun has several interesting grammatical properties. It is a polysynthetic language in the sense of BIBREF2; see BIBREF3 for explicit argumentation. As with other polysynthetic languages, Mapudungun has Noun Incorporation; however, it is unique insofar as the Noun appears to the right of the Verb, instead of to the left, as in most polysynthetic languages BIBREF4. One further distinction of Mapudungun is that, whereas other polysynthetic languages are characterized by a lack of infinitives, Mapudungun has infinitival verb forms; that is, while subordinate clauses in Mapudungun closely resemble possessed nominals and may occur with an analytic marker resembling possessor agreement, there is no agreement inflection on the verb itself. One further remarkable property of Mapudungun is its inverse voice system of agreement, whereby the highest agreement is with the argument highest in an animacy hierarchy regardless of thematic role BIBREF5."
],
[
"The resource is comprised of 142 hours of spoken Mapudungun that was recorded during the AVENUE project BIBREF6 in 2001 to 2005. The data was recorded under a partnership between the AVENUE project, funded by the US National Science Foundation at Carnegie Mellon University, the Chilean Ministry of Education (Mineduc), and the Instituto de Estudios Indígenas at Universidad de La Frontera, originally spanning 170 hours of audio. We have recently cleaned the data and are releasing it publicly for the first time (although it has been shared with individual researchers in the past) along with NLP baselines.",
"The recordings were transcribed and translated into Spanish at the Instituto de Estudios Indígenas at Universidad de La Frontera. The corpus covers three dialects of Mapudungun: about 110 hours of Nguluche, 20 hours of Lafkenche and 10 hours of Pewenche. The three dialects are quite similar, with some minor semantic and phonetic differences. The fourth traditionally distinguished dialect, Huilliche, has several grammatical differences from the other three and is classified by Ethnologue as a separate language, iso 639-3: huh, and as nearly extinct.",
"The recordings are restricted to a single domain: primary, preventive, and treatment health care, including both Western and Mapuche traditional medicine. The recording sessions were conducted as interactive conversations so as to be natural in Mapuche culture, and they were open-ended, following an ethnographic approach. The interviewer was trained in these methods along with the use of the digital recording systems that were available at the time. We also followed human subject protocol. Each person signed a consent form to release the recordings for research purposes and the data have been accordingly anonymized. Because Machi (traditional Mapuche healers) were interviewed, we asked the transcribers to delete any culturally proprietary knowledge that a Machi may have revealed during the conversation. Similarly, we deleted any names or any information that may identify the participants.",
"The corpus is culturally relevant because it was created by Mapuche people, using traditional ways of relating to each other in conversations. They discussed personal experiences with primary health care in the traditional Mapuche system and the Chilean health care system, talking about illnesses and the way they were cured. The participants ranged from 16 years old to 100 years old, almost in equal numbers of men and women, and they were all native speakers of Mapudungun."
],
[
"At the time of the collection and transcription of the corpus, the orthography of Mapudungun was not standardized. The Mapuche team at the Instituto de Estudios Indígenas (IEI – Institute for Indigenous Studies) developed a supra-dialectal alphabet that comprises 28 letters that cover 32 phones used in the three Mapudungun variants. The main criterion for choosing alphabetic characters was to use the current Spanish keyboard that was available on all computers in Chilean offices and schools. The alphabet used the same letters used in Spanish for those phonemes that sound like Spanish phonemes. Diacritics such as apostrophes were used for sounds that are not found in Spanish.",
"As a result, certain orthographic conventions that were made at the time deviate from the now-standard orthography of Mapudungun, Azumchefe. We plan to normalize the orthography of the corpus, and in fact a small sample has already been converted to the modern orthography. However, we believe that the original transcriptions will also be invaluable for academic, historical, and cultural purposes, hence we release the corpus using these conventions."
],
[
"In addition, the transcription includes annotations for noises and disfluencies including aborted words, mispronunciations, poor intelligibility, repeated and corrected words, false starts, hesitations, undefined sound or pronunciations, non-verbal articulations, and pauses. Foreign words, in this case Spanish words, are also labelled as such."
],
[
"The dialogues were originally recorded using a Sony DAT recorder (48kHz), model TCD-D8, and Sony digital stereo microphone, model ECM-DS70P. Transcription was performed with the TransEdit transcription tool v.1.1 beta 10, which synchronizes the transcribed text and the wave files.",
"However, we found that a non-trivial number of the utterance boundaries and speaker annotations were flawed. Also some recording sessions did not have a complete set of matching audio, transcription, and translation files. Hence, in an effort to provide a relatively “clean\" corpus for modern computational experiments, we converted the encoding of the textual transcription from Latin-1 to Unicode, DOS to UNIX line endings, a now more standard text encoding format than what was used when the data was first collected. Additionally, we renamed a small portion of files which had been misnamed and removed several duplicate files.",
"Although all of the data was recorded with similar equipment in relatively quiet environments, the acoustics are not as uniform as we would like for building speech synthesizers. Thus we applied standardized power normalization. We also moved the boundaries of the turns to standardize the amount of leading and trailing silence in each turn. This is a standard procedure for speech recognition and synthesis datasets. Finally we used the techniques in BIBREF7 for found data to re-align the text to the audio and find out which turns are best (or worst) aligned so that we can select segments that give the most accurate alignments. Some of the misalignments may in part be due to varied orthography, and we intend, but have not yet, to investigate normalization of orthography (i.e. spelling correction) to mitigate this."
],
[
"We create two training sets, one appropriate for single-speaker speech synthesis experiments, and one appropriate for multiple-speaker speech recognition and machine translation experiments. In both cases, our training, development, and test splits are performed at the dialogue level, so that all examples from each dialogue belong to exactly one of these sets.",
"For single-speaker speech synthesis, we only use the dialog turns of the speaker with the largest volume of data (nmlch – one of the interviewers). The training set includes $221.8$ thousand sentences from 285 dialogues, with 12 and 46 conversations reserved for the development and test set.",
"For speech recognition experiments, we ensure that our test set includes unique speakers as well as speakers that overlap with the training set, in order to allow for comparisons of the ability of the speech recognition system to generalize over seen and new speakers. For consistency, we use the same dataset splits for the machine translation experiments. The statistics in Table reflect this split."
],
[
"Our resource has the potential to be the basis of computational research in Mapudungun across several areas. Since the collected audio has been transcribed, our resource is appropriate for the study of automatic speech recognition and speech synthesis. The Spanish translations enable the creation of machine translation systems between Mapudungun and Spanish, as well as end-to-end (or direct) speech translation. We in fact built such speech synthesis, speech recognition, and machine translation systems as a showcase of the usefulness of our corpus in that research direction.",
"Furthermore, our annotations of the Spanish words interspersed in Mapudungun speech could allow for a study of code-switching patterns within the Mapuche community. In addition, our annotations of non-standardized orthographic transcriptions could be extremely useful in the study of historical language and orthography change as a language moves from predominantly oral to being written in a standardized orthography, as well as in building spelling normalization and correction systems. The relatively large amount of data that we collected will also allow for the training of large language models, which in turn could be used as the basis for predictive keyboards tailored to Mapudungun. Last, since all data are dialogues annotated for the different speaker turns, they could be useful for building Mapudungun dialogue systems and chatbot-like applications.",
"The potential applications of our resource, however, are not exhausted in language technologies. The resource as a whole could be invaluable for ethnographic and sociological research, as the conversations contrast traditional and Western medicine practices, and they could reveal interesting aspects of the Mapuche culture.",
"In addition, the corpus is a goldmine of data for studying the morphostyntax of Mapudungun BIBREF8. As an isolate polysynthetic language, the study of Mapudungun can provide insights into the range of possibilities within human languages can work."
],
[
"Using the aforementioned higher quality portions of the corpus, we trained baseline systems for Mapudungun speech recognition and speech synthesis, as well as Machine Translation systems between Mapudungun and Spanish."
],
[
"In our previous work on building speech systems on found data in 700 languages, BIBREF7, we addressed alignment issues (when audio is not segmented into turn/sentence sized chunks) and correctness issues (when the audio does not match the transcription). We used the same techniques here, as described above.",
"For the best quality speech synthesis we need a few hours of phonetically-balanced, single-speaker, read speech. Our first step was to use the start and end points for each turn in the dialogues, and select those of the most frequent speaker, nmlch. This gave us around 18250 segments. We further automatically removed excessive silence from the start, middle and end of these turns (based on occurrence of F0). This gave us 13 hours and 48 minutes of speech.",
"We phonetically aligned this data and built a speech clustergen statistical speech synthesizer BIBREF9 from all of this data. We resynthesized all of the data and measured the difference between the synthesized data and the original data using Mel Cepstral Distortion, a standard method for automatically measuring quality of speech generation BIBREF10. We then ordered the segments by their generation score and took the top 2000 turns to build a new synthesizer, assuming the better scores corresponded to better alignments, following the techniques of BIBREF7.",
"The initial build gave an MCD on held out data of 6.483. While the 2000 best segment dataset gives an MCD of 5.551, which is a large improvement. The quality of the generated speech goes from understandable, only if you can see the text, to understandable, and transcribable even for non-Mapudungun speakers. We do not believe we are building the best synthesizer with our current (non-neural) techniques, but we do believe we are selecting the best training data for other statistical and neural training techniques in both speech synthesis and speech recognition."
],
[
"For speech recognition (ASR) we used Kaldi BIBREF11. As we do not have access to pronunciation lexica for Mapudungun, we had to approximate them with two settings. In the first setting, we make the simple assumption that each character corresponds to a pronunced phoneme. In the second setting, we instead used the generated phonetic lexicon also used in the above-mentioned speech synthesis techniques. The train/dev/test splits are across conversations, as described above.",
"Under the first setting, we obtained a 60% character error rate, while the generated lexicon significantly boosts performance, as our systems achieve a notably reduced 30% phone error rate. Naturally, these results are relatively far from the quality of ASR systems trained on large amounts of clean data such as those available in English. Given the quality of the recordings, and the lack of additional resources, we consider our results fairly reasonable and they would still be usable for simple dialog-like tasks. We anticipate, though, that one could significantly improve ASR quality over our dataset, by using in-domain language models, or by training end-to-end neural recognizers leveraging languages with similar phonetic inventories BIBREF12 or by using the available Spanish translations in a multi-source scenario BIBREF13."
],
[
"We built neural end-to-end machine translation systems between Mapudungun and Spanish in both directions, using state-of-the-art Transformer architecture BIBREF14 with the toolkit of BIBREF15. We train our systems at the subword level using Byte-Pair Encoding BIBREF16 with a vocabulary of 5000 subwords, shared between the source and target languages. We use five layers for each of the encoder and the decoder, an embedding size of 512, feed forward transformation size of 2048, and eight attention heads. We use dropout BIBREF17 with $0.4$ probability as well as label smoothing set to $0.1$. We train with the Adam optimizer BIBREF18 for up to 200 epochs using learning decay with a patience of six epochs.",
"The baseline results using different portions of the training set (10k, 50k, 100k, and all (220k) parallel sentences) on both translation directions are presented in Table , using detokenized BLEU BIBREF19 (a standard MT metric) and chrF BIBREF20 (a metric that we consider to be more appropriate for polysynthetic languages, as it does not rely on word n-grams) computed with the sacreBLEU toolkit BIBREF21. It it worth noting the difference in quality between the two directions, with translation into Spanish reaching 20.4 (almost 21) BLEU points in the development set, while the opposite direction (translating into Mapudungun) shows about a 7 BLEU points worse performance. This is most likely due to Mapudungun being a polysynthetic language, with its complicated morphology posing a challenge for proper generation."
],
[
"Mapudungun grammar has been studied since the arrival of European missionaries and colonizers hundreds of years ago. More recent descriptions of Mapudungun grammar BIBREF1 and BIBREF0 informed the collection of the resource that we are presenting in this paper.",
"Portions of our resource have been used in early efforts to build language systems for Mapudungun. In particular, BIBREF22 focused on Mapudungun morphology in order to create spelling correction systems, while BIBREF23, BIBREF6, BIBREF24, and BIBREF25 developed hybrid rule- and phrase-based Statistical Machine Translation systems.",
"Naturally, similar works in collecting corpora in Indigenous languages of Latin America are abundant, but very few, if any, have the scale and potential of our resource to be useful in many downstream language-specific and inter-disciplinary applications. A general overview of the state of NLP for the under-represented languages of the Americas can be found at BIBREF26. To name a few of the many notable works, BIBREF27 created a parallel Mixtec-Spanish corpus for Machine Translation and BIBREF28 created lexical resources for Arapaho, while BIBREF29 and BIBREF30 focused on building speech corpora for Southern Quechua and Chatino respectively."
],
[
"With this work we present a resource that will be extremely useful for building language systems in an endangered, under-represented language, Mapudungun. We benchmark NLP systems for speech synthesis, speech recognition, and machine translation, providing strong baseline results. The size of our resource (142 hours, more than 260k total sentences) has the potential to alleviate many of the issues faced when building language technologies for Mapudungun, in contrast to other indigenous languages of the Americas that unfortunately remain low-resource.",
"Our resource could also be used for ethnographic and anthropological research into the Mapuche culture, and has the potential to contribute to intercultural bilingual education, preservation activities and further general advancement of the Mapudungun-speaking community."
],
[
"The data collection described in this paper was supported by NSF grants IIS-0121631 (AVENUE) and IIS-0534217 (LETRAS), with supplemental funding from NSF's Office of International Science and Education. Preliminary funding for work on Mapudungun was also provided by DARPA The experimental material is based upon work generously supported by the National Science Foundation under grant 1761548."
]
],
"section_name": [
"Introduction",
"The Mapudungun Language",
"The Resource",
"The Resource ::: Orthography",
"The Resource ::: Additional Annotations",
"The Resource ::: Cleaning",
"The Resource ::: Training, Dev, and Test Splits",
"Applications",
"Baseline Results",
"Baseline Results ::: Speech Synthesis",
"Baseline Results ::: Speech Recognition",
"Baseline Results ::: Mapudungun–Spanish Machine Translation",
"Related Work",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"510f88ed1cac2c1589e21daae2dfe0448eb7e965",
"6fb7a44681f7f25b6c66693cbd2337d640bb08b1"
],
"answer": [
{
"evidence": [
"Baseline Results ::: Speech Synthesis",
"In our previous work on building speech systems on found data in 700 languages, BIBREF7, we addressed alignment issues (when audio is not segmented into turn/sentence sized chunks) and correctness issues (when the audio does not match the transcription). We used the same techniques here, as described above.",
"For the best quality speech synthesis we need a few hours of phonetically-balanced, single-speaker, read speech. Our first step was to use the start and end points for each turn in the dialogues, and select those of the most frequent speaker, nmlch. This gave us around 18250 segments. We further automatically removed excessive silence from the start, middle and end of these turns (based on occurrence of F0). This gave us 13 hours and 48 minutes of speech.",
"We phonetically aligned this data and built a speech clustergen statistical speech synthesizer BIBREF9 from all of this data. We resynthesized all of the data and measured the difference between the synthesized data and the original data using Mel Cepstral Distortion, a standard method for automatically measuring quality of speech generation BIBREF10. We then ordered the segments by their generation score and took the top 2000 turns to build a new synthesizer, assuming the better scores corresponded to better alignments, following the techniques of BIBREF7.",
"For speech recognition (ASR) we used Kaldi BIBREF11. As we do not have access to pronunciation lexica for Mapudungun, we had to approximate them with two settings. In the first setting, we make the simple assumption that each character corresponds to a pronunced phoneme. In the second setting, we instead used the generated phonetic lexicon also used in the above-mentioned speech synthesis techniques. The train/dev/test splits are across conversations, as described above.",
"We built neural end-to-end machine translation systems between Mapudungun and Spanish in both directions, using state-of-the-art Transformer architecture BIBREF14 with the toolkit of BIBREF15. We train our systems at the subword level using Byte-Pair Encoding BIBREF16 with a vocabulary of 5000 subwords, shared between the source and target languages. We use five layers for each of the encoder and the decoder, an embedding size of 512, feed forward transformation size of 2048, and eight attention heads. We use dropout BIBREF17 with $0.4$ probability as well as label smoothing set to $0.1$. We train with the Adam optimizer BIBREF18 for up to 200 epochs using learning decay with a patience of six epochs."
],
"extractive_spans": [
"state-of-the-art Transformer architecture",
"Kaldi",
"speech clustergen statistical speech synthesizer"
],
"free_form_answer": "",
"highlighted_evidence": [
"Baseline Results ::: Speech Synthesis\nIn our previous work on building speech systems on found data in 700 languages, BIBREF7, we addressed alignment issues (when audio is not segmented into turn/sentence sized chunks) and correctness issues (when the audio does not match the transcription). We used the same techniques here, as described above.\n\nFor the best quality speech synthesis we need a few hours of phonetically-balanced, single-speaker, read speech. Our first step was to use the start and end points for each turn in the dialogues, and select those of the most frequent speaker, nmlch. This gave us around 18250 segments. We further automatically removed excessive silence from the start, middle and end of these turns (based on occurrence of F0). This gave us 13 hours and 48 minutes of speech.\n\nWe phonetically aligned this data and built a speech clustergen statistical speech synthesizer BIBREF9 from all of this data.",
"For speech recognition (ASR) we used Kaldi BIBREF11.",
"We built neural end-to-end machine translation systems between Mapudungun and Spanish in both directions, using state-of-the-art Transformer architecture BIBREF14 with the toolkit of BIBREF15."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We phonetically aligned this data and built a speech clustergen statistical speech synthesizer BIBREF9 from all of this data. We resynthesized all of the data and measured the difference between the synthesized data and the original data using Mel Cepstral Distortion, a standard method for automatically measuring quality of speech generation BIBREF10. We then ordered the segments by their generation score and took the top 2000 turns to build a new synthesizer, assuming the better scores corresponded to better alignments, following the techniques of BIBREF7.",
"For speech recognition (ASR) we used Kaldi BIBREF11. As we do not have access to pronunciation lexica for Mapudungun, we had to approximate them with two settings. In the first setting, we make the simple assumption that each character corresponds to a pronunced phoneme. In the second setting, we instead used the generated phonetic lexicon also used in the above-mentioned speech synthesis techniques. The train/dev/test splits are across conversations, as described above.",
"We built neural end-to-end machine translation systems between Mapudungun and Spanish in both directions, using state-of-the-art Transformer architecture BIBREF14 with the toolkit of BIBREF15. We train our systems at the subword level using Byte-Pair Encoding BIBREF16 with a vocabulary of 5000 subwords, shared between the source and target languages. We use five layers for each of the encoder and the decoder, an embedding size of 512, feed forward transformation size of 2048, and eight attention heads. We use dropout BIBREF17 with $0.4$ probability as well as label smoothing set to $0.1$. We train with the Adam optimizer BIBREF18 for up to 200 epochs using learning decay with a patience of six epochs."
],
"extractive_spans": [],
"free_form_answer": "For speech synthesis, they build a speech clustergen statistical speech synthesizer BIBREF9. For speech recognition, they use Kaldi BIBREF11. For Machine Translation, they use a Transformer architecture from BIBREF15.",
"highlighted_evidence": [
"We phonetically aligned this data and built a speech clustergen statistical speech synthesizer BIBREF9 from all of this data.",
"For speech recognition (ASR) we used Kaldi BIBREF11",
"We built neural end-to-end machine translation systems between Mapudungun and Spanish in both directions, using state-of-the-art Transformer architecture BIBREF14 with the toolkit of BIBREF15."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
},
{
"annotation_id": [
"4ce302c5c37bd7dc21eab0b9eb2c9394586dc1b7",
"9b884ade1b4d55dd9a9748060de6068d3bad1247"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"In addition, the transcription includes annotations for noises and disfluencies including aborted words, mispronunciations, poor intelligibility, repeated and corrected words, false starts, hesitations, undefined sound or pronunciations, non-verbal articulations, and pauses. Foreign words, in this case Spanish words, are also labelled as such.",
"FLOAT SELECTED: Table 2: Example of an utterance along with the different annotations. We additionally highlight the code-switching annotations ([SPA] indicates Spanish words) as well as pre-normalized transcriptions that indicating non-standard pronunciations ([!1pu’] indicates that the previous 1 word was pronounced as ‘pu’’ instead of ‘pues’)."
],
"extractive_spans": [],
"free_form_answer": "Original transcription was labeled with additional labels in [] brackets with nonstandard pronunciation.",
"highlighted_evidence": [
"In addition, the transcription includes annotations for noises and disfluencies including aborted words, mispronunciations, poor intelligibility, repeated and corrected words, false starts, hesitations, undefined sound or pronunciations, non-verbal articulations, and pauses. Foreign words, in this case Spanish words, are also labelled as such.",
"FLOAT SELECTED: Table 2: Example of an utterance along with the different annotations. We additionally highlight the code-switching annotations ([SPA] indicates Spanish words) as well as pre-normalized transcriptions that indicating non-standard pronunciations ([!1pu’] indicates that the previous 1 word was pronounced as ‘pu’’ instead of ‘pues’)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"two",
"two"
],
"paper_read": [
"no",
"no"
],
"question": [
"What are the models used for the baseline of the three NLP tasks?",
"How is non-standard pronunciation identified?"
],
"question_id": [
"5cc2daca2a84ddccba9cdd9449e51bb3f64b3dde",
"f9bf6bef946012dd42835bf0c547c0de9c1d229f"
],
"question_writer": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
],
"search_query": [
"Spanish",
"Spanish"
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Basic Statistics of our corpus.",
"Table 2: Example of an utterance along with the different annotations. We additionally highlight the code-switching annotations ([SPA] indicates Spanish words) as well as pre-normalized transcriptions that indicating non-standard pronunciations ([!1pu’] indicates that the previous 1 word was pronounced as ‘pu’’ instead of ‘pues’).",
"Table 3: Machine Translation Results"
],
"file": [
"2-Table1-1.png",
"3-Table2-1.png",
"4-Table3-1.png"
]
} | [
"What are the models used for the baseline of the three NLP tasks?",
"How is non-standard pronunciation identified?"
] | [
[
"1912.01772-Baseline Results ::: Mapudungun–Spanish Machine Translation-0",
"1912.01772-Baseline Results ::: Speech Synthesis-1",
"1912.01772-Baseline Results ::: Speech Synthesis-2",
"1912.01772-Baseline Results ::: Speech Recognition-0",
"1912.01772-Baseline Results ::: Speech Synthesis-0"
],
[
"1912.01772-The Resource ::: Additional Annotations-0",
"1912.01772-3-Table2-1.png"
]
] | [
"For speech synthesis, they build a speech clustergen statistical speech synthesizer BIBREF9. For speech recognition, they use Kaldi BIBREF11. For Machine Translation, they use a Transformer architecture from BIBREF15.",
"Original transcription was labeled with additional labels in [] brackets with nonstandard pronunciation."
] | 61 |
1908.06941 | Why So Down? The Role of Negative (and Positive) Pointwise Mutual Information in Distributional Semantics | In distributional semantics, the pointwise mutual information ($\mathit{PMI}$) weighting of the cooccurrence matrix performs far better than raw counts. There is, however, an issue with unobserved pair cooccurrences as $\mathit{PMI}$ goes to negative infinity. This problem is aggravated by unreliable statistics from finite corpora which lead to a large number of such pairs. A common practice is to clip negative $\mathit{PMI}$ ($\mathit{\texttt{-} PMI}$) at $0$, also known as Positive $\mathit{PMI}$ ($\mathit{PPMI}$). In this paper, we investigate alternative ways of dealing with $\mathit{\texttt{-} PMI}$ and, more importantly, study the role that negative information plays in the performance of a low-rank, weighted factorization of different $\mathit{PMI}$ matrices. Using various semantic and syntactic tasks as probes into models which use either negative or positive $\mathit{PMI}$ (or both), we find that most of the encoded semantics and syntax come from positive $\mathit{PMI}$, in contrast to $\mathit{\texttt{-} PMI}$ which contributes almost exclusively syntactic information. Our findings deepen our understanding of distributional semantics, while also introducing novel $PMI$ variants and grounding the popular $PPMI$ measure. | {
"paragraphs": [
[
"Dense word vectors (or embeddings) are a key component in modern NLP architectures for tasks such as sentiment analysis, parsing, and machine translation. These vectors can be learned by exploiting the distributional hypothesis BIBREF0, paraphrased by BIBREF1 as “a word is characterized by the company that it keeps”, usually by constructing a cooccurrence matrix over a training corpus, re-weighting it using Pointwise Mutual Information ($\\mathit {PMI}$) BIBREF2, and performing a low-rank factorization to obtain dense vectors.",
"Unfortunately, $\\mathit {PMI}(w,c)$ goes to negative infinity when the word-context pair $(w,c)$ does not appear in the training corpus. Due to unreliable statistics, this happens very frequently in finite corpora. Many models work around this issue by clipping negative $\\mathit {PMI}$ values at 0, a measure known as Positive $\\mathit {PMI}$ ($\\mathit {PPMI}$), which works very well in practice. An unanswered question is: “What is lost/gained by collapsing the negative $\\mathit {PMI}$ spectrum to 0?”. Understanding which type of information is captured by $\\mathit {\\texttt {-}PMI}$ can help in tailoring models for optimal performance.",
"In this work, we attempt to answer this question by studying the kind of information contained in the negative and positive spectrums of $\\mathit {PMI}$ ($\\mathit {\\texttt {-}PMI}$ and $\\mathit {\\texttt {+}PMI}$). We evaluate weighted factorization of different matrices which use either $\\mathit {\\texttt {-}PMI}$, $\\mathit {\\texttt {+}PMI}$, or both on various semantic and syntactic tasks. Results show that $\\mathit {\\texttt {+}PMI}$ alone performs quite well on most tasks, capturing both semantics and syntax, in contrast to $\\mathit {\\texttt {-}PMI}$, which performs poorly on nearly all tasks, except those that test for syntax. Our main contribution is deepening our understanding of distributional semantics by extending BIBREF1's paraphrase of the distributional hypothesis to “a word is not only characterized by the company that it keeps, but also by the company it rejects”. Our secondary contributions are the proposal of two $PMI$ variants that account for the spectrum of $\\mathit {\\texttt {-}PMI}$, and the justification of the popular $PPMI$ measure.",
"In this paper, we first look at related work ($§$SECREF2), then study $\\mathit {\\texttt {-}PMI}$ and ways of accounting for it ($§$SECREF3), describe experiments ($§$SECREF4), analyze results ($§$SECREF5), and close with ideas for future work ($§$SECREF6)."
],
[
"There is a long history of studying weightings (also known as association measures) of general (not only word-context) cooccurrence matrices; see BIBREF3, BIBREF4 for an overview and BIBREF5 for comparison of different weightings. BIBREF6 show that word vectors derived from $\\mathit {PPMI}$ matrices perform better than alternative weightings for word-context cooccurrence. In the field of collocation extraction, BIBREF7 address the negative infinity issue with $\\mathit {PMI}$ by introducing the normalized $\\mathit {PMI}$ metric. BIBREF8 show theoretically that the popular Skip-gram model BIBREF9 performs implicit factorization of shifted $\\mathit {PMI}$.",
"Recently, work in explicit low-rank matrix factorization of $\\mathit {PMI}$ variants has achieved state of the art results in word embedding. GloVe BIBREF10 performs weighted factorization of the log cooccurrence matrix with added bias terms, but does not account for zero cells. BIBREF11 point out that GloVe's bias terms correlate strongly with unigram log counts, suggesting that GloVe is factorizing a variant of $\\mathit {PMI}$. Their SwiVel model modifies the GloVe objective to use Laplace smoothing and hinge loss for zero counts of the cooccurrence matrix, directly factorizing the $\\mathit {PMI}$ matrix, sidestepping the negative infinity issue. An alternative is to use $\\mathit {PPMI}$ and variants as in BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16. However, it is not clear what is lost by clipping the negative spectrum of $\\mathit {PMI}$, which makes the use of $\\mathit {PPMI}$, though it works well in practice, seem unprincipled.",
"In the study of language acquisition, BIBREF17 argue that indirect negative evidence might play an important role in human acquisition of grammar, but do not link this idea to distributional semantics."
],
[
"PMI: A cooccurrence matrix $M$ is constructed by sliding a symmetric window over the subsampled BIBREF9 training corpus and for each center word $w$ and context word $c$ within the window, incrementing $M_{wc}$. $\\mathit {PMI}$ is then equal to:",
"where * denotes summation over the corresponding index. To deal with negative values, we propose clipped $\\mathit {PMI}$,",
"which is equivalent to $\\mathit {PPMI}$ when $z = 0$.",
"Matrix factorization: LexVec BIBREF15 performs the factorization $M^{\\prime } = WC^\\top $, where $M^{\\prime }$ is any transformation of $M$ (such as $\\mathit {PPMI}$), and $W, C$ are the word and context embeddings respectively. By sliding a symmetric window over the training corpus (window sampling), LexVec performs one Stochastic Gradient Descent (SGD) step every time a $(w,c)$ pair is observed, minimizing",
"Additionally, for every center word $w$, $k$ negative words BIBREF9 are drawn from the unigram context distribution $P_n$ (negative sampling) and SGD steps taken to minimize:",
"Thus the loss function prioritizes the correct approximation of frequently cooccurring pairs and of pairs where either word occurs with high frequency; these are pairs for which we have more reliable statistics.",
"In our experiments, we use LexVec over Singular Value Decomposition (SVD) because a) Empirical results shows it outperforms SVD BIBREF15. b) The weighting of reconstruction errors by statistical confidence is particularly important for $\\mathit {\\texttt {-}PMI}$, where negative cooccurrence between a pair of frequent words is more significant and should be better approximated than that between a pair of rare words. GloVe's matrix factorization is even more unsuitable for our experiments as its loss weighting — a monotonically increasing function of $M_{wc}$ — ignores reconstruction errors of non-cooccurring pairs.",
"Spectrum of PMI: To better understand the distribution of $\\mathit {CPMI}$ values, we plot a histogram of $10^5$ pairs randomly sampled by window sampling and negative sampling in fig:hist, setting $z=-5$. We can clearly see the spectrum of $\\mathit {\\texttt {-}PMI}$ that is collapsed when we use $\\mathit {PPMI}$ ($z=0$). In practice we find that $z=-2$ captures most of the negative spectrum and consistently gives better results than smaller values so we use this value for the rest of this paper. We suspect this is due to the large number of non-cooccurring pairs ($41.7\\%$ in this sample) which end up dominating the loss function when $z$ is too small.",
"Normalization: We also experiment with normalized $\\mathit {PMI}$ ($\\mathit {NPMI}$) BIBREF7:",
"such that $NPMI(w,c) = -1$ when $(w,c)$ never cooccur, $NPMI(w,c) = 0$ when they are independent, and $NPMI(w,c) = 1$ when they always cooccur together. This effectively captures the entire negative spectrum, but has the downside of normalization which discards scale information. In practice we find this works poorly if done symmetrically, so we introduce a variant called $\\mathit {NNEGPMI}$ which only normalizes $\\mathit {\\texttt {-}PMI}$:",
"We also experimented with Laplace smoothing as in BIBREF18 for various pseudocounts but found it to work consistently worse than both $\\mathit {CPMI_z}$ and $\\mathit {NNEGPMI}$ so we omit further discussion in this paper."
],
[
"In order to identify the role that $\\mathit {\\texttt {-}PMI}$ and $\\mathit {\\texttt {+}PMI}$ play in distributional semantics, we train LexVec models that skip SGD steps when target cell values are $>0$ or $\\le 0$, respectively. For example, $-\\mathit {CPMI}_{\\texttt {-}2}$ skips steps when $\\mathit {CPMI}_{\\texttt {-}2}(w,c) > 0$. Similarly, the $\\mathit {\\texttt {+}PPMI}$ model skips SGD steps when $\\mathit {PPMI}(w,c) \\le 0$. We compare these to models that include both negative and positive information to see how the two interact.",
"We use the default LexVec configuration for all $\\mathit {PMI}$ variants: fixed window of size 2, embedding dimension of 300, 5 negative samples, positional contexts, context distribution smoothing of $.75$, learning rate of $.025$, no subword information, and negative distribution power of $.75$. We train on a lowercased, alphanumerical 2015 Wikipedia dump with $3.8$B tokens, discarding tokens with frequency $< 100$, for a vocabulary size of $303,517$ words.",
"For comparison, we include results for a randomly initialized, non-trained embedding to establish task baselines.",
"Semantics: To evaluate word-level semantics, we use the SimLex BIBREF19 and Rare Word (RW) BIBREF20 word similarity datasets, and the Google Semantic (GSem) analogies BIBREF9. We evaluate sentence-level semantics using averaged bag of vectors (BoV) representations on the Semantic Textual Similarity (STSB) task BIBREF21 and Word Content (WC) probing task (identify from a list of words which is contained in the sentence representation) from SentEval BIBREF22.",
"Syntax: Similarly, we use the Google Syntactic analogies (GSyn) BIBREF9 to evaluate word-level syntactic information, and Depth (Dep) and Top Constituent (TopC) (of the input sentence's constituent parse tree) probing tasks from SentEval BIBREF22 for sentence-level syntax. Classifiers for all SentEval probing tasks are multilayer perceptrons with a single hidden layer of 100 units and dropout of $.1$. Our final syntactic task is part-of-speech (POS) tagging using the same BiLSTM-CRF setup as BIBREF23 but using only word embeddings (no hand-engineered features) as input, trained on the WSJ section of the Penn Treebank BIBREF24."
],
[
"All results are shown in tab:senteval.",
"Negative PMI: We observe that using only $\\mathit {\\texttt {-}PMI}$ (rows $\\mathit {\\texttt {-}CPMI_{\\texttt {-}2}}$ and $\\mathit {\\texttt {-}NNEGPMI}$) performs similarly to all other models in POS tagging and both syntactic probing tasks, but very poorly on all semantic tasks, strongly supporting our main claim that $\\mathit {\\texttt {-}PMI}$ mostly encodes syntactic information.",
"Our hypothesis for this is that the grammar that generates language implicitly creates negative cooccurrence and so $\\mathit {\\texttt {-}PMI}$ encodes this syntactic information. Interestingly, this idea creates a bridge between distributional semantics and the argument by BIBREF17 that indirect negative evidence might play an important role in human language acquisition of grammar. Positive PMI: The $\\mathit {\\texttt {+}PPMI}$ model performs as well or better as the full spectrum models on nearly all tasks, clearly indicating that $\\mathit {\\texttt {+}PMI}$ encodes both semantic and syntactic information.",
"Why incorporate -PMI? $\\mathit {\\texttt {+}PPMI}$ only falters on the RW and analogy tasks, and we hypothesize this is where $\\mathit {\\texttt {-}PMI}$ is useful: in the absence of positive information, negative information can be used to improve rare word representations and word analogies. Analogies are solved using nearest neighbor lookups in the vector space, and so accounting for negative cooccurrence effectively repels words with which no positive cooccurrence was observed. In future work, we will explore incorporating $\\mathit {\\texttt {-}PMI}$ only for rare words (where it is most needed).",
"Full spectrum models: The $\\mathit {PPMI}$, $\\mathit {CPMI_{\\texttt {-}2}}$, and $\\mathit {NNEGPMI}$ models perform similarly, whereas the $\\mathit {NPMI}$ model is significantly worst on nearly all semantic tasks. We thus conclude that accounting for scale in the positive spectrum is more important than in the negative spectrum. We hypothesize this is because scale helps to uniquely identify words, which is critical for semantics (results on $WC$ task correlate strongly with performance on semantic tasks), but in syntax, words with the same function should be indistinguishable. Since $\\mathit {\\texttt {+}PMI}$ encodes both semantics and syntax, scale must be preserved, whereas $\\mathit {\\texttt {-}PMI}$ encodes mostly syntax, and so scale information can be discarded.",
"Collapsing the negative spectrum: The $\\mathit {PPMI}$ model, which collapses the negative spectrum to zero, performs almost identically to the $\\mathit {CPMI_{\\texttt {-}2}}$ and $\\mathit {NNEGPMI}$ models that account for the range of negative values. This is justified by 1) Our discussion which shows that $\\mathit {\\texttt {+}PMI}$ is far more informative than $\\mathit {\\texttt {-}PMI}$ and 2) Looking at fig:hist, we see that collapsed values — interval $(-5,0]$ — account for only $11\\%$ of samples compared to $41.7\\%$ for non-collapsed negative values."
],
[
"In this paper, we evaluated existing and novel ways of incorporating $\\mathit {\\texttt {-}PMI}$ into word embedding models based on explicit weighted matrix factorization, and, more importantly, studied the role that $\\mathit {\\texttt {-}PMI}$ and $\\mathit {\\texttt {+}PMI}$ each play in distributional semantics, finding that “a word is not only characterized by the company that it keeps, but also by the company it rejects”. In future work, we wish to further study the link between our work and language acquisition, and explore the fact the $\\mathit {\\texttt {-}PMI}$ is almost purely syntactic to (possibly) subtract syntax from the full spectrum models, studying the frontier (if there is one) between semantics and syntax."
],
[
"This research was partly supported by CAPES and CNPq (projects 312114/2015-0, 423843/2016-8, and 140402/2018-7)."
]
],
"section_name": [
"Introduction",
"Related Work",
"PMI & Matrix Factorization",
"Materials",
"Results",
"Conclusions and Future Work",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"95b1dbb9574a2189c7d51f9ea34ba4c61585ae7f",
"aa33ba7db09b674821d1c29a62f3c9eab2a50176"
],
"answer": [
{
"evidence": [
"where * denotes summation over the corresponding index. To deal with negative values, we propose clipped $\\mathit {PMI}$,",
"which is equivalent to $\\mathit {PPMI}$ when $z = 0$.",
"such that $NPMI(w,c) = -1$ when $(w,c)$ never cooccur, $NPMI(w,c) = 0$ when they are independent, and $NPMI(w,c) = 1$ when they always cooccur together. This effectively captures the entire negative spectrum, but has the downside of normalization which discards scale information. In practice we find this works poorly if done symmetrically, so we introduce a variant called $\\mathit {NNEGPMI}$ which only normalizes $\\mathit {\\texttt {-}PMI}$:"
],
"extractive_spans": [],
"free_form_answer": "clipped PMI; NNEGPMI",
"highlighted_evidence": [
"To deal with negative values, we propose clipped $\\mathit {PMI}$,\n\nwhich is equivalent to $\\mathit {PPMI}$ when $z = 0$.",
"In practice we find this works poorly if done symmetrically, so we introduce a variant called $\\mathit {NNEGPMI}$ which only normalizes $\\mathit {\\texttt {-}PMI}$:"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"where * denotes summation over the corresponding index. To deal with negative values, we propose clipped $\\mathit {PMI}$,",
"which is equivalent to $\\mathit {PPMI}$ when $z = 0$.",
"Normalization: We also experiment with normalized $\\mathit {PMI}$ ($\\mathit {NPMI}$) BIBREF7:",
"such that $NPMI(w,c) = -1$ when $(w,c)$ never cooccur, $NPMI(w,c) = 0$ when they are independent, and $NPMI(w,c) = 1$ when they always cooccur together. This effectively captures the entire negative spectrum, but has the downside of normalization which discards scale information. In practice we find this works poorly if done symmetrically, so we introduce a variant called $\\mathit {NNEGPMI}$ which only normalizes $\\mathit {\\texttt {-}PMI}$:"
],
"extractive_spans": [
"clipped $\\mathit {PMI}$",
"$\\mathit {NNEGPMI}$"
],
"free_form_answer": "",
"highlighted_evidence": [
"To deal with negative values, we propose clipped $\\mathit {PMI}$,\n\nwhich is equivalent to $\\mathit {PPMI}$ when $z = 0$.",
"Normalization: We also experiment with normalized $\\mathit {PMI}$ ($\\mathit {NPMI}$) BIBREF7:\n\nsuch that $NPMI(w,c) = -1$ when $(w,c)$ never cooccur, $NPMI(w,c) = 0$ when they are independent, and $NPMI(w,c) = 1$ when they always cooccur together. This effectively captures the entire negative spectrum, but has the downside of normalization which discards scale information. In practice we find this works poorly if done symmetrically, so we introduce a variant called $\\mathit {NNEGPMI}$ which only normalizes $\\mathit {\\texttt {-}PMI}$:"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"507dafad8e06564e512b498c99da34481c6ce283",
"9716be852f370bf32e57aa0209ff686111849266"
],
"answer": [
{
"evidence": [
"Semantics: To evaluate word-level semantics, we use the SimLex BIBREF19 and Rare Word (RW) BIBREF20 word similarity datasets, and the Google Semantic (GSem) analogies BIBREF9. We evaluate sentence-level semantics using averaged bag of vectors (BoV) representations on the Semantic Textual Similarity (STSB) task BIBREF21 and Word Content (WC) probing task (identify from a list of words which is contained in the sentence representation) from SentEval BIBREF22.",
"Syntax: Similarly, we use the Google Syntactic analogies (GSyn) BIBREF9 to evaluate word-level syntactic information, and Depth (Dep) and Top Constituent (TopC) (of the input sentence's constituent parse tree) probing tasks from SentEval BIBREF22 for sentence-level syntax. Classifiers for all SentEval probing tasks are multilayer perceptrons with a single hidden layer of 100 units and dropout of $.1$. Our final syntactic task is part-of-speech (POS) tagging using the same BiLSTM-CRF setup as BIBREF23 but using only word embeddings (no hand-engineered features) as input, trained on the WSJ section of the Penn Treebank BIBREF24."
],
"extractive_spans": [
"Word Content (WC) probing task",
"Depth (Dep) and Top Constituent (TopC) (of the input sentence's constituent parse tree) probing tasks"
],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluate sentence-level semantics using averaged bag of vectors (BoV) representations on the Semantic Textual Similarity (STSB) task BIBREF21 and Word Content (WC) probing task (identify from a list of words which is contained in the sentence representation) from SentEval BIBREF22.",
"Syntax: Similarly, we use the Google Syntactic analogies (GSyn) BIBREF9 to evaluate word-level syntactic information, and Depth (Dep) and Top Constituent (TopC) (of the input sentence's constituent parse tree) probing tasks from SentEval BIBREF22 for sentence-level syntax."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Semantics: To evaluate word-level semantics, we use the SimLex BIBREF19 and Rare Word (RW) BIBREF20 word similarity datasets, and the Google Semantic (GSem) analogies BIBREF9. We evaluate sentence-level semantics using averaged bag of vectors (BoV) representations on the Semantic Textual Similarity (STSB) task BIBREF21 and Word Content (WC) probing task (identify from a list of words which is contained in the sentence representation) from SentEval BIBREF22.",
"Syntax: Similarly, we use the Google Syntactic analogies (GSyn) BIBREF9 to evaluate word-level syntactic information, and Depth (Dep) and Top Constituent (TopC) (of the input sentence's constituent parse tree) probing tasks from SentEval BIBREF22 for sentence-level syntax. Classifiers for all SentEval probing tasks are multilayer perceptrons with a single hidden layer of 100 units and dropout of $.1$. Our final syntactic task is part-of-speech (POS) tagging using the same BiLSTM-CRF setup as BIBREF23 but using only word embeddings (no hand-engineered features) as input, trained on the WSJ section of the Penn Treebank BIBREF24."
],
"extractive_spans": [
"SimLex",
"Rare Word",
"Google Semantic",
"Semantic Textual Similarity",
"Word Content (WC) probing",
"Google Syntactic analogies",
"Depth",
"Top Constituent",
"part-of-speech (POS) tagging"
],
"free_form_answer": "",
"highlighted_evidence": [
"Semantics: To evaluate word-level semantics, we use the SimLex BIBREF19 and Rare Word (RW) BIBREF20 word similarity datasets, and the Google Semantic (GSem) analogies BIBREF9. We evaluate sentence-level semantics using averaged bag of vectors (BoV) representations on the Semantic Textual Similarity (STSB) task BIBREF21 and Word Content (WC) probing task (identify from a list of words which is contained in the sentence representation) from SentEval BIBREF22.",
"Syntax: Similarly, we use the Google Syntactic analogies (GSyn) BIBREF9 to evaluate word-level syntactic information, and Depth (Dep) and Top Constituent (TopC) (of the input sentence's constituent parse tree) probing tasks from SentEval BIBREF22 for sentence-level syntax.",
"Our final syntactic task is part-of-speech (POS) tagging using the same BiLSTM-CRF setup as BIBREF23 but using only word embeddings (no hand-engineered features) as input, trained on the WSJ section of the Penn Treebank BIBREF24."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"4d85f7302434fada47e35881a0d20bfc7e019341",
"c35f4745b86eb3bed0d5d9b1d7569e93ed01055a"
],
"answer": [
{
"evidence": [
"Why incorporate -PMI? $\\mathit {\\texttt {+}PPMI}$ only falters on the RW and analogy tasks, and we hypothesize this is where $\\mathit {\\texttt {-}PMI}$ is useful: in the absence of positive information, negative information can be used to improve rare word representations and word analogies. Analogies are solved using nearest neighbor lookups in the vector space, and so accounting for negative cooccurrence effectively repels words with which no positive cooccurrence was observed. In future work, we will explore incorporating $\\mathit {\\texttt {-}PMI}$ only for rare words (where it is most needed)."
],
"extractive_spans": [],
"free_form_answer": "It may lead to poor rare word representations and word analogies.",
"highlighted_evidence": [
"$\\mathit {\\texttt {+}PPMI}$ only falters on the RW and analogy tasks, and we hypothesize this is where $\\mathit {\\texttt {-}PMI}$ is useful: in the absence of positive information, negative information can be used to improve rare word representations and word analogies."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"b1efaf190fd1aba2ea2faae32bf55b754f520ec5",
"cf5581e23f2ed826ac280d0a871afa46b102ebbb"
],
"answer": [
{
"evidence": [
"Unfortunately, $\\mathit {PMI}(w,c)$ goes to negative infinity when the word-context pair $(w,c)$ does not appear in the training corpus. Due to unreliable statistics, this happens very frequently in finite corpora. Many models work around this issue by clipping negative $\\mathit {PMI}$ values at 0, a measure known as Positive $\\mathit {PMI}$ ($\\mathit {PPMI}$), which works very well in practice. An unanswered question is: “What is lost/gained by collapsing the negative $\\mathit {PMI}$ spectrum to 0?”. Understanding which type of information is captured by $\\mathit {\\texttt {-}PMI}$ can help in tailoring models for optimal performance."
],
"extractive_spans": [
"$\\mathit {PMI}(w,c)$ goes to negative infinity when the word-context pair $(w,c)$ does not appear in the training corpus"
],
"free_form_answer": "",
"highlighted_evidence": [
"Unfortunately, $\\mathit {PMI}(w,c)$ goes to negative infinity when the word-context pair $(w,c)$ does not appear in the training corpus. Due to unreliable statistics, this happens very frequently in finite corpora. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Unfortunately, $\\mathit {PMI}(w,c)$ goes to negative infinity when the word-context pair $(w,c)$ does not appear in the training corpus. Due to unreliable statistics, this happens very frequently in finite corpora. Many models work around this issue by clipping negative $\\mathit {PMI}$ values at 0, a measure known as Positive $\\mathit {PMI}$ ($\\mathit {PPMI}$), which works very well in practice. An unanswered question is: “What is lost/gained by collapsing the negative $\\mathit {PMI}$ spectrum to 0?”. Understanding which type of information is captured by $\\mathit {\\texttt {-}PMI}$ can help in tailoring models for optimal performance."
],
"extractive_spans": [],
"free_form_answer": "A finite corpora may entirely omit rare word combinations",
"highlighted_evidence": [
"Unfortunately, $\\mathit {PMI}(w,c)$ goes to negative infinity when the word-context pair $(w,c)$ does not appear in the training corpus."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"What novel PMI variants are introduced?",
"What semantic and syntactic tasks are used as probes?",
"What are the disadvantages to clipping negative PMI?",
"Why are statistics from finite corpora unreliable?"
],
"question_id": [
"6b9b9e5d154cb963f6d921093539490daa5ebbae",
"bc4dca3e1e83f3b4bbb53a31557fc5d8971603b2",
"d46c0ea1ba68c649cc64d2ebb6af20202a74a3c7",
"6844683935d0d8f588fa06530f5068bf3e1ed0c0"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: CPMI -5 histogram (bucket width equal to .2) of 105 sampled pairs using window sampling and negative sampling. Number of samples in interval: [−5,−5] = 41695, (−5, 0] = 11001, [−2, 0] = 10759, (0,∞) = 47304",
"Table 1: SimLex and RW word similarity: Spearman rank correlation. STSB: Pearson correlation. GSem/GSyn word analogy, POS tagging and WC, Dep, TopC probing tasks: % accuracy. Best result for each column in bold, second best underlined."
],
"file": [
"3-Figure1-1.png",
"4-Table1-1.png"
]
} | [
"What novel PMI variants are introduced?",
"What are the disadvantages to clipping negative PMI?",
"Why are statistics from finite corpora unreliable?"
] | [
[
"1908.06941-PMI & Matrix Factorization-2",
"1908.06941-PMI & Matrix Factorization-1",
"1908.06941-PMI & Matrix Factorization-8",
"1908.06941-PMI & Matrix Factorization-9"
],
[
"1908.06941-Results-3"
],
[
"1908.06941-Introduction-1"
]
] | [
"clipped PMI; NNEGPMI",
"It may lead to poor rare word representations and word analogies.",
"A finite corpora may entirely omit rare word combinations"
] | 63 |
1904.01548 | Understanding language-elicited EEG data by predicting it from a fine-tuned language model | Electroencephalography (EEG) recordings of brain activity taken while participants read or listen to language are widely used within the cognitive neuroscience and psycholinguistics communities as a tool to study language comprehension. Several time-locked stereotyped EEG responses to word-presentations -- known collectively as event-related potentials (ERPs) -- are thought to be markers for semantic or syntactic processes that take place during comprehension. However, the characterization of each individual ERP in terms of what features of a stream of language trigger the response remains controversial. Improving this characterization would make ERPs a more useful tool for studying language comprehension. We take a step towards better understanding the ERPs by fine-tuning a language model to predict them. This new approach to analysis shows for the first time that all of the ERPs are predictable from embeddings of a stream of language. Prior work has only found two of the ERPs to be predictable. In addition to this analysis, we examine which ERPs benefit from sharing parameters during joint training. We find that two pairs of ERPs previously identified in the literature as being related to each other benefit from joint training, while several other pairs of ERPs that benefit from joint training are suggestive of potential relationships. Extensions of this analysis that further examine what kinds of information in the model embeddings relate to each ERP have the potential to elucidate the processes involved in human language comprehension. | {
"paragraphs": [
[
"The cognitive processes involved in human language comprehension are complex and only partially identified. According to the dual-stream model of speech comprehension BIBREF1 , sound waves are first converted to phoneme-like features and further processed by a ventral stream that maps those features onto words and semantic structures, and a dorsal stream that (among other things) supports audio-short term memory. The mapping of words onto meaning is thought to be subserved by widely distributed regions of the brain that specialize in particular modalities — for example visual aspects of the word banana reside in the occipital lobe of the brain and are activated when the word banana is heard BIBREF2 — and the different representation modalities are thought to be integrated into a single coherent latent representation in the anterior temporal lobe BIBREF3 . While this part of meaning representation in human language comprehension is somewhat understood, much less is known about how the meanings of words are integrated together to form the meaning of sentences and discourses. One tool researchers use to study the integration of meaning across words is electroencephelography (EEG), which measures the electrical activity of large numbers of neurons acting in concert. EEG has the temporal resolution necessary to study the processes involved in meaning integration, and certain stereotyped electrical responses to word presentations, known as event-related potentials (ERPs), have been identified with some of the processes thought to contribute to comprehension.",
"",
"In this work, we consider six ERP components that have been associated in the cognitive neuroscience and psycholinguistics literature with language processing and which we analyze in the data from BIBREF0 (see Figure FIGREF1 for spatial and temporal definitions of these ERP components). Three of these — the N400, EPNP, and PNP responses — are primarily considered markers for semantic processing, while the other three — the P600, ELAN, and LAN responses — are primarily considered markers for syntactic processing. However, the neat division of the ERP responses into either semantic or syntactic categories is controversial. The N400 response has been very well studied (for an overview see BIBREF4 ) and it is well established that it is associated with semantic complexity, but the features of language that trigger the other ERP responses we consider here are poorly understood. We propose to use a neural network pretrained as a language model to probe what features of language drive these ERP responses, and in turn to probe what features of language mediate the cognitive processes that underlie human language comprehension, and especially the integration of meaning across words."
],
[
"While a full discussion of each ERP component and the features of language thought to trigger each are beyond the scope of this document (for reviews see e.g. BIBREF0 , BIBREF2 , BIBREF4 , BIBREF5 , and BIBREF6 ), we introduce some basic features of ERP components to help in the discussion later. ERP components are electrical potential responses measured with respect to a baseline that are triggered by an event (in our case the presentation of a new word to a participant in an experiment). The name of each ERP component reflects whether the potential is positive or negative relative to the baseline. The N400 is so-named because it is Negative relative to a baseline (the baseline is typically recorded just before a word is presented at an electrode that is not affected by the ERP response) and because it peaks in magnitude at about 400ms after a word is presented to a participant in an experiment. The P600 is Positive relative to a baseline and peaks around 600ms after a word is presented to a participant (though its overall duration is much longer and less specific in time than the N400). The post-N400 positivity is so-named because it is part of a biphasic response; it is a positivity that occurs after the negativity associated with the N400. The early post-N400 positivity (EPNP) is also part of a biphasic response, but the positivity has an eariler onset than the standard PNP. Finally, the LAN and ELAN are the left-anterior negativity and early left-anterior negativity respectively. These are named for their timing, spatial distribution on the scalp, and direction of difference from the baseline. It is important to note that ERP components can potentially cancel and mask each other, and that it is difficult to precisely localize the neural activity that causes the changes in electrical potential at the electrodes where those changes are measured."
],
[
"This work is most closely related to the paper from which we get the ERP data: BIBREF0 . In that work, the authors relate the surprisal of a word, i.e. the (negative log) probability of the word appearing in its context, to each of the ERP signals we consider here. The authors do not directly train a model to predict ERPs. Instead, models of the probability distribution of each word in context are used to compute a surprisal for each word, which is input into a mixed effects regression along with word frequency, word length, word position in the sentence, and sentence position in the experiment. The effect of the surprisal is assessed using a likelihood-ratio test. In BIBREF7 , the authors take an approach similar to BIBREF0 . The authors compare the explanatory power of surprisal (as computed by an LSTM or a Recurrent Neural Network Grammar (RNNG) language model) to a measure of syntactic complexity they call “distance\" that counts the number of parser actions in the RNNG language model. The authors find that surprisal (as predicted by the RNNG) and distance are both significant factors in a mixed effects regression which predicts the P600, while the surprisal as computed by an LSTM is not. Unlike BIBREF0 and BIBREF7 , we do not use a linking function (e.g. surprisal) to relate a language model to ERPs. We thus lose the interpretability provided by the linking function, but we are able to predict a significant proportion of the variance for all of the ERP components, where prior work could not. We interpret our results through characterization of the ERPs in terms of how they relate to each other and to eye-tracking data rather than through a linking function. The authors in BIBREF8 also use a recurrent neural network to predict neural activity directly. In that work the authors predict magnetoencephalography (MEG) activity, a close cousin to EEG, recorded while participants read a chapter of Harry Potter and the Sorcerer’s Stone BIBREF9 . Their approach to characterization of processing at each MEG sensor location is to determine whether it is best predicted by the context vector of the recurrent network (prior to the current word being processed), the embedding of the current word, or the probability of the current word given the context. In future work we also intend to add these types of studies to the ERP predictions."
],
[
"In this work we find that all six of the ERP components from BIBREF0 can be predicted above chance by a model which has been pretrained using a language modeling objective and then directly trained to predict the components. This is in contrast to prior work which has successfully linked language models to the N400 BIBREF0 and P600 BIBREF7 but not the other ERP components. We also note that contrary to BIBREF7 , we find that an LSTM does contain information that can be used to predict EEG data, and in particular that it can predict the P600. We speculate that the analysis used in BIBREF7 did not find reliable effects because the language models were related to the EEG data through functions chosen a priori (the surprisal, and the `distance' metric). These functions, though interpretable, might be interpretable at the cost of losing much of the information in the representations learned by the network.",
"In addition, we show through our multitask learning analysis that information is shared between ERP components, and between ERP components and behavioral data. Although these relationships must be viewed with caution until they can be verified across multiple datasets and with more variation in neural network architectures, here we consider some potential reasons for our findings. The broad point we wish to make is that by better understanding which ERP components share information with each other and with behavioral data through the type of analysis we present here (multitask learning) or other means, we can better understand what drives each ERP component and in turn the processes involved in human language comprehension."
],
[
"We have shown that ERP components can be predicted from neural networks pretrained as language models and fine-tuned to directly predict those components. To the best of our knowledge, prior work has not successfully used statistical models to predict all of these components. Furthermore, we have shown that multitask learning benefits the prediction of ERP components and can suggest how components relate to each other. At present, these joint-training benefit relationships are only suggestive, but if these relationships ultimately lead to insights about what drives each ERP component, then the components become more useful tools for studying human language comprehension. By using multitask learning as a method of characterization, we have found some expected relationships (LAN+P600 and ELAN+P600) and several more surprising relationships. We believe that this is exactly the kind of finding that makes multitask learning an interesting exploratory technique in this area. Additionally, we have shown that information can be shared between heterogeneous types of data (eye-tracking, self-paced reading, and ERP components) in the domain of human language processing prediction, and in particular between behavioral and neural data. Given the small datasets associated with human language processing, using heterogeneous data is a potentially major advantage of a multitask approach. In future work, we will further explore what information is encoded into the model representations when neural and behavioral data are used to train neural networks, and how these representations differ from the representations in a model trained on language alone."
],
[
"We thank our reviewers for their valuable feedback. This work is supported in part by National Institutes of Health grant number U01NS098969."
],
[
"Here we present a visualization (Figure FIGREF21 ) of the results presented in Table TABREF9 of the main paper, and a visualization (Figure FIGREF22 ) of a more complete set of results from which the information in Table TABREF16 of the main paper is drawn. We also show supplemental results for variants of our primary analysis on multitask learning with eye-tracking, self-paced reading time and ERP data. In the variants we modify the input representation to our decoder network to see whether the relationships between the behavioral data and neural activity appear to be consistent with different choices of encoder architectures. Additional (and more varied) choices or architectures are left to future work. The results in Table TABREF23 reflect using only the forward-encoder (rather than the bi-LSTM) in the encoder network, while the results in Table TABREF24 reflect using only the word embeddings (i.e. bypassing the LSTM entirely). While the results are clearly worse for each of these choices of architecture than for using a bi-LSTM encoder, the relationships between the behavioral data and the ERP signals is qualitatively similar. Finally, TABREF25 shows the Pearson correlation coefficient between different measures. We note that the patterns of correlation are different than the patterns of which measures benefit from joint training with each other."
]
],
"section_name": [
"Introduction",
"Background",
"Related Work",
"Discussion",
"Conclusion",
"Acknowledgments",
"Appendix"
]
} | {
"answers": [
{
"annotation_id": [
"53b0d82f8678345281c0933f02177008735f5270",
"728f3097d08983cb6e2082980c9b8d7ddf3c0c2e"
],
"answer": [
{
"evidence": [
"This work is most closely related to the paper from which we get the ERP data: BIBREF0 . In that work, the authors relate the surprisal of a word, i.e. the (negative log) probability of the word appearing in its context, to each of the ERP signals we consider here. The authors do not directly train a model to predict ERPs. Instead, models of the probability distribution of each word in context are used to compute a surprisal for each word, which is input into a mixed effects regression along with word frequency, word length, word position in the sentence, and sentence position in the experiment. The effect of the surprisal is assessed using a likelihood-ratio test. In BIBREF7 , the authors take an approach similar to BIBREF0 . The authors compare the explanatory power of surprisal (as computed by an LSTM or a Recurrent Neural Network Grammar (RNNG) language model) to a measure of syntactic complexity they call “distance\" that counts the number of parser actions in the RNNG language model. The authors find that surprisal (as predicted by the RNNG) and distance are both significant factors in a mixed effects regression which predicts the P600, while the surprisal as computed by an LSTM is not. Unlike BIBREF0 and BIBREF7 , we do not use a linking function (e.g. surprisal) to relate a language model to ERPs. We thus lose the interpretability provided by the linking function, but we are able to predict a significant proportion of the variance for all of the ERP components, where prior work could not. We interpret our results through characterization of the ERPs in terms of how they relate to each other and to eye-tracking data rather than through a linking function. The authors in BIBREF8 also use a recurrent neural network to predict neural activity directly. In that work the authors predict magnetoencephalography (MEG) activity, a close cousin to EEG, recorded while participants read a chapter of Harry Potter and the Sorcerer’s Stone BIBREF9 . Their approach to characterization of processing at each MEG sensor location is to determine whether it is best predicted by the context vector of the recurrent network (prior to the current word being processed), the embedding of the current word, or the probability of the current word given the context. In future work we also intend to add these types of studies to the ERP predictions.",
"Discussion"
],
"extractive_spans": [],
"free_form_answer": "Answer with content missing: (Whole Method and Results sections) Self-paced reading times widely benefit ERP prediction, while eye-tracking data seems to have more limited benefit to just the ELAN, LAN, and PNP ERP components.\nSelect:\n- ELAN, LAN\n- PNP ERP",
"highlighted_evidence": [
"In future work we also intend to add these types of studies to the ERP predictions.\n\nDiscussion"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"4e25fa02c5c2c1ad932d7cbcf3f9b10a1dff4cd2",
"f6ae6f40133edf278e914860c93254ca32a8da18"
],
"answer": [
{
"evidence": [
"This work is most closely related to the paper from which we get the ERP data: BIBREF0 . In that work, the authors relate the surprisal of a word, i.e. the (negative log) probability of the word appearing in its context, to each of the ERP signals we consider here. The authors do not directly train a model to predict ERPs. Instead, models of the probability distribution of each word in context are used to compute a surprisal for each word, which is input into a mixed effects regression along with word frequency, word length, word position in the sentence, and sentence position in the experiment. The effect of the surprisal is assessed using a likelihood-ratio test. In BIBREF7 , the authors take an approach similar to BIBREF0 . The authors compare the explanatory power of surprisal (as computed by an LSTM or a Recurrent Neural Network Grammar (RNNG) language model) to a measure of syntactic complexity they call “distance\" that counts the number of parser actions in the RNNG language model. The authors find that surprisal (as predicted by the RNNG) and distance are both significant factors in a mixed effects regression which predicts the P600, while the surprisal as computed by an LSTM is not. Unlike BIBREF0 and BIBREF7 , we do not use a linking function (e.g. surprisal) to relate a language model to ERPs. We thus lose the interpretability provided by the linking function, but we are able to predict a significant proportion of the variance for all of the ERP components, where prior work could not. We interpret our results through characterization of the ERPs in terms of how they relate to each other and to eye-tracking data rather than through a linking function. The authors in BIBREF8 also use a recurrent neural network to predict neural activity directly. In that work the authors predict magnetoencephalography (MEG) activity, a close cousin to EEG, recorded while participants read a chapter of Harry Potter and the Sorcerer’s Stone BIBREF9 . Their approach to characterization of processing at each MEG sensor location is to determine whether it is best predicted by the context vector of the recurrent network (prior to the current word being processed), the embedding of the current word, or the probability of the current word given the context. In future work we also intend to add these types of studies to the ERP predictions.",
"Discussion"
],
"extractive_spans": [],
"free_form_answer": "Answer with content missing: (Whole Method and Results sections) The primary dataset we use is the ERP data collected and computed by Frank et al. (2015), and we also use behavioral data (eye-tracking data and self-paced reading times) from Frank et al. (2013) which were collected on the same set of 205 sentences.\nSelect:\n- ERP data collected and computed by Frank et al. (2015)\n- behavioral data (eye-tracking data and self-paced reading times) from Frank et al. (2013)",
"highlighted_evidence": [
"In future work we also intend to add these types of studies to the ERP predictions.\n\nDiscussion"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"This work is most closely related to the paper from which we get the ERP data: BIBREF0 . In that work, the authors relate the surprisal of a word, i.e. the (negative log) probability of the word appearing in its context, to each of the ERP signals we consider here. The authors do not directly train a model to predict ERPs. Instead, models of the probability distribution of each word in context are used to compute a surprisal for each word, which is input into a mixed effects regression along with word frequency, word length, word position in the sentence, and sentence position in the experiment. The effect of the surprisal is assessed using a likelihood-ratio test. In BIBREF7 , the authors take an approach similar to BIBREF0 . The authors compare the explanatory power of surprisal (as computed by an LSTM or a Recurrent Neural Network Grammar (RNNG) language model) to a measure of syntactic complexity they call “distance\" that counts the number of parser actions in the RNNG language model. The authors find that surprisal (as predicted by the RNNG) and distance are both significant factors in a mixed effects regression which predicts the P600, while the surprisal as computed by an LSTM is not. Unlike BIBREF0 and BIBREF7 , we do not use a linking function (e.g. surprisal) to relate a language model to ERPs. We thus lose the interpretability provided by the linking function, but we are able to predict a significant proportion of the variance for all of the ERP components, where prior work could not. We interpret our results through characterization of the ERPs in terms of how they relate to each other and to eye-tracking data rather than through a linking function. The authors in BIBREF8 also use a recurrent neural network to predict neural activity directly. In that work the authors predict magnetoencephalography (MEG) activity, a close cousin to EEG, recorded while participants read a chapter of Harry Potter and the Sorcerer’s Stone BIBREF9 . Their approach to characterization of processing at each MEG sensor location is to determine whether it is best predicted by the context vector of the recurrent network (prior to the current word being processed), the embedding of the current word, or the probability of the current word given the context. In future work we also intend to add these types of studies to the ERP predictions."
],
"extractive_spans": [
"the ERP data: BIBREF0"
],
"free_form_answer": "",
"highlighted_evidence": [
"This work is most closely related to the paper from which we get the ERP data: BIBREF0 . "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"",
""
],
"paper_read": [
"",
""
],
"question": [
"Which two pairs of ERPs from the literature benefit from joint training?",
"What datasets are used?"
],
"question_id": [
"7d2f812cb345bb3ab91eb8cbbdeefd4b58f65569",
"bd6dc38a9ac8d329114172194b0820766458dacc"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
""
],
"topic_background": [
"",
""
]
} | {
"caption": [
"Figure 1: The electrodes from which each event-related potential was recorded in the data from Frank et al. (2015) (after figure 3 in (Frank et al., 2015)). The bottom portion of the figure shows a top-down schematic of the electrode locations with the nose facing towards the top of the page. Each ERP is the mean potential from all of the indicated electrodes during a specific time-window, creating a single scalar value per ERP per word. Overlapping circles indicate multiple ERPs recorded from the same electrode. The ELAN is measured from 125-175ms after stimulus onset, the LAN from 300-400ms, the N400 from 300ms-500ms, the EPNP from 400-600ms, the P600 from 500-700ms, and the PNP from 600-700ms.",
"Figure 2: The model uses an encoder based on the architecture and regularization in Merity et al. (2017) and pretrained by Howard and Ruder (2018). Within this architecture 2 independent 3-layer LSTM models encode a sentence. The context-embeddings output from each encoder are then concatenated together to give a single representation to each word in the sentence. These concatenated context-embeddings are fed into a causal-convolution, which learns a function to combine each pair of context-representations into a pair-embedding. A rectified linear unit (ReLU) non-linearity is applied to the pair-embedding, after which independent linear layers map the pairembedding along with the log-probability of a word and the word-length to a prediction of each ERP or behavioral signal.",
"Table 1: Proportion of variance explained (POVE) for each of the ERP components (mean of 100 training runs). The second column in each cell shows which ERP components in addition to the target ERP component were included in training. All combinations of training signals were explored. Shown is the best combination for each ERP target as well as every combination which is (i) significantly different from training on the target component alone, (ii) not significantly different from the best training combination, and (iii) uses no more than the number of signals used by the best combination. The N400 is predicted best when only the N400 signal is included in training. All values are significantly different from 0. GROUP A refers to (PNP, ELAN, LAN, P600) and GROUP B refers to (EPNP, ELAN, LAN, P600).",
"Figure 3: The mean squared error (MSE) for prediction of each of the ERP signals during each epoch of training (mean of 100 training runs). The first 2 epochs have been omitted for clarity. During the first 20 epochs (lavender background), only the decoder parameters are modified. During the next 20 epochs (light blue background), the parameters in the final layer of the encoder are also modified. During the last 20 epochs (pink background), all of the parameters are modified. Note that in this model architecture, information can be shared between ERP signals even when only the decoder is modified. The figure shows the MSE when separate models are trained for each ERP independently (a), the MSE when a single model is trained on all ERPs jointly (b), and the difference between these two scenarios (c). The top row in each column shows the MSE on the training data while the bottom row shows the MSE on the validation data. In the bottom row right, the dotted vertical lines indicate the epoch at which the minimum MSE is reached in the lower of the independent or joint training. The LAN, EPNP, and PNP all show modest benefits from joint training before overfitting sets in (the minimum value occurs in the joint training scenario), while all ERP signals other than the N400 show reduced overfitting in joint training.",
"Table 2: Proportion of variance explained (POVE) for each of the ERP components (mean of 100 training runs). +ERP indicates the best combination of ERP training signals for the target ERP component, + READ indicates the inclusion of self-paced reading times, +EYE indicates the inclusion of eye-tracking data, and bold font indicates a significant difference from training on the target component alone.",
"Figure 4: The proportion of variance explained for prediction of each of the ERP signals (mean of 100 training runs). The target ERP is indicated by color; each group of bars shows performance for a different target ERP. The top bar in each group shows the proportion of variance explained when the model is trained using only the target ERP. The bottom bar in each group shows the maximum proportion of variance explained over all combinations of training ERPs (or in the case of the N400, the second best). Also shown in each group are any training combinations that (i) used no more than the number of ERP signals used by the combination that achieved the maximum, and (ii) which were not significantly different from the maximum. Bars are statistically different from each other if a black dot on one bar is connected by a contiguous vertical line to a white dot on the other bar. The bars in the N400 group are not significantly different from each other. The N400 signal is best predicted when the model is trained on just that signal. In every other group, there is at least one ERP that, when combined with the target ERP during training, improves the prediction of the target ERP. The results suggest that these pairs are related: (LAN, P600), (LAN, EPNP), (LAN, PNP), (ELAN, N400), (ELAN, EPNP), (ELAN, PNP), (ELAN, P600), (EPNP, P600).",
"Table 3: Proportion of variance explained for each of the ERP components when using only the forward direction of the encoder (mean of 100 training runs). +ERP indicates the best combination of ERP training signals for the target ERP component, + READ indicates the inclusion of self-paced reading times, +EYE indicates the inclusion of eye-tracking data, and bold font indicates a significant difference from training on the target component alone.",
"Figure 5: The proportion of variance explained for prediction of each of the ERP signals (mean of 100 training runs). The target ERP is indicated by color; each group of bars shows performance for a different target ERP. The top bar in each group shows the proportion of variance explained when the model is trained using only the target ERP. Moving down, the next bar in each group, labeled ERP shows the proportion of variance explained by the best combination of ERP signals for the target ERP. The other bars in each group moving from top to bottom show training variations that use behavioral data with either just the target ERP, or with the best combination of ERP signals. READ denotes self-paced reading data, and EYE denotes all four eye-tracking measures (in this analysis we use right-bounded pass time, gaze duration, go-past time, and first-fixation duration). Pairs of bars are significantly different from each other (paired t-test, false discovery rate ¡ 0.01) if a black dot on one bar is connected to a white dot on the other bar by a contiguous vertical line. Self-paced reading time benefits prediction of all target ERP components except the N400. In the case of the ELAN, LAN, and PNP, self-paced reading time also has marginal benefit compared to the best combination of ERP training signals. Eye-tracking data benefits prediction of the ELAN, P600, and PNP components.",
"Table 4: Proportion of variance explained for each of the ERP components when using only the word embeddings as input to the decoder and bypassing the LSTM entirely (mean of 100 training runs). +ERP indicates the best combination of ERP training signals for the target ERP component, + READ indicates the inclusion of self-paced reading times, +EYE indicates the inclusion of eye-tracking data, and bold font indicates a significant difference from training on the target component alone.",
"Table 5: Raw Pearson’s correlation coefficients (computed on content words after the standardization and participant-averaging) between each neural and behavioral measure and each other measure. FIX indicates firstfixation time, PASS indicates first-pass time, GO indicates go-past time, RIGHT indicates right-bounded reading time, and READ indicates self-paced reading. Many of the measures are highly correlated, but the pattern of correlations is different from the pattern of benefits that we find during joint-training. In particular we note that the N400 is correlated with the other ERP signals, and yet we do not see benefit in prediction of the N400 when jointly training a model to predict it and other signals."
],
"file": [
"1-Figure1-1.png",
"4-Figure2-1.png",
"6-Table1-1.png",
"7-Figure3-1.png",
"7-Table2-1.png",
"13-Figure4-1.png",
"13-Table3-1.png",
"14-Figure5-1.png",
"15-Table4-1.png",
"15-Table5-1.png"
]
} | [
"Which two pairs of ERPs from the literature benefit from joint training?",
"What datasets are used?"
] | [
[
"1904.01548-Related Work-0"
],
[
"1904.01548-Related Work-0"
]
] | [
"Answer with content missing: (Whole Method and Results sections) Self-paced reading times widely benefit ERP prediction, while eye-tracking data seems to have more limited benefit to just the ELAN, LAN, and PNP ERP components.\nSelect:\n- ELAN, LAN\n- PNP ERP",
"Answer with content missing: (Whole Method and Results sections) The primary dataset we use is the ERP data collected and computed by Frank et al. (2015), and we also use behavioral data (eye-tracking data and self-paced reading times) from Frank et al. (2013) which were collected on the same set of 205 sentences.\nSelect:\n- ERP data collected and computed by Frank et al. (2015)\n- behavioral data (eye-tracking data and self-paced reading times) from Frank et al. (2013)"
] | 65 |
1606.03676 | External Lexical Information for Multilingual Part-of-Speech Tagging | Morphosyntactic lexicons and word vector representations have both proven useful for improving the accuracy of statistical part-of-speech taggers. Here we compare the performances of four systems on datasets covering 16 languages, two of these systems being feature-based (MEMMs and CRFs) and two of them being neural-based (bi-LSTMs). We show that, on average, all four approaches perform similarly and reach state-of-the-art results. Yet better performances are obtained with our feature-based models on lexically richer datasets (e.g. for morphologically rich languages), whereas neural-based results are higher on datasets with less lexical variability (e.g. for English). These conclusions hold in particular for the MEMM models relying on our system MElt, which benefited from newly designed features. This shows that, under certain conditions, feature-based approaches enriched with morphosyntactic lexicons are competitive with respect to neural methods. | {
"paragraphs": [
[
"Part-of-speech tagging is now a classic task in natural language processing, for which many systems have been developed or adapted for a large variety of languages. Its aim is to associate each “word” with a morphosyntactic tag, whose granularity can range from a simple morphosyntactic category, or part-of-speech (hereafter PoS), to finer categories enriched with morphological features (gender, number, case, tense, mood, etc.).",
"The use of machine learning algorithms trained on manually annotated corpora has long become the standard way to develop PoS taggers. A large variety of algorithms have been used, such as (in approximative chronological order) bigram and trigram hidden Markov models BIBREF0 , BIBREF1 , BIBREF2 , decision trees BIBREF3 , BIBREF4 , maximum entropy Markov models (MEMMs) BIBREF5 and Conditional Random Fields (CRFs) BIBREF6 , BIBREF7 . With such machine learning algorithms, it is possible to build PoS taggers for any language, provided adequate training data is available.",
"As a complement to annotated corpora, it has previously been shown that external lexicons are valuable sources of information, in particular morphosyntactic lexicons, which provide a large inventory of (word, PoS) pairs. Such lexical information can be used in the form of constraints at tagging time BIBREF8 , BIBREF9 or during the training process as additional features combined with standard features extracted from the training corpus BIBREF10 , BIBREF11 , BIBREF12 .",
"In recent years, a different approach to modelling lexical information and integrating it into natural language processing systems has emerged, namely the use of vector representations for words or word sequences BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 . Such representations, which are generally extracted from large amounts of raw text, have proved very useful for numerous tasks including PoS tagging, in particular when used in recurrent neural networks (RNNs) and more specifically in mono- or bi-directional, word-level and/or character-level long short-term memory networks (LSTMs) BIBREF19 , BIBREF16 , BIBREF17 , BIBREF20 .",
"Both approaches to representing lexical properties and to integrating them into a PoS tagger improve tagging results. Yet they rely on resources of different natures. The main advantage of word vectors is that they are built in an unsupervised way, only requiring large amounts of raw textual data. They also encode finer-grained information than usual morphosyntactic lexicons, most of which do not include any quantitative data, not even simple frequency information. Conversely, lexical resources often provide information about scarcely attested words, for which corpus-based approaches such as word vector representations are of limited relevance. Moreover, morphological or morphosyntactic lexicons already exist for a number of languages, including less-resourced langauges for which it might be difficult to obtain the large amounts of raw data necessary to extract word vector representations.",
"Our main goal is therefore to compare the respective impact of external lexicons and word vector representations on the accuracy of PoS models. This question has already been investigated for 6 languages by BIBREF18 using the state-of-the-art CRF-based tagging system MarMoT. The authors found that their best-performing word-vector-based PoS tagging models outperform their models that rely on morphosyntactic resources (lexicons or morphological analysers). In this paper, we report on larger comparison, carried out in a larger multilingual setting and comparing different tagging models. Using different 16 datasets, we compare the performances of two feature-based models enriched with external lexicons and of two LSTM-based models enriched with word vector representations. A secondary goal of our work is to compare the relative improvements linked to the use of external lexical information in the two feature-based models, which use different models (MEMM vs. CRF) and feature sets.",
"More specifically, our starting point is the MElt system BIBREF12 , an MEMM tagging system. We first briefly describe this system and the way we adapted it by integrating our own set of corpus-based and lexical features. We then introduce the tagging models we have trained for 16 different languages using our adapted version of MElt. These models are trained on the Universal Dependencies (v1.2) corpus set BIBREF21 , complemented by morphosyntactic lexicons. We compare the accuracy of our models with the scores obtained by the CRF-based system MarMoT BIBREF22 , BIBREF18 , retrained on the same corpora and the same external morphosyntactic lexicons. We also compare our results to those obtained by the best bidirectional LSTM models described by BIBREF20 , which both make use of Polyglot word vector representations published by BIBREF23 . We will show that an optimised enrichment of feature-based models with morphosyntactic lexicon results in significant accuracy gains. The macro-averaged accuracy of our enriched MElt models is above that of enriched MarMoT models and virtually identical to that of LSTMs enriched with word vector representations. More precisely, per-language results indicate that lexicons provide more useful information for languages with a high lexical variability (such as morphologically rich languages), whereas word vectors are more informative for languages with a lower lexical variability (such as English)."
],
[
"MElt BIBREF12 is a tagging system based on maximum entropy Markov models (MEMM) BIBREF5 , a class of discriminative models that are suitable for sequence labelling BIBREF5 . The basic set of features used by MElt is given in BIBREF12 . It is a superset of the feature sets used by BIBREF5 and BIBREF24 and includes both local standard features (for example the current word itself and its prefixes and suffixes of length 1 to 4) and contextual standard features (for example the tag just assigned to the preceding word). In particular, with respect to Ratnaparkhi's feature set, MElt's basic feature set lifts the restriction that local standard features used to analyse the internal composition of the current word should only apply to rare words.",
"One of the advantages of feature-based models such as MEMMs and CRFs is that complementary information can be easily added in the form of additional features. This was investigated for instance by BIBREF25 , whose best-performing model for PoS tagging dialogues was obtained with a version of MElt extended with dialogue-specific features. Yet the motivation of MElt's developers was first and foremost to investigate the best way to integrate lexical information extracted from large-scale morphosyntactic lexical resources into their models, on top of the training data BIBREF12 . They showed that performances are better when this external lexical information is integrated in the form of additional lexical features than when the external lexicon is used as constraints at tagging time. These lexical features can also be divided into local lexical features (for example the list of possible tags known to the external lexicon for the current word) and contextual lexical features (for example the list of possible tags known to the external lexicon for surrounding words). In particular, lexical contextual features provide a means to model the right context of the current word, made of words that have not yet been tagged by the system but for which the lexicon often provides a list of possible tags. Moreover, tagging accuracy for out-of-vocabulary (OOV) words is improved, as a result of the fact that words unknown to the training corpus might be known to the external lexicon.",
"Despite a few experiments published with MElt on languages other than French BIBREF12 , BIBREF40 , BIBREF41 , the original feature set used by MElt (standard and lexical features) was designed and tested mostly on this language, by building and evaluating tagging models on a variant of the French TreeBank. Since our goal was to carry out experiments in a multilingual setting, we have decided to design our own set of features, using the standard MElt features as a starting point. With respect to the original MElt feature set, we have added new ones, such as prefixes and suffixes of the following word, as well as a hybrid contextual feature obtained by concatenating the tag predicted for the preceding word and the tag(s) provided by the external lexicon for the following word.",
"In order to select the best performing feature set, we carried out a series of experiments using the multilingual dataset provided during the SPMRL parsing shared task BIBREF42 . This included discarding useless or harmful features and selecting the maximal length of the prefixes and suffixes to be used as features, both for the current word and for the following word.",
"We incorporated in MElt the best performing feature set, described in Table TABREF1 . All models discussed in this paper are based on this feature set."
],
[
"We carried out our experiments on the Universal Dependencies v1.2 treebanks BIBREF21 , hereafter UD1.2, from which morphosyntactically annotated corpora can be trivially extracted. All UD1.2 corpora use a common tag set, the 17 universal PoS tags, which is an extension of the tagset proposed by BIBREF43 .",
"As our goal is to study the impact of lexical information for PoS tagging, we have restricted our experiments to UD1.2 corpora that cover languages for which we have morphosyntactic lexicons at our disposal, and for which BIBREF20 provide results. We considered UD1.2 corpora for the following 16 languages: Bulgarian, Croatian, Czech, Danish, English, French, German, Indonesian, Italian, Norwegian, Persian, Polish, Portuguese, Slovenian, Spanish and Swedish. Although this language list contains only one non-Indo-European (Indonesian), four major Indo-European sub-families are represented (Germanic, Romance, Slavic, Indo-Iranian). Overall, the 16 languages considered in our experiments are typologically, morphologically and syntactically fairly diverse."
],
[
"We generate our external lexicons using the set of source lexicons listed in Table TABREF3 . Since external lexical information is exploited via features, there is no need for the external lexicons and the annotated corpora to use the same PoS inventory. Therefore, for each language, we simply extracted from the corresponding lexicon the PoS of each word based on its morphological tags, by removing all information provided except for its coarsest-level category. We also added entries for punctuations when the source lexicons did not contain any.",
"We also performed experiments in which we retained the full original tags provided by the lexicons, with all morphological features included. On average, results were slightly better than those presented in the paper, although not statistically significantly. Moreover, the granularity of tag inventories in the lexicons is diverse, which makes it difficult to draw general conclusions about results based on full tags. This is why we only report results based on (coarse) PoS extracted from the original lexicons."
],
[
"In order to assess the respective contributions of external lexicons and word vector representations, we first compared the results of the three above-mentioned systems when trained without such additional lexical information. Table TABREF11 provides the results of MElt and MarMoT retrained on UD1.2 corpora, together with the results publised on the same corpora by BIBREF20 , using their best model not enhanced by external word vector representations —i.e. the model they call INLINEFORM0 , which is a bidirectional LSTM that combines both word and character embeddings.",
"These results show that Plank et al.'s (2016) bi-LSTM performs extremely well, surpassed by MarMoT on only 3 out of 16 datasets (Czech, French and Italian), and by MElt only once (Indonesian)."
],
[
"Table TABREF13 provides the results of four systems enriched with lexical information. The feature-based systems MElt and MarMoT, respectively based on MEMMs and CRFs, are extended with the lexical information provided by our morphosyntactic lexicons. This extension takes the form of additional features, as described in Section SECREF2 for MElt. The results reported by BIBREF20 for their bidirectional LSTM when initialised with Polyglot embeddings trained on full wikipedias are also included, together with their new system FREQBIN, also initialised with Polyglot embeddings. FREQBIN trains bi-LSTMs to predict for each input word both a PoS and a label that represents its log frequency in the training data. As they word it, “the idea behind this model is to make the representation predictive for frequency, which encourages the model not to share representations between common and rare words, thus benefiting the handling of rare tokens.”",
"The results, which are also displayed in Figures FIGREF14 and FIGREF15 , show that all systems reach very similar results on average, although discrepancies can be observed from one dataset to another, on which we shall comment shortly. The best performing system in terms of macro-average is MElt (96.60%). Both bi-LSTM systems reach the same score (96.58%), the difference with MElt's results being non significant, whereas MarMoT is only 0.14% behind (96.46%). Given the better baseline scores of the neural approaches, these results show that the benefit of using external lexicons in the feature-based models MElt and MarMoT are much higher than those using Polyglot word vector representations as initialisations for bi-LSTMs.",
"Yet these very similar overall results reflect a different picture when focusing on OOV tagging accuracy. The best models for OOV tagging accuracy are, by far, FREQBIN models, which are beaten by MarMoT and by MElt only once each (on English and Danish respectively). The comparison on OOV tagging between MElt and MarMoT shows that MElt performs better on average than MarMoT, despite the fact that MarMoT's baseline results were better than those reached by MElt. This shows that the information provided by external morphosyntactic lexicons is better exploited by MElt's lexical features than by those used by MarMoT. On the other hand, the comparison of both bi-LSTM-based approaches confirm that the FREQBIN models is better by over 10% absolute on OOV tagging accuracy (94.28% vs. 83.59%), with 65% lower error rate.",
"One of the important differences between the lexical information provided by an external lexicon and word vectors built from raw corpora, apart from the very nature of the lexical information provided, is the coverage and accuracy of this lexical information on rare words. All words in a morphosyntactic lexicon are associated with information of a same granularity and quality, which is not the case with word representations such as provided by Polyglot. Models that take advantage of external lexicons should therefore perform comparatively better on datasets containing a higher proportion of rarer words, provided the lexicons' coverage is high. In order to confirm this intuition, we have used a lexical richness metric based on the type/token ratio. Since this ratio is well-known for being sensitive to corpus length, we normalised it by computing it over the 60,000 first tokens of each training set. When this normalised type/token ratio is plotted against the difference between the results of MElt and both bi-LSTM-based models, the expected correlation is clearly visible (see Figure FIGREF16 ). This explains why MElt obtains better results on the morphologically richer Slavic datasets (average normalised type/token ratio: 0.28, average accuracy difference: 0.32 compared to both bi-LSTM+Polyglot and FREQBIN+Polyglot) and, at the other end of the spectrum, significantly worse results on the English dataset (normalised type/token ratio: 0.15, average accuracy difference: -0.56 compared to bi-LSTM+Polyglot, -0.57 compared to FREQBIN+Polyglot)."
],
[
"Two main conclusions can be drawn from our comparative results. First, feature-based tagging models adequately enriched with external morphosyntactic lexicons perform, on average, as well as bi-LSTMs enriched with word embeddings. Per-language results show that the best accuracy levels are reached by feature-based models, and in particular by our improved version of the MEMM-based system MElt, on datasets with high lexical variability (in short, for morphologically rich languages), whereas neural-based results perform better on datatsets with lower lexical variability (e.g. for English).",
"We have only compared the contribution of morphosyntactic lexicons to feature-based models (MEMMs, CRFs) and that of word vector representations to bi-LSTM-based models as reported by BIBREF20 . As mentioned above, work on the contribution of word vector representations to feature-based approaches has been carried out by BIBREF18 . However, the exploitation of existing morphosyntactic or morphological lexicons in neural models is a less studied question. Improvements over the state of the art might be achieved by integrating lexical information both from an external lexicon and from word vector representations into tagging models.",
"In that regard, further work will be required to understand which class of models perform the best. An option would be to integrate feature-based models such as a CRF with an LSTM-based layer, following recent proposals such as the one proposed by BIBREF45 for named entity recognition."
]
],
"section_name": [
"Introduction",
"MElt",
"Corpora",
"Lexicons",
"Baseline models",
"Models enriched with external lexical information",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"8e332b0027680efd19503bdeee7c4f78520d790d",
"bc6f5889e8eb92e03d5767c5458beb5249c7da71"
],
"answer": [
{
"evidence": [
"We carried out our experiments on the Universal Dependencies v1.2 treebanks BIBREF21 , hereafter UD1.2, from which morphosyntactically annotated corpora can be trivially extracted. All UD1.2 corpora use a common tag set, the 17 universal PoS tags, which is an extension of the tagset proposed by BIBREF43 .",
"As our goal is to study the impact of lexical information for PoS tagging, we have restricted our experiments to UD1.2 corpora that cover languages for which we have morphosyntactic lexicons at our disposal, and for which BIBREF20 provide results. We considered UD1.2 corpora for the following 16 languages: Bulgarian, Croatian, Czech, Danish, English, French, German, Indonesian, Italian, Norwegian, Persian, Polish, Portuguese, Slovenian, Spanish and Swedish. Although this language list contains only one non-Indo-European (Indonesian), four major Indo-European sub-families are represented (Germanic, Romance, Slavic, Indo-Iranian). Overall, the 16 languages considered in our experiments are typologically, morphologically and syntactically fairly diverse."
],
"extractive_spans": [],
"free_form_answer": "Universal Dependencies v1.2 treebanks for the following 16 languages: Bulgarian, Croatian, Czech, Danish, English, French, German,\nIndonesian, Italian, Norwegian, Persian, Polish, Portuguese, Slovenian, Spanish, and Swedish",
"highlighted_evidence": [
"We carried out our experiments on the Universal Dependencies v1.2 treebanks BIBREF21 , hereafter UD1.2, from which morphosyntactically annotated corpora can be trivially extracted. ",
"We considered UD1.2 corpora for the following 16 languages: Bulgarian, Croatian, Czech, Danish, English, French, German, Indonesian, Italian, Norwegian, Persian, Polish, Portuguese, Slovenian, Spanish and Swedish. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We carried out our experiments on the Universal Dependencies v1.2 treebanks BIBREF21 , hereafter UD1.2, from which morphosyntactically annotated corpora can be trivially extracted. All UD1.2 corpora use a common tag set, the 17 universal PoS tags, which is an extension of the tagset proposed by BIBREF43 ."
],
"extractive_spans": [
"Universal Dependencies v1.2 treebanks BIBREF21 , hereafter UD1.2"
],
"free_form_answer": "",
"highlighted_evidence": [
"We carried out our experiments on the Universal Dependencies v1.2 treebanks BIBREF21 , hereafter UD1.2, from which morphosyntactically annotated corpora can be trivially extracted. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"d9135203a92ded14d260a7d551b7a447c8b7c910"
]
},
{
"annotation_id": [
"4fe73c15d0aacea8353f58678afd7e2ff29c96ef",
"99016e58e6cac16f86f65bf98c0eba8135891c46"
],
"answer": [
{
"evidence": [
"As our goal is to study the impact of lexical information for PoS tagging, we have restricted our experiments to UD1.2 corpora that cover languages for which we have morphosyntactic lexicons at our disposal, and for which BIBREF20 provide results. We considered UD1.2 corpora for the following 16 languages: Bulgarian, Croatian, Czech, Danish, English, French, German, Indonesian, Italian, Norwegian, Persian, Polish, Portuguese, Slovenian, Spanish and Swedish. Although this language list contains only one non-Indo-European (Indonesian), four major Indo-European sub-families are represented (Germanic, Romance, Slavic, Indo-Iranian). Overall, the 16 languages considered in our experiments are typologically, morphologically and syntactically fairly diverse."
],
"extractive_spans": [
"Bulgarian, Croatian, Czech, Danish, English, French, German, Indonesian, Italian, Norwegian, Persian, Polish, Portuguese, Slovenian, Spanish and Swedish"
],
"free_form_answer": "",
"highlighted_evidence": [
"We considered UD1.2 corpora for the following 16 languages: Bulgarian, Croatian, Czech, Danish, English, French, German, Indonesian, Italian, Norwegian, Persian, Polish, Portuguese, Slovenian, Spanish and Swedish."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"As our goal is to study the impact of lexical information for PoS tagging, we have restricted our experiments to UD1.2 corpora that cover languages for which we have morphosyntactic lexicons at our disposal, and for which BIBREF20 provide results. We considered UD1.2 corpora for the following 16 languages: Bulgarian, Croatian, Czech, Danish, English, French, German, Indonesian, Italian, Norwegian, Persian, Polish, Portuguese, Slovenian, Spanish and Swedish. Although this language list contains only one non-Indo-European (Indonesian), four major Indo-European sub-families are represented (Germanic, Romance, Slavic, Indo-Iranian). Overall, the 16 languages considered in our experiments are typologically, morphologically and syntactically fairly diverse."
],
"extractive_spans": [
"Bulgarian",
"Croatian",
"Czech",
"Danish",
"English",
"French",
"German",
"Indonesian",
"Italian",
"Norwegian",
"Persian",
"Polish",
"Portuguese",
"Slovenian",
"Spanish ",
"Swedish"
],
"free_form_answer": "",
"highlighted_evidence": [
" We considered UD1.2 corpora for the following 16 languages: Bulgarian, Croatian, Czech, Danish, English, French, German, Indonesian, Italian, Norwegian, Persian, Polish, Portuguese, Slovenian, Spanish and Swedish."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"d9135203a92ded14d260a7d551b7a447c8b7c910"
]
}
],
"nlp_background": [
"",
""
],
"paper_read": [
"",
""
],
"question": [
"which datasets did they experiment with?",
"which languages are explored?"
],
"question_id": [
"3ddff6b707767c3dd54d7104fe88b628765cae58",
"0a5ffe4697913a57fda1fd5a188cd5ed59bdc5c7"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
""
],
"topic_background": [
"",
""
]
} | {
"caption": [
"Table 1: Feature set used by our MElt models. The current word is wi = w1i . . . w ni i . Previously assigned tags for the two previous words are ti−2 and ti−1. The tag to be predicted for the current word is ti, which can be assigned any tag T in the tagset. The lex function applied to a word returns the set of all tags known to the lexicon for this word, or the singleton {_unk_} if the word is unknown to the lexicon. Boolean functions used by the local standard features have self-explanatory names.",
"Table 2: Information about the morphosyntactic lexicons used as external sources of lexical information in our MElt and MarMoT models. The number of entries and tagset sizes refers to the morphosyntactic lexicons we extracted and used in our models, not to the original resources.",
"Table 3: Overall accuracy (in %) of baseline systems, i.e. MElt and MarMoT models trained without external lexicons, and Plank et al.’s (2016) ~c + ~w models, which do not make use of Polyglot embeddings. Best scores are highlighted for each corpus.",
"Table 4: Accuracy (in %) of the feature-based systems MElt and MarMoT as well as the two best LSTM-based systems by Plank et al. (2016) on UD1.2 datasets, which all use the 17 “universal PoS tags”. MElt and MarMoT models integrate the external lexicons listed in Table 2, whereas bidirectional LSTM-based systems rely on Polyglot word embeddings. Best scores overall and on OOV words are highlighted for each corpus.",
"Figure 1: Graphical visualisation of the overall tagging accuracies for all four types of enriched models. Detailed results are given in Table 4. Languages are sorted by increasing MElt’s overall tagging scores.",
"Figure 2: Graphical visualisation of the OOV tagging accuracies for all types of models enriched with external lexicons. Detailed results are given in Table 4. Languages are sorted by increasing MElt’s OOV tagging scores.",
"Figure 3: Difference between the tagging accuracy of lexicon-enhanced MElt models and each of the two types of Polyglot-enhanced neural bi-LSTM models plotted against training sets’ normalised token/type ratio."
],
"file": [
"7-Table1-1.png",
"8-Table2-1.png",
"10-Table3-1.png",
"11-Table4-1.png",
"11-Figure1-1.png",
"12-Figure2-1.png",
"12-Figure3-1.png"
]
} | [
"which datasets did they experiment with?"
] | [
[
"1606.03676-Corpora-1",
"1606.03676-Corpora-0"
]
] | [
"Universal Dependencies v1.2 treebanks for the following 16 languages: Bulgarian, Croatian, Czech, Danish, English, French, German,\nIndonesian, Italian, Norwegian, Persian, Polish, Portuguese, Slovenian, Spanish, and Swedish"
] | 66 |
1909.04002 | The Trumpiest Trump? Identifying a Subject's Most Characteristic Tweets | The sequence of documents produced by any given author varies in style and content, but some documents are more typical or representative of the source than others. We quantify the extent to which a given short text is characteristic of a specific person, using a dataset of tweets from fifteen celebrities. Such analysis is useful for generating excerpts of high-volume Twitter profiles, and understanding how representativeness relates to tweet popularity. We first consider the related task of binary author detection (is x the author of text T?), and report a test accuracy of 90.37% for the best of five approaches to this problem. We then use these models to compute characterization scores among all of an author's texts. A user study shows human evaluators agree with our characterization model for all 15 celebrities in our dataset, each with p-value < 0.05. We use these classifiers to show surprisingly strong correlations between characterization scores and the popularity of the associated texts. Indeed, we demonstrate a statistically significant correlation between this score and tweet popularity (likes/replies/retweets) for 13 of the 15 celebrities in our study. | {
"paragraphs": [
[
"Social media platforms, particularly microblogging services such as Twitter, have become increasingly popular BIBREF0 as a means to express thoughts and opinions. Twitter users emit tweets about a wide variety of topics, which vary in the extent to which they reflect a user's personality, brand and interests. This observation motivates the question we consider here, of how to quantify the degree to which tweets are characteristic of their author?",
"People who are familiar with a given author appear to be able to make such judgments confidently. For example, consider the following pair of tweets written by US President Donald Trump, at the extreme sides of our characterization scores (0.9996 vs. 0.0013) for him:",
"",
"Tweet 1: Thank you for joining us at the Lincoln Memorial tonight- a very special evening! Together, we are going to MAKE AMERICA GREAT AGAIN!",
"Tweet 2: “The bend in the road is not the end of the road unless you refuse to take the turn.\" - Anonymous",
"Although both these tweets are from the same account, we assert that Tweet 1 sounds more characteristic of Donald Trump than Tweet 2. We might also guess that the first is more popular than second. Indeed, Tweet 1 received 155,000 likes as opposed to only 234 for Tweet 2.",
"Such an author characterization score has many possible applications. With the ability to identify the most/least characteristic tweets from a person, we can generate reduced excerpts for high-volume Twitter profiles. Similarly, identifying the least characteristic tweets can highlight unusual content or suspicious activity. A run of sufficiently unrepresentative tweets might be indicative that a hacker has taken control of a user's account.",
"But more fundamentally, our work provides the necessary tool to study the question of how “characteristic-ness\" or novelty are related to tweet popularity. Do tweets that are more characteristic of the user get more likes, replies and retweets? Is such a relationship universal, or does it depend upon the personality or domain of the author? Twitter users with a large follower base can employ our methods to understand how characteristic a new potential tweet sounds, and obtain an estimate of how popular it is likely to become.",
"To answer these questions, we formally define the problem of author representativeness testing, and model the task as a binary classification problem. Our primary contributions in this paper include:",
"Five approaches to authorship verification: As a proxy for the question of representativeness testing (which has no convincing source of ground truth without extensive human annotation), we consider the task of distinguishing tweets written by a given author from others they did not write. We compare five distinct computational approaches to such binary tweet classification (user vs. non-user). Our best model achieves a test accuracy of 90.37% over a dataset of 15 Twitter celebrities. We use the best performing model to compute a score (the probability of authorship), which quantifies how characteristic of the user a given tweet is.",
"Human evaluation study: To verify that our results are in agreement with human judgment of how `characteristic' a tweet is, we ask human evaluators which of a pair of tweets sounds more characteristic of the given celebrity. The human evaluators are in agreement with our model 70.40% of the time, significant above the $0.05$ level for each of our 15 celebrities.",
"Correlation analysis for popularity: Our characterization score exhibits strikingly high absolute correlation with popularity (likes, replies and retweets), despite the fact that tweet text is the only feature used to train the classifier which yields these scores.",
"For 13 of the 15 celebrities in our dataset, we observe a statistically significant correlation between characterization score and popularity. Figure FIGREF4 shows the relation between tweet score and tweet popularity for Donald Trump and Justin Bieber respectively. The figure shows that the sign of this association differs for various celebrities, reflecting whether their audience seeks novelty or reinforcement.",
"Iterative sampling for class imbalance: Our task requires distinguishing a user's tweets (perhaps 1,000 positive training examples) from the sea of all other user's tweets (implying billions of possible negative training examples). We present an iterative sampling technique to exploit this class imbalance, which improves the test accuracy for negative examples by 2.62%."
],
[
"We formally define the author representativeness problem as follows:",
"Input: A Twitter author $U$ and the collection of their tweets, and a new tweet $T$.",
"Problem: Compute $\\textrm {score}(T, U)$, the probability that $T$ was written by $U$. This score quantifies how characteristic of writer $U$, tweet $T$ is."
],
[
"In order to obtain this representativeness score, we model our task as a classification problem, where we seek to distinguish tweets from $U$ against tweets from all other users.",
"By modeling this as a binary classification problem, it becomes possible to quantify how characteristic of a writer a tweet is, as a probability implied by its distance from the decision boundary. Thus, we obtain a characterization score between 0 and 1 for each tweet.",
"Challenges: In training a classifier to distinguish between user and non-user tweets, we should ideally have an equal amount of examples of both classes. User tweets are simply all the tweets from that user's Twitter account, and measure perhaps in the thousands. Indeed, the number of tweets per user per day is limited to 2400 per day by current Twitter policy (https://help.twitter.com/en/rules-and-policies/twitter-limits). The negative examples consist of all tweets written by other Twitter users, a total of approximately 500 million per day (https://business.twitter.com). Thus there is an extreme class imbalance between user and non-user tweets. Moreover, the nature of language used on Twitter does not conform to formal syntactic or semantic rules. The sentences tend to be highly unstructured, and the vocabulary is not restricted to a particular dictionary."
],
[
"For the binary classification task described in Section SECREF6, we term tweets from $U$ as positive examples, and tweets from other users as negative examples.",
"Positive examples: We take tweets written by 15 celebrities from various domains, from 01-Jan-2008 to 01-Dec-2018, as positive examples. Properties of these Twitter celebrities are provided in Table TABREF10.",
"Negative examples: We have collected 1% of tweets from Twitter's daily feed using the Twitter API (https://developer.twitter.com/en/docs.html) to use as negative examples.",
"Preprocessing and Filtering: We have preprocessed and filtered the data to remove tweets that are unrepresentative or too short for analysis. All text has been converted to lowercase, and stripped of punctuation marks and URLs. This is because our approaches are centered around word usage. However, in future models, punctuation may prove effective as a feature. Further, we restrict analysis to English language tweets containing no attached images. We select only tweets which are more than 10 words long, and contain at least 5 legitimate (dictionary) English words. We define an unedited transfer of an original tweet as a retweet, and remove these from our dataset. Since comments on retweets are written by the user themselves, we retain these in our dataset.",
"We note that celebrity Twitter accounts can be handled by PR agencies, in addition to the owner themselves. Because our aim is to characterize Twitter profiles as entities, we have not attempted to distinguish between user-written and agency-written tweets. However, this is an interesting direction for future research.",
"We use a train-test split of 70-30% on the positive examples, and generate negative training and test sets of the same sizes for each user, by randomly sampling from the large set of negative examples."
],
[
"The challenge of author identification has a long history in NLP. PAN 2013 BIBREF1 introduced the question: “Given a set of documents by the same author, is an additional (out-of-set) document also by that author?” The corpus is comprised of text pieces from textbooks, newspaper articles, and fiction. Submissions to PAN 2014 BIBREF2 also model authorship verification as binary classification, by using non-author documents as negative examples. The best submission BIBREF3 in PAN 2013 uses the General Impostors (GI) method, which is a modification of the Impostors Method BIBREF4. The best submission BIBREF5 in PAN 2014 presents a modification of the GI method. These methods are based on the impostors framework BIBREF6.",
"compressionveenman used compression distance as a document representation, for authorship verification in PAN 2013. worddist present a global feature extraction approach and achieve state-of-the-art accuracy for the PAN 2014 corpus. The best submission BIBREF7 in PAN 2015 BIBREF8 uses a character-level RNN model for author identification, in which each author is represented as a sub-model, and the recurrent layer is shared by all sub-models. This is useful if the number of authors is fixed, and the problem is modeled as multi-class classification. deeplearningauthid also approach multi-class author identification, using deep learning for feature extraction, and forensic using hierarchical clustering.",
"intrinsicauthorverification propose an intrinsic profile-based verification method that uses latent semantic indexing (LSI), which is effective for longer texts. authoneclass and limiteddata explore methods for authorship verification for larger documents such as essays and novels. emailauthid and email2 explore author identification for emails, and taskguided for scientific papers. azarbonyad2015time make use of temporal changes in word usage to identify authors of tweets and emails. unigramsandbigrams, featureeval, and unstructured evaluate the utility of various features for this task. authidusingtextsampling proposes text sampling to address the lack of text samples of undisputed authorship, to produce a desirable distribution over classes.",
"koppel2009computational compare methods for variants of the authorship attribution problem. bhargava2013stylometric apply stylometric analysis to tweets to determine the author. lopez2015discriminative propose a document representation capturing discriminative and subprofile-specific information of terms. rocha2016authorship review methods for authorship attribution for social media forensics. peng2016bit use bit-level n-grams for determining authorship for online news. peng2016astroturfing apply this method to detect astroturfing on social media. theophilo2019needle employ deep learning specifically for authorship attribution of short messages."
],
[
"suh2010want leverages features such as URL, number of hashtags, number of followers and followees etc. in a generalized linear model, to predict the number of retweets. naveed2011bad extend this approach to perform content-based retweet prediction using several features including sentiments, emoticons, punctuations etc. bandari2012pulse apply the same approach for regression as well as classification, to predict the number of retweets specifically for news articles. zaman2014bayesian present a Bayesian model for retweet prediction using early retweet times, retweets of other tweets, and the user's follower graph. tan2014effect analyze whether different wording of a tweet by the same author affects its popularity. SEISMIC BIBREF9 and PSEISMIC BIBREF10 are statistical methods to predict the final number of retweets. zhang2018predicting approach retweet prediction as a multi-class classification problem, and present a feature-weighted model, where weights are computed using information gain."
],
[
"Various methods to handle imbalanced datasets have been described by kotsiantis2006handling. These include undersampling BIBREF11, oversampling, and feature selection BIBREF12 at the data level. However, due to random undersampling, potentially useful samples can be discarded, while random oversampling poses the risk of overfitting. This problem can be handled at the algorithmic level as well: the threshold method BIBREF13 produces several classifiers by varying the threshold of the classifier score. One-class classification can be performed using a divide-and-conquer approach, to iteratively build rules to cover new training instances BIBREF14. Cost-sensitive learning BIBREF15 uses unequal misclassification costs to address the class imbalance problem."
],
[
"As described in Section SECREF6, we build classification models to distinguish between user and non-user tweets. We have explored five distinct approaches to build such models."
],
[
"This approach is inspired from Kolmogorov complexity BIBREF16, which argues that the compressibility of a text reflects the quality of the underlying model. We use the Lempel-Ziv-Welch (LZW) compression algorithm BIBREF17 to approximate Kolmogorov complexity by dynamically building a dictionary to encode word patterns from the training corpus. The longest occurring pattern match present in the dictionary is used to encode the text.",
"We hypothesize that the length of a tweet $T$ from user $U$, compressed using a dictionary built from positive examples, will be less than the length of the same tweet compressed using a dictionary built from negative examples.",
"We use the following setup to classify test tweets for each Twitter user in our dataset:",
"Build an encoding dictionary using positive examples ($\\textrm {train}_{\\textrm {pos}}$), and an encoding dictionary using negative examples ($\\textrm {train}_{\\textrm {neg}}$).",
"Encode the new tweet $T$ using both these dictionaries, to obtain $T_{\\textrm {pos}} = \\textrm {encode}_{\\textrm {pos}}(T)$ and $T_{\\textrm {neg}} = \\textrm {encode}_{\\textrm {neg}}(T)$ respectively.",
"If the length of $T_{\\textrm {pos}}$ is less than that of $T_{\\textrm {neg}}$, classify $T$ as positive; else, classify it as negative.",
"This gives us the class label for each new tweet $T$. In addition, we compute the characterization score of tweet $T$ with respect to user $U$, as described in Equation DISPLAY_FORM18.",
"Thus the shorter the length of the encoded tweet, the more characteristic of the user $T$ is."
],
[
"We hypothesize that each user writes about topics with a particular probability distribution, and that each tweet reflects the probability distribution over these topics. We train a topic model using Latent Dirichlet Allocation (LDA) BIBREF18 on a large corpus of tweets, and use this topic model to compute topic distributions for individual tweets. We then use these values as features. We experiment with two types of classifiers: Logistic Regression (LR), and Multi Linear Perceptron (MLP) of size $(5, 5, 5)$. We represent each tweet as a distribution over $n=500$ topics.",
"The characterization score of a tweet $T$ is given by the classifier's confidence that $T$ belongs to the positive class."
],
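A possible realization of this topic-model pipeline, sketched with scikit-learn; the corpus used to fit LDA, the preprocessing, and the classifier hyperparameters are assumptions rather than the authors' exact setup.

```python
# Sketch of the topic-modeling approach: LDA topic distributions as classifier features.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

def train_topic_classifier(background_tweets, train_texts, train_labels, n_topics=500):
    # Fit the topic model on a large background corpus of tweets.
    vectorizer = CountVectorizer(stop_words="english", min_df=2)
    background_counts = vectorizer.fit_transform(background_tweets)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(background_counts)

    # Represent each labelled tweet as its distribution over the topics.
    train_topics = lda.transform(vectorizer.transform(train_texts))
    clf = LogisticRegression(max_iter=1000)
    clf.fit(train_topics, train_labels)          # labels: 1 = user, 0 = non-user
    return vectorizer, lda, clf

def characterization_score(tweet, vectorizer, lda, clf):
    topics = lda.transform(vectorizer.transform([tweet]))
    return clf.predict_proba(topics)[0, 1]       # confidence that the tweet is the user's
```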
[
"We hypothesize that a Twitter user can be characterized by usage of words and their frequencies in tweets, and model this using n-gram frequencies.",
"We use the following setup to classify test tweets for each Twitter user in our dataset:",
"Build a frequency dictionary of all n-grams in positive examples ($\\textrm {train}_{\\textrm {pos}}$), and a frequency dictionary of all n-grams in negative examples ($\\textrm {train}_{\\textrm {neg}}$).",
"Compute the average probability of all n-gram sequences in the new tweet $T$ using both these dictionaries, to obtain $\\textrm {prob}_{\\textrm {pos}}(T)$ and $\\textrm {prob}_{\\textrm {neg}}(T)$ respectively. Here, we use add-one smoothing and conditional backoff to compute these probability values.",
"If $\\textrm {prob}_{\\textrm {pos}}(T)$ is greater than $\\textrm {prob}_{\\textrm {neg}}(T)$, classify $T$ as positive; else, classify it as negative.",
"The characterization score of tweet $T$ is given by the average n-gram probability computed using the frequency dictionary of $\\textrm {train}_{\\textrm {pos}}$. We experiment with $n = 1$ (unigrams) and $n = 2$ (bigrams)."
],
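A minimal sketch of the n-gram scoring described above, assuming whitespace-tokenized tweets; the conditional backoff mentioned in the text is simplified away here, and only add-one smoothing is shown.

```python
# Sketch of the n-gram approach: average smoothed n-gram probability under per-class
# frequency dictionaries, used both for classification and as the characterization score.
from collections import Counter

def ngram_counts(tweets, n):
    counts = Counter()
    for tokens in tweets:
        counts.update(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return counts

def avg_ngram_prob(tokens, counts, vocab_size, n):
    total = sum(counts.values())
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    if not ngrams:
        return 0.0
    # Add-one smoothing: (count + 1) / (total + V)
    probs = [(counts[g] + 1) / (total + vocab_size) for g in ngrams]
    return sum(probs) / len(probs)

def classify(tokens, counts_pos, counts_neg, vocab_size, n=1):
    p_pos = avg_ngram_prob(tokens, counts_pos, vocab_size, n)   # also the characterization score
    p_neg = avg_ngram_prob(tokens, counts_neg, vocab_size, n)
    return "positive" if p_pos > p_neg else "negative"
```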
[
"We hypothesize that if we obtain latent representations of tweets as documents, tweets from the same author will cluster together, and will be differentiable from tweets from others. To that end, we use the following setup:",
"We obtain representations of tweets as document embeddings. We experiment with two types of document embeddings: FastText BIBREF19 (embedding size = 100) and BERT-Base, uncased BIBREF20 (embedding size = 768).",
"We then use these embeddings as features to train a classification model. We experiment with two types of classifiers: Logistic Regression (LR) and Multi Linear Perceptron (MLP) of size $(5, 5, 5)$.",
"The characterization score of tweet $T$ is given by the classifier's confidence that $T$ belongs to the positive class.",
"Iterative sampling: As described in Section SECREF6, there exists an extreme class imbalance for this binary classification task, in that the number of negative examples is far more than the number of positive examples. Here, we explore an iterative sampling technique to address this problem. We train our classifier for multiple iterations, coupling the same $\\textrm {train}_{\\textrm {pos}}$ with a new randomly sampled $\\textrm {train}_{\\textrm {neg}}$ set in each iteration.",
"We conduct this experiment for all users with the best performing model for this approach, i.e. we use BERT embeddings as features, and MLP for classification. We train this classifier for 40 iterations, and compare the model's performance when we use the same set of negative examples vs. when we randomly sample new negative examples in each iteration.",
"Figure FIGREF28 shows the mean train and test accuracy for all users over 40 iterations. As expected, the training accuracy is higher if we do not sample, as the model gets trained on the same data repeatedly in each iteration. However, if we perform random sampling, the model is exposed to a larger number of negative examples, which results in a higher test accuracy (+ 1.08%), specifically for negative test examples (+ 2.62%)."
],
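The iterative negative-sampling loop can be sketched as below, assuming precomputed 768-dimensional sentence embeddings as NumPy arrays; carrying the model weights across iterations via `warm_start` is an assumption about how the repeated training was realized.

```python
# Sketch of iterative sampling: keep the same positives, draw fresh random negatives
# from the much larger pool in every iteration, and keep training the same classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_with_iterative_sampling(pos_embeddings, neg_pool, n_iterations=40, seed=0):
    rng = np.random.default_rng(seed)
    clf = MLPClassifier(hidden_layer_sizes=(5, 5, 5), warm_start=True, max_iter=200)
    y_pos = np.ones(len(pos_embeddings))
    for _ in range(n_iterations):
        # Sample as many negatives as there are positives.
        idx = rng.choice(len(neg_pool), size=len(pos_embeddings), replace=False)
        X = np.vstack([pos_embeddings, neg_pool[idx]])
        y = np.concatenate([y_pos, np.zeros(len(idx))])
        clf.fit(X, y)            # warm_start=True continues from the previous weights
    return clf
```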
[
"In this approach, we tokenize each tweet, and obtain embeddings for each token. We then sequentially give these embeddings as input to a classifier.",
"We use a pretrained model (BERT-Base, Uncased: 12-layer, 768-hidden, 12-heads, 110M parameters) to generate token embeddings of size 768, and pass these to a Long Short Term Memory (LSTM) BIBREF21 classifier. We use an LSTM layer with 768 units with dropout and recurrent dropout ratio 0.2, followed by a dense layer with sigmoid activation. We train this model using the Adam optimizer BIBREF22 and binary cross-entropy loss, with accuracy as the training metric."
],
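A sketch of this token-embedding LSTM classifier; the embedding extraction is shown with the Hugging Face `transformers` package and Keras, which is an assumption about tooling rather than the authors' pipeline, and the maximum token length of 64 is illustrative.

```python
# Sketch of the sequential-modeling approach: per-token BERT embeddings fed to an LSTM.
import numpy as np
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = TFAutoModel.from_pretrained("bert-base-uncased")

def token_embeddings(texts, max_len=64):
    enc = tokenizer(texts, padding="max_length", truncation=True,
                    max_length=max_len, return_tensors="tf")
    # last_hidden_state has shape (batch, max_len, 768)
    return bert(**enc).last_hidden_state.numpy()

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 768)),
    tf.keras.layers.LSTM(768, dropout=0.2, recurrent_dropout=0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Hypothetical training call, with train_texts a list of tweets and y_train 0/1 labels:
# model.fit(token_embeddings(train_texts), y_train, epochs=5, batch_size=32)
```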
[
"Table TABREF24 presents the user-wise test accuracy of the five approaches under the specified configurations. Note that the test set contains an equal number of positive and negative examples for each author.",
"Other baselines that we attempted to compare against include the best submissions to the PAN 2013 and 2014 author verification challenge: pan2013winner and slightlymodifiedimpostor, which are variants of the Impostors Method. This challenge employed significantly longer documents (with an average of 1039, 845, and 4393 words per document for articles, essays and novels respectively, as opposed to an average of 19 words per tweet) and significantly fewer documents per author (an average of 3.2, 2.6 and 1 document/s per author, as opposed to an average of 6738 tweets per user). Our experiments with the authorship verification classifier BIBREF23 showed that the Impostors Method is prohibitively expensive on larger corpora, and also performed too inaccurately on short texts to provide a meaningful baseline.",
"For 13 of the 15 users in our dataset, Approach SECREF29 (token embeddings followed by sequential modeling) has the highest accuracy. This model correctly identifies the author of 90.37% of all tweets in our study, and will be used to define the characterization score for our subsequent studies."
],
[
"To verify whether human evaluators are in agreement with our characterization model, we conducted a user study using MTurk BIBREF24."
],
[
"For each user in our dataset, we build a set of 20 tweet pairs, with one tweet each from the 50 top-scoring and bottom-scoring tweets written by the user. We ask the human evaluator to choose which tweet sounds more characteristic of the user. To validate that the MTurk worker knows enough about the Twitter user to pick a characteristic tweet, we use a qualification test containing a basic set of questions about the Twitter user. We were unable to find equal numbers of Turkers familiar with each subject, so our number of evaluators $n$ differs according to author."
],
[
"Table TABREF33 describes the results obtained in the user study: the mean and standard deviation of percentage of answers in agreement with our model, the p-value, and the number of MTurk workers who completed each task. We find that the average agreement of human evaluators with our model is 70.40% over all 15 users in our dataset.",
"For each of the 15 celebrities, the human evaluators agree with our model above a significance level of 0.05, and in 13 of 15 cases above a level of $10^{-5}$. This makes clear our scores are measuring what we intend to be measuring."
],
[
"We now explore the relationship between characterization score and tweet popularity for each of the users in our dataset. To analyze this relationship, we perform the following procedure for each author $U$:",
"Sort all tweets written by $U$ in ascending order of characterization score.",
"Bucket the sorted tweets by percentile score (1 to 100).",
"For each bucket, calculate the mean number of likes, replies, and retweets.",
"Compute the correlation of this mean and the percentile score.",
"The Pearson correlation coefficients (r-values) are listed in Table TABREF39. The users at the top (Trump, Bachchan, Modi) all display very strong positive correlation. We name this group UPC (Users with Positive Correlation), and the group of users at the bottom (Grande, Bieber, Kardashian) as UNC (Users with Negative Correlation)."
],
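The bucketing-and-correlation procedure above can be sketched as follows, assuming a pandas DataFrame with one row per tweet and columns `score`, `likes`, `replies`, and `retweets` (column names are illustrative).

```python
# Sketch of the correlation analysis: bucket tweets by characterization-score percentile
# and correlate the per-bucket mean popularity with the percentile.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

def popularity_correlations(df):
    df = df.copy()
    # Percentile bucket (1..100) of each tweet by characterization score.
    df["percentile"] = np.ceil(df["score"].rank(method="first", pct=True) * 100).astype(int)
    results = {}
    for measure in ["likes", "replies", "retweets"]:
        means = df.groupby("percentile")[measure].mean()
        r, p = pearsonr(means.index.values, means.values)
        results[measure] = (r, p)    # Pearson r and p-value per popularity measure
    return results
```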
[
"For users with positive correlation, the higher the tweet's characterization score, the more popular it becomes, i.e. the more likes, replies, and retweets it receives. In contrast, for users with negative correlation, the higher the tweet score, the less popular it becomes.",
"Figure FIGREF41 shows the plot of log mean number of likes per bucket vs. tweet score percentile, for users with the highest positive correlation. Similarly, Figure FIGREF42 shows the plot of log mean number of likes per bucket vs. tweet score percentile, for users with the highest negative correlation.",
"One may question whether these results are due to temporal effects: user's popularity vary with time, and perhaps the model's more characteristic tweets simply reflect periods of authorship. Figures FIGREF41 and FIGREF42 disprove this hypothesis. Here the color of each point denotes the year for which most tweets are present in the corresponding bucket. Since the distribution of colors over time is not clustered, we infer that the observed result is not an artifact of temporal effects. In both cases, there is a strong trend in tweet popularity based on tweet score. We note that the plots are presented on the log scale, meaning the trends here are exponential."
],
[
"We present examples of the most and least characteristic tweets for celebrities from three categories, along with their corresponding characterization scores computed using Approach SECREF29."
]
],
"section_name": [
"Introduction",
"Problem Formulation",
"Problem Formulation ::: Methodology",
"Problem Formulation ::: Data",
"Related work ::: Author identification and verification",
"Related work ::: Predicting tweet popularity",
"Related work ::: Training with imbalanced datasets",
"Approaches to authorship verification",
"Approaches to authorship verification ::: Approach 1: Compression",
"Approaches to authorship verification ::: Approach 2: Topic modeling",
"Approaches to authorship verification ::: Approach 3: n-gram probability",
"Approaches to authorship verification ::: Approach 4: Document embeddings",
"Approaches to authorship verification ::: Approach 5: Token embeddings and sequential modeling",
"Approaches to authorship verification ::: Results and Comparison",
"User study",
"User study ::: Setup",
"User study ::: Results",
"Mapping with popularity ::: Correlation",
"Mapping with popularity ::: Interpretation",
"Mapping with popularity ::: Qualitative Analysis"
]
} | {
"answers": [
{
"annotation_id": [
"52d801477ad431d9a73db132dff174c5c9e7e8b0",
"b71c86ec9bff98758b4ab8535f72689ee206267c"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Twitter celebrities in our dataset, with tweet counts before and after filtering (Foll. denotes followers in millions)"
],
"extractive_spans": [],
"free_form_answer": "Amitabh Bachchan, Ariana Grande, Barack Obama, Bill Gates, Donald Trump,\nEllen DeGeneres, J K Rowling, Jimmy Fallon, Justin Bieber, Kevin Durant, Kim Kardashian, Lady Gaga, LeBron James,Narendra Modi, Oprah Winfrey",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Twitter celebrities in our dataset, with tweet counts before and after filtering (Foll. denotes followers in millions)"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 1: Twitter celebrities in our dataset, with tweet counts before and after filtering (Foll. denotes followers in millions)"
],
"extractive_spans": [],
"free_form_answer": "Celebrities from varioius domains - Acting, Music, Politics, Business, TV, Author, Sports, Modeling. ",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Twitter celebrities in our dataset, with tweet counts before and after filtering (Foll. denotes followers in millions)"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
""
],
"paper_read": [
""
],
"question": [
"What kind of celebrities do they obtain tweets from?"
],
"question_id": [
"4d28c99750095763c81bcd5544491a0ba51d9070"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
""
],
"topic_background": [
""
]
} | {
"caption": [
"Figure 1: Plot of log mean number of likes against tweet score percentile for Donald Trump and Justin Bieber. Node color denotes the year for which the maximum number of tweets are present in each percentile bucket, demonstrating that this is not merely a temporal correlation.",
"Table 1: Twitter celebrities in our dataset, with tweet counts before and after filtering (Foll. denotes followers in millions)",
"Figure 2: Mean accuracy of the BERT + MLP classifier for all users over 40 iterations",
"Table 2: Test accuracy (%) of five approaches to classify user vs. non-user tweets (The best performing approach is shown in bold for each user) [Note that for each user, the test set contains an equal number of positive and negative examples.]",
"Table 3: MTurk user study results: For each of these 15 celebrities, human evaluators support our representativeness scores with a significance level above 0.05. (p-values < 10−5 are shown in bold.)",
"Table 4: Pearson correlation coefficients between mean popularity measure and percentile, for each user (Coefficients with p-value < 0.01 are shown in bold color). Green values exhibit significant positive correlation, and red values significant negative correlation.",
"Figure 3: Log mean likes vs. percentile for users of positive correlation (The color denotes the year for which maximum tweets are present in the percentile bucket).",
"Figure 4: Log mean likes vs. percentile for users of negative correlation (The color denotes the year for which maximum tweets are present in the percentile bucket)."
],
"file": [
"2-Figure1-1.png",
"3-Table1-1.png",
"6-Figure2-1.png",
"6-Table2-1.png",
"7-Table3-1.png",
"8-Table4-1.png",
"8-Figure3-1.png",
"8-Figure4-1.png"
]
} | [
"What kind of celebrities do they obtain tweets from?"
] | [
[
"1909.04002-3-Table1-1.png"
]
] | [
"Celebrities from varioius domains - Acting, Music, Politics, Business, TV, Author, Sports, Modeling. "
] | 68 |
1911.03343 | Negated LAMA: Birds cannot fly | Pretrained language models have achieved remarkable improvements in a broad range of natural language processing tasks, including question answering (QA). To analyze pretrained language model performance on QA, we extend the LAMA (Petroni et al., 2019) evaluation framework by a component that is focused on negation. We find that pretrained language models are equally prone to generate facts ("birds can fly") and their negation ("birds cannot fly"). This casts doubt on the claim that pretrained language models have adequately learned factual knowledge. | {
"paragraphs": [
[
"Pretrained language models like Transformer-XL BIBREF1, ELMo BIBREF2 and BERT BIBREF3 have emerged as universal tools that capture a diverse range of linguistic and factual knowledge.",
"Recently, BIBREF0 introduced LAMA (LAnguage Model Analysis) to investigate to what extent pretrained language models have the capacity to recall factual knowledge without the use of fine-tuning. The training objective of pretrained language models is to predict masked tokens in a sequence. With this “fill-in-the-blank” scheme, question answering tasks can be reformulated as cloze statements. For example, “Who developed the theory of relativity?” is reformulated as “The theory of relativity was developed by [MASK].”. This setup allows for unsupervised open domain question answering. BIBREF0 find that, on this task, pretrained language models outperform supervised baselines using traditional knowledge bases with access to oracle knowledge.",
"This work analyzes the understanding of pretrained language models of factual and commonsense knowledge stored in negated statements. To this end, we introduce the negated LAMA dataset. We construct it by simply inserting negation elements (e.g., “not”) in LAMA cloze statement (e.g., “The theory of relativity was not developed by [MASK].”). In our experiments, we query the pretrained language models with both original LAMA and negated LAMA statements and compare their predictions in terms of rank correlation and overlap of top predictions. We find that the predicted filler words often have high overlap. Thus, negating a cloze statement does not change the predictions in many cases – but of course it should as our example “birds can fly” vs. “birds cannot fly” shows. We identify and analyze a subset of cloze statements where predictions are different. We find that BERT handles negation best among pretrained language models, but it still fails badly on most negated statements."
],
[
"A cloze statement is generated from a subject-relation-object triple from a knowledge base and from a templatic statement for the relation that contains variables X and Y for subject and object (e.g, “X was born in Y”). We then substitute the subject for X and MASK for Y. The triples are chosen such that Y is always a single-token answer.",
"LAMA covers different sources: The Google-RE set covers the three relations “place of birth”, “date of birth” and “place of death”. T-REx BIBREF4 consists of a subset of Wikidata triples covering 41 relations. ConceptNet BIBREF5 combines 16 commonsense relationships between words and/or phrases. The underlying Open Mind Common Sense corpus provides matching statements to query the language model. SQuAD BIBREF6 is a standard question answering dataset. LAMA contains a subset of 305 context-insensitive questions and provides manually reformulated cloze-style statements to query the model.",
"We created negated versions of Google-RE, T-REx and SQuAD by manually inserting a negation element in each template or statement. We did the same for a subset of ConceptNet that is easy to negate. We selected this subset by filtering for sentence length and extracting common queries."
],
[
"We use the source code provided by BIBREF0 and BIBREF7 and evaluate using Transformer-XL large (Txl), ELMo original (Eb), ELMo 5.5B (E5B), BERT-base (Bb) and BERT-large (Bl)."
],
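For readers who want to reproduce the querying step, a comparable probe can be run with the Hugging Face `fill-mask` pipeline as sketched below; this is not the LAMA code of BIBREF0, and only BERT is shown.

```python
# Sketch: query a masked language model with a LAMA-style cloze statement and its negation.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

original = "The theory of relativity was developed by [MASK]."
negated = "The theory of relativity was not developed by [MASK]."

for statement in (original, negated):
    print(statement)
    for candidate in fill(statement, top_k=3):
        # Each candidate carries the predicted token and its probability.
        print(f"  {candidate['token_str']}\t{candidate['score']:.4f}")
```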
[
"Table TABREF1 compares the predictions of original LAMA and negated LAMA. As true answers of the negated statements are highly ambiguous, our measures are spearman rank correlation and overlap in rank 1 predictions between the original and negated dataset. Table TABREF4 gives examples of BERT-large predictions.",
"We observe rank correlations of more than 0.85 in most cases and a high overlap in first ranked predictions like: “Birds can fly.” and “Birds cannot fly.”. BERT has slightly better results than the other models.",
"Our interpretation of the results is that BERT mostly did not learn the meaning of negation. The impressive results in QA suggest that pretrained language models are able to memorize aspects of specific facts; but, apparently, they ignore negation markers in many cases and rely on the co-occurrence of the subject with the original relation only. One reason for the poor performance we observe probably is that negated statements occur much less in training corpora than positive statements.",
"A key problem is that the LAMA setup does not allow to refrain from giving an answer. Generally, prediction probabilities drop in the negated statements, which would suggest the existence of a threshold to filter answers. But a closer look at the probabilities of correct and incorrect predictions shows that they fall into the same range. No common threshold can be found.",
"Given that negation has little effect on most queries, it is interesting to look at the small number of queries where pretrained language models make correct predictions, i.e., they solve the cloze task as a human subject would do. We give two examples of such patterns. The pattern “X did not die in Y” always results in the generic top ranked predictions: “battle”, “office”, “prison” whereas the original pattern is likely to rank cities first. This seems appropriate since a statement of the form, say, “X did not die in New York” is rare in text corpora, but statements characterizing the situation in which the death occurred (“he did not die in prison”) sound more natural. For the template “X was born in Y”, cities are predicted. In contrast, for “X was not born in Y”, countries are predicted. Both times it refers to a more specific statement, more likely to occur in the training corpus. People would refer more often to a person being born in a city and not born in a country, giving you in both cases more precise information."
],
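A sketch of the two comparison measures, assuming that for every query we have the model's scores over the same candidate vocabulary for the original and the negated statement (array names are illustrative).

```python
# Sketch: Spearman rank correlation between original and negated score vectors,
# plus overlap of the rank-1 predictions, aggregated over a set of queries.
import numpy as np
from scipy.stats import spearmanr

def compare_queries(scores_original, scores_negated):
    rho, _ = spearmanr(scores_original, scores_negated)
    same_top1 = int(np.argmax(scores_original) == np.argmax(scores_negated))
    return rho, same_top1

def aggregate(pairs):
    """pairs: iterable of (scores_original, scores_negated) arrays for a relation's queries."""
    rhos, overlaps = zip(*(compare_queries(o, n) for o, n in pairs))
    return float(np.mean(rhos)), 100.0 * float(np.mean(overlaps))  # mean rho, % overlap
```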
[
"Pretrained embeddings have pushed baselines on a variety of question answering datasets BIBREF8, BIBREF9. Generally, the pretrained models are fine-tuned to the specific task BIBREF10, BIBREF3 but recent work has applied the models without the fine-tuning step BIBREF11, BIBREF0.",
"There is a wide range of literature analyzing linguistic knowledge stored in pretrained embeddings BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19.",
"Concerning negation the following papers are of interest: BIBREF20 analyze the grammatical knowledge captured by BERT. In a case study, they test for correct licensing environments for negative polarity items. They study a set of classifiers distinguishing between grammatically correct and incorrect sentences. We take a different approach by focusing on factual knowledge stored in negated statements. Grammatically correct statements can still be factually false (e.g., “General relativity, Newton develop”).",
"BIBREF21 investigate the understanding of function words – among them negation particles – using an entailment- and classification-based approach. They analyze the ability of different model architectures and training objectives to capture knowledge of single sentences. The models are fine-tuned to the task of interest. We on the other hand question to what extend factual knowledge present in negated statements is indirectly acquired during pretraining.",
"BIBREF22 defines three psycholinguistic diagnostics for language models and applies them in a case study to BERT. Negation is examined using a dataset of 72 simple sentences querying for category membership. A supplementary dataset of 16 sentences queries again for the \"to be\" relation only but including more natural sentence structure. Our work covers 51,329 negated statements covering a wide range of topics and relations. In the SQuAD based dataset we cover more natural language in terms of context and relation. In contrast to BIBREF22, we do not see a reliable preference of the true completions to false in the more natural negated statements.",
"BIBREF23 test for comprehension of minimally modified statements in an adversarial setup while trying to keep the overall semantics the same. We try to maximize the change in semantics and invert meaning."
],
[
"We show that pretrained language models have problems handling negation. Output predictions for the original LAMA query and the negated statement are highly correlated.",
"Even though this elegant approach of querying a language model without fine-tuning allows for truly open domain question answering, promoting an answer no matter what is not always the better solution. Refraining from giving an answer can be more appropriate, making knowledge graphs currently a more reliable choice for question answering."
]
],
"section_name": [
"Introduction",
"Data",
"Models",
"Results",
"Related Work",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"54a8a5e97bb25bd9edeb98798a2d931a0eec308e",
"6d7363024b880b83e5d4071cc05847f606317738"
],
"answer": [
{
"evidence": [
"This work analyzes the understanding of pretrained language models of factual and commonsense knowledge stored in negated statements. To this end, we introduce the negated LAMA dataset. We construct it by simply inserting negation elements (e.g., “not”) in LAMA cloze statement (e.g., “The theory of relativity was not developed by [MASK].”). In our experiments, we query the pretrained language models with both original LAMA and negated LAMA statements and compare their predictions in terms of rank correlation and overlap of top predictions. We find that the predicted filler words often have high overlap. Thus, negating a cloze statement does not change the predictions in many cases – but of course it should as our example “birds can fly” vs. “birds cannot fly” shows. We identify and analyze a subset of cloze statements where predictions are different. We find that BERT handles negation best among pretrained language models, but it still fails badly on most negated statements."
],
"extractive_spans": [
"To this end, we introduce the negated LAMA dataset. We construct it by simply inserting negation elements (e.g., “not”) in LAMA cloze statement"
],
"free_form_answer": "",
"highlighted_evidence": [
"This work analyzes the understanding of pretrained language models of factual and commonsense knowledge stored in negated statements. To this end, we introduce the negated LAMA dataset. We construct it by simply inserting negation elements (e.g., “not”) in LAMA cloze statement (e.g., “The theory of relativity was not developed by [MASK].”)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"This work analyzes the understanding of pretrained language models of factual and commonsense knowledge stored in negated statements. To this end, we introduce the negated LAMA dataset. We construct it by simply inserting negation elements (e.g., “not”) in LAMA cloze statement (e.g., “The theory of relativity was not developed by [MASK].”). In our experiments, we query the pretrained language models with both original LAMA and negated LAMA statements and compare their predictions in terms of rank correlation and overlap of top predictions. We find that the predicted filler words often have high overlap. Thus, negating a cloze statement does not change the predictions in many cases – but of course it should as our example “birds can fly” vs. “birds cannot fly” shows. We identify and analyze a subset of cloze statements where predictions are different. We find that BERT handles negation best among pretrained language models, but it still fails badly on most negated statements."
],
"extractive_spans": [],
"free_form_answer": "Create the negated LAMA dataset and query the pretrained language models with both original LAMA and negated LAMA statements and compare their predictions.",
"highlighted_evidence": [
". To this end, we introduce the negated LAMA dataset. We construct it by simply inserting negation elements (e.g., “not”) in LAMA cloze statement (e.g., “The theory of relativity was not developed by [MASK].”). In our experiments, we query the pretrained language models with both original LAMA and negated LAMA statements and compare their predictions in terms of rank correlation and overlap of top predictions."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"zero"
],
"paper_read": [
"no"
],
"question": [
"How did they extend LAMA evaluation framework to focus on negation?"
],
"question_id": [
"78292bc57ee68fdb93ed45430d80acca25a9e916"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
""
],
"topic_background": [
"familiar"
]
} | {
"caption": [
"Table 1: Mean spearman rank correlation and the mean percentage of overlap in first ranked predictions between the original cloze template queries and the negated statement for Transformer-XL large (Txl), ELMo original (Eb), ELMo 5.5B (E5B), BERT-base (Bb) and BERT-large (Bl).",
"Table 2: Examples of generation for BERT-large for (A) Google-RE, (B) T-REx, (C) ConceptNet, (D) SQuAD. The last column reports the top three tokens generated together with the associated log probability (in brackets)."
],
"file": [
"2-Table1-1.png",
"3-Table2-1.png"
]
} | [
"How did they extend LAMA evaluation framework to focus on negation?"
] | [
[
"1911.03343-Introduction-2"
]
] | [
"Create the negated LAMA dataset and query the pretrained language models with both original LAMA and negated LAMA statements and compare their predictions."
] | 69 |
1712.00991 | Mining Supervisor Evaluation and Peer Feedback in Performance Appraisals | Performance appraisal (PA) is an important HR process to periodically measure and evaluate every employee's performance vis-a-vis the goals established by the organization. A PA process involves purposeful multi-step multi-modal communication between employees, their supervisors and their peers, such as self-appraisal, supervisor assessment and peer feedback. Analysis of the structured data and text produced in PA is crucial for measuring the quality of appraisals and tracking actual improvements. In this paper, we apply text mining techniques to produce insights from PA text. First, we perform sentence classification to identify strengths, weaknesses and suggestions of improvements found in the supervisor assessments and then use clustering to discover broad categories among them. Next we use multi-class multi-label classification techniques to match supervisor assessments to predefined broad perspectives on performance. Finally, we propose a short-text summarization technique to produce a summary of peer feedback comments for a given employee and compare it with manual summaries. All techniques are illustrated using a real-life dataset of supervisor assessment and peer feedback text produced during the PA of 4528 employees in a large multi-national IT company. | {
"paragraphs": [
[
"Performance appraisal (PA) is an important HR process, particularly for modern organizations that crucially depend on the skills and expertise of their workforce. The PA process enables an organization to periodically measure and evaluate every employee's performance. It also provides a mechanism to link the goals established by the organization to its each employee's day-to-day activities and performance. Design and analysis of PA processes is a lively area of research within the HR community BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 .",
"The PA process in any modern organization is nowadays implemented and tracked through an IT system (the PA system) that records the interactions that happen in various steps. Availability of this data in a computer-readable database opens up opportunities to analyze it using automated statistical, data-mining and text-mining techniques, to generate novel and actionable insights / patterns and to help in improving the quality and effectiveness of the PA process BIBREF4 , BIBREF5 , BIBREF6 . Automated analysis of large-scale PA data is now facilitated by technological and algorithmic advances, and is becoming essential for large organizations containing thousands of geographically distributed employees handling a wide variety of roles and tasks.",
"A typical PA process involves purposeful multi-step multi-modal communication between employees, their supervisors and their peers. In most PA processes, the communication includes the following steps: (i) in self-appraisal, an employee records his/her achievements, activities, tasks handled etc.; (ii) in supervisor assessment, the supervisor provides the criticism, evaluation and suggestions for improvement of performance etc.; and (iii) in peer feedback (aka INLINEFORM0 view), the peers of the employee provide their feedback. There are several business questions that managers are interested in. Examples:",
"In this paper, we develop text mining techniques that can automatically produce answers to these questions. Since the intended users are HR executives, ideally, the techniques should work with minimum training data and experimentation with parameter setting. These techniques have been implemented and are being used in a PA system in a large multi-national IT company.",
"The rest of the paper is organized as follows. Section SECREF2 summarizes related work. Section SECREF3 summarizes the PA dataset used in this paper. Section SECREF4 applies sentence classification algorithms to automatically discover three important classes of sentences in the PA corpus viz., sentences that discuss strengths, weaknesses of employees and contain suggestions for improving her performance. Section SECREF5 considers the problem of mapping the actual targets mentioned in strengths, weaknesses and suggestions to a fixed set of attributes. In Section SECREF6 , we discuss how the feedback from peers for a particular employee can be summarized. In Section SECREF7 we draw conclusions and identify some further work."
],
[
"We first review some work related to sentence classification. Semantically classifying sentences (based on the sentence's purpose) is a much harder task, and is gaining increasing attention from linguists and NLP researchers. McKnight and Srinivasan BIBREF7 and Yamamoto and Takagi BIBREF8 used SVM to classify sentences in biomedical abstracts into classes such as INTRODUCTION, BACKGROUND, PURPOSE, METHOD, RESULT, CONCLUSION. Cohen et al. BIBREF9 applied SVM and other techniques to learn classifiers for sentences in emails into classes, which are speech acts defined by a verb-noun pair, with verbs such as request, propose, amend, commit, deliver and nouns such as meeting, document, committee; see also BIBREF10 . Khoo et al. BIBREF11 uses various classifiers to classify sentences in emails into classes such as APOLOGY, INSTRUCTION, QUESTION, REQUEST, SALUTATION, STATEMENT, SUGGESTION, THANKING etc. Qadir and Riloff BIBREF12 proposes several filters and classifiers to classify sentences on message boards (community QA systems) into 4 speech acts: COMMISSIVE (speaker commits to a future action), DIRECTIVE (speaker expects listener to take some action), EXPRESSIVE (speaker expresses his or her psychological state to the listener), REPRESENTATIVE (represents the speaker's belief of something). Hachey and Grover BIBREF13 used SVM and maximum entropy classifiers to classify sentences in legal documents into classes such as FACT, PROCEEDINGS, BACKGROUND, FRAMING, DISPOSAL; see also BIBREF14 . Deshpande et al. BIBREF15 proposes unsupervised linguistic patterns to classify sentences into classes SUGGESTION, COMPLAINT.",
"There is much work on a closely related problem viz., classifying sentences in dialogues through dialogue-specific categories called dialogue acts BIBREF16 , which we will not review here. Just as one example, Cotterill BIBREF17 classifies questions in emails into the dialogue acts of YES_NO_QUESTION, WH_QUESTION, ACTION_REQUEST, RHETORICAL, MULTIPLE_CHOICE etc.",
"We could not find much work related to mining of performance appraisals data. Pawar et al. BIBREF18 uses kernel-based classification to classify sentences in both performance appraisal text and product reviews into classes SUGGESTION, APPRECIATION, COMPLAINT. Apte et al. BIBREF6 provides two algorithms for matching the descriptions of goals or tasks assigned to employees to a standard template of model goals. One algorithm is based on the co-training framework and uses goal descriptions and self-appraisal comments as two separate perspectives. The second approach uses semantic similarity under a weak supervision framework. Ramrakhiyani et al. BIBREF5 proposes label propagation algorithms to discover aspects in supervisor assessments in performance appraisals, where an aspect is modelled as a verb-noun pair (e.g. conduct training, improve coding)."
],
[
"In this paper, we used the supervisor assessment and peer feedback text produced during the performance appraisal of 4528 employees in a large multi-national IT company. The corpus of supervisor assessment has 26972 sentences. The summary statistics about the number of words in a sentence is: min:4 max:217 average:15.5 STDEV:9.2 Q1:9 Q2:14 Q3:19."
],
[
"The PA corpus contains several classes of sentences that are of interest. In this paper, we focus on three important classes of sentences viz., sentences that discuss strengths (class STRENGTH), weaknesses of employees (class WEAKNESS) and suggestions for improving her performance (class SUGGESTION). The strengths or weaknesses are mostly about the performance in work carried out, but sometimes they can be about the working style or other personal qualities. The classes WEAKNESS and SUGGESTION are somewhat overlapping; e.g., a suggestion may address a perceived weakness. Following are two example sentences in each class.",
"STRENGTH:",
"WEAKNESS:",
"SUGGESTION:",
"Several linguistic aspects of these classes of sentences are apparent. The subject is implicit in many sentences. The strengths are often mentioned as either noun phrases (NP) with positive adjectives (Excellent technology leadership) or positive nouns (engineering strength) or through verbs with positive polarity (dedicated) or as verb phrases containing positive adjectives (delivers innovative solutions). Similarly for weaknesses, where negation is more frequently used (presentations are not his forte), or alternatively, the polarities of verbs (avoid) or adjectives (poor) tend to be negative. However, sometimes the form of both the strengths and weaknesses is the same, typically a stand-alone sentiment-neutral NP, making it difficult to distinguish between them; e.g., adherence to timing or timely closure. Suggestions often have an imperative mood and contain secondary verbs such as need to, should, has to. Suggestions are sometimes expressed using comparatives (better process compliance). We built a simple set of patterns for each of the 3 classes on the POS-tagged form of the sentences. We use each set of these patterns as an unsupervised sentence classifier for that class. If a particular sentence matched with patterns for multiple classes, then we have simple tie-breaking rules for picking the final class. The pattern for the STRENGTH class looks for the presence of positive words / phrases like takes ownership, excellent, hard working, commitment, etc. Similarly, the pattern for the WEAKNESS class looks for the presence of negative words / phrases like lacking, diffident, slow learner, less focused, etc. The SUGGESTION pattern not only looks for keywords like should, needs to but also for POS based pattern like “a verb in the base form (VB) in the beginning of a sentence”.",
"We randomly selected 2000 sentences from the supervisor assessment corpus and manually tagged them (dataset D1). This labelled dataset contained 705, 103, 822 and 370 sentences having the class labels STRENGTH, WEAKNESS, SUGGESTION or OTHER respectively. We trained several multi-class classifiers on this dataset. Table TABREF10 shows the results of 5-fold cross-validation experiments on dataset D1. For the first 5 classifiers, we used their implementation from the SciKit Learn library in Python (scikit-learn.org). The features used for these classifiers were simply the sentence words along with their frequencies. For the last 2 classifiers (in Table TABREF10 ), we used our own implementation. The overall accuracy for a classifier is defined as INLINEFORM0 , where the denominator is 2000 for dataset D1. Note that the pattern-based approach is unsupervised i.e., it did not use any training data. Hence, the results shown for it are for the entire dataset and not based on cross-validation."
],
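A minimal sketch of the supervised part of this experiment with scikit-learn, using bag-of-words counts and 5-fold cross-validation; the pattern-based classifier and the SVM with ADWS kernel reported in Table TABREF10 are not reproduced here.

```python
# Sketch: sentence classification into STRENGTH / WEAKNESS / SUGGESTION / OTHER
# with word-count features and 5-fold cross-validation on dataset D1.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

def evaluate(sentences, labels):
    models = {
        "LogisticRegression": LogisticRegression(max_iter=1000),
        "MultinomialNB": MultinomialNB(),
        "LinearSVC": LinearSVC(),
    }
    for name, clf in models.items():
        pipe = make_pipeline(CountVectorizer(), clf)
        scores = cross_val_score(pipe, sentences, labels, cv=5, scoring="accuracy")
        print(f"{name}: {scores.mean():.3f} (+/- {scores.std():.3f})")
```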
[
"We also explored whether a sentiment analyzer can be used as a baseline for identifying the class labels STRENGTH and WEAKNESS. We used an implementation of sentiment analyzer from TextBlob to get a polarity score for each sentence. Table TABREF13 shows the distribution of positive, negative and neutral sentiments across the 3 class labels STRENGTH, WEAKNESS and SUGGESTION. It can be observed that distribution of positive and negative sentiments is almost similar in STRENGTH as well as SUGGESTION sentences, hence we can conclude that the information about sentiments is not much useful for our classification problem."
],
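The sentiment baseline can be reproduced roughly as below; treating a polarity of exactly 0 as neutral is an assumption.

```python
# Sketch: TextBlob polarity per sentence, mapped to positive / negative / neutral.
from textblob import TextBlob

def sentiment_label(sentence):
    polarity = TextBlob(sentence).sentiment.polarity
    if polarity > 0:
        return "positive"
    if polarity < 0:
        return "negative"
    return "neutral"
```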
[
"After identifying sentences in each class, we can now answer question (1) in Section SECREF1 . From 12742 sentences predicted to have label STRENGTH, we extract nouns that indicate the actual strength, and cluster them using a simple clustering algorithm which uses the cosine similarity between word embeddings of these nouns. We repeat this for the 9160 sentences with predicted label WEAKNESS or SUGGESTION as a single class. Tables TABREF15 and TABREF16 show a few representative clusters in strengths and in weaknesses, respectively. We also explored clustering 12742 STRENGTH sentences directly using CLUTO BIBREF19 and Carrot2 Lingo BIBREF20 clustering algorithms. Carrot2 Lingo discovered 167 clusters and also assigned labels to these clusters. We then generated 167 clusters using CLUTO as well. CLUTO does not generate cluster labels automatically, hence we used 5 most frequent words within the cluster as its labels. Table TABREF19 shows the largest 5 clusters by both the algorithms. It was observed that the clusters created by CLUTO were more meaningful and informative as compared to those by Carrot2 Lingo. Also, it was observed that there is some correspondence between noun clusters and sentence clusters. E.g. the nouns cluster motivation expertise knowledge talent skill (Table TABREF15 ) corresponds to the CLUTO sentence cluster skill customer management knowledge team (Table TABREF19 ). But overall, users found the nouns clusters to be more meaningful than the sentence clusters."
],
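A sketch of the simple embedding-based noun clustering mentioned above; the greedy assignment rule, the similarity threshold, and the embedding lookup table are assumptions, since the text only states that cosine similarity between word embeddings is used.

```python
# Sketch: greedily group nouns whose embedding is close to a cluster's first member.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def cluster_nouns(nouns, embedding, threshold=0.5):
    """nouns: list of strings; embedding: dict mapping noun -> vector."""
    clusters = []
    for noun in nouns:
        if noun not in embedding:
            continue
        for cluster in clusters:
            if cosine(embedding[noun], embedding[cluster[0]]) >= threshold:
                cluster.append(noun)
                break
        else:
            clusters.append([noun])   # start a new cluster for this noun
    return clusters
```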
[
"In many organizations, PA is done from a predefined set of perspectives, which we call attributes. Each attribute covers one specific aspect of the work done by the employees. This has the advantage that we can easily compare the performance of any two employees (or groups of employees) along any given attribute. We can correlate various performance attributes and find dependencies among them. We can also cluster employees in the workforce using their supervisor ratings for each attribute to discover interesting insights into the workforce. The HR managers in the organization considered in this paper have defined 15 attributes (Table TABREF20 ). Each attribute is essentially a work item or work category described at an abstract level. For example, FUNCTIONAL_EXCELLENCE covers any tasks, goals or activities related to the software engineering life-cycle (e.g., requirements analysis, design, coding, testing etc.) as well as technologies such as databases, web services and GUI.",
"In the example in Section SECREF4 , the first sentence (which has class STRENGTH) can be mapped to two attributes: FUNCTIONAL_EXCELLENCE and BUILDING_EFFECTIVE_TEAMS. Similarly, the third sentence (which has class WEAKNESS) can be mapped to the attribute INTERPERSONAL_EFFECTIVENESS and so forth. Thus, in order to answer the second question in Section SECREF1 , we need to map each sentence in each of the 3 classes to zero, one, two or more attributes, which is a multi-class multi-label classification problem.",
"We manually tagged the same 2000 sentences in Dataset D1 with attributes, where each sentence may get 0, 1, 2, etc. up to 15 class labels (this is dataset D2). This labelled dataset contained 749, 206, 289, 207, 91, 223, 191, 144, 103, 80, 82, 42, 29, 15, 24 sentences having the class labels listed in Table TABREF20 in the same order. The number of sentences having 0, 1, 2, or more than 2 attributes are: 321, 1070, 470 and 139 respectively. We trained several multi-class multi-label classifiers on this dataset. Table TABREF21 shows the results of 5-fold cross-validation experiments on dataset D2.",
"Precision, Recall and F-measure for this multi-label classification are computed using a strategy similar to the one described in BIBREF21 . Let INLINEFORM0 be the set of predicted labels and INLINEFORM1 be the set of actual labels for the INLINEFORM2 instance. Precision and recall for this instance are computed as follows: INLINEFORM3 ",
"It can be observed that INLINEFORM0 would be undefined if INLINEFORM1 is empty and similarly INLINEFORM2 would be undefined when INLINEFORM3 is empty. Hence, overall precision and recall are computed by averaging over all the instances except where they are undefined. Instance-level F-measure can not be computed for instances where either precision or recall are undefined. Therefore, overall F-measure is computed using the overall precision and recall."
],
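A sketch of the multi-label attribute classifier and of the per-instance precision/recall averaging described above, using scikit-learn; the feature choice and the one-vs-rest reduction are assumptions.

```python
# Sketch: map sentences to the 15 attributes (multi-label) and score per-instance P/R,
# skipping instances where a measure is undefined, as described in the text.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline

def train_attribute_model(sentences, label_matrix):
    """label_matrix: (n_sentences, 15) binary indicator array."""
    model = make_pipeline(CountVectorizer(),
                          OneVsRestClassifier(LogisticRegression(max_iter=1000)))
    model.fit(sentences, label_matrix)
    return model

def instance_precision_recall(y_true, y_pred):
    precisions, recalls = [], []
    for actual, predicted in zip(y_true, y_pred):
        inter = np.sum(actual * predicted)
        if predicted.sum() > 0:
            precisions.append(inter / predicted.sum())
        if actual.sum() > 0:
            recalls.append(inter / actual.sum())
    precision, recall = np.mean(precisions), np.mean(recalls)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```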
[
"The PA system includes a set of peer feedback comments for each employee. To answer the third question in Section SECREF1 , we need to create a summary of all the peer feedback comments about a given employee. As an example, following are the feedback comments from 5 peers of an employee.",
"The individual sentences in the comments written by each peer are first identified and then POS tags are assigned to each sentence. We hypothesize that a good summary of these multiple comments can be constructed by identifying a set of important text fragments or phrases. Initially, a set of candidate phrases is extracted from these comments and a subset of these candidate phrases is chosen as the final summary, using Integer Linear Programming (ILP). The details of the ILP formulation are shown in Table TABREF36 . As an example, following is the summary generated for the above 5 peer comments.",
"humble nature, effective communication, technical expertise, always supportive, vast knowledge",
"",
"Following rules are used to identify candidate phrases:",
"Various parameters are used to evaluate a candidate phrase for its importance. A candidate phrase is more important:",
"A complete list of parameters is described in detail in Table TABREF36 .",
"There is a trivial constraint INLINEFORM0 which makes sure that only INLINEFORM1 out of INLINEFORM2 candidate phrases are chosen. A suitable value of INLINEFORM3 is used for each employee depending on number of candidate phrases identified across all peers (see Algorithm SECREF6 ). Another set of constraints ( INLINEFORM4 to INLINEFORM5 ) make sure that at least one phrase is selected for each of the leadership attributes. The constraint INLINEFORM6 makes sure that multiple phrases sharing the same headword are not chosen at a time. Also, single word candidate phrases are chosen only if they are adjectives or nouns with lexical category noun.attribute. This is imposed by the constraint INLINEFORM7 . It is important to note that all the constraints except INLINEFORM8 are soft constraints, i.e. there may be feasible solutions which do not satisfy some of these constraints. But each constraint which is not satisfied, results in a penalty through the use of slack variables. These constraints are described in detail in Table TABREF36 .",
"The objective function maximizes the total importance score of the selected candidate phrases. At the same time, it also minimizes the sum of all slack variables so that the minimum number of constraints are broken.",
" INLINEFORM0 : No. of candidate phrases INLINEFORM1 : No. of phrases to select as part of summary",
" INLINEFORM0 INLINEFORM1 INLINEFORM2 INLINEFORM3 INLINEFORM4 INLINEFORM5 INLINEFORM6 INLINEFORM7 INLINEFORM8 ",
" INLINEFORM0 and INLINEFORM1 INLINEFORM2 INLINEFORM3 INLINEFORM4 INLINEFORM5 INLINEFORM6 ",
" INLINEFORM0 (For determining number of phrases to select to include in summary) "
],
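A much-simplified sketch of the ILP selection step using the PuLP package; only the cardinality constraint and a soft headword constraint are modelled, and the importance scores, penalty weight, and solver choice are assumptions. The full formulation of Table TABREF36 (leadership-attribute coverage, lexical-category constraints, per-constraint slacks) is not reproduced.

```python
# Sketch: choose K of N candidate phrases maximizing total importance, with a hard
# cardinality constraint and a soft "at most one phrase per headword" constraint.
import pulp

def select_phrases(phrases, scores, headwords, k, penalty=10.0):
    n = len(phrases)
    x = [pulp.LpVariable(f"x_{i}", cat="Binary") for i in range(n)]
    prob = pulp.LpProblem("phrase_selection", pulp.LpMaximize)

    # Soft constraint: at most one phrase per headword, violable at a cost via slack.
    slacks = {}
    for h in set(headwords):
        idx = [i for i in range(n) if headwords[i] == h]
        if len(idx) > 1:
            s = pulp.LpVariable(f"slack_{h}", lowBound=0)
            slacks[h] = s
            prob += pulp.lpSum(x[i] for i in idx) <= 1 + s

    # Objective: total importance of selected phrases minus slack penalties.
    prob += pulp.lpSum(scores[i] * x[i] for i in range(n)) \
            - penalty * pulp.lpSum(slacks.values())

    prob += pulp.lpSum(x) == k          # hard constraint: exactly K phrases
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [phrases[i] for i in range(n) if x[i].value() > 0.5]
```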
[
"We considered a dataset of 100 employees, where for each employee multiple peer comments were recorded. Also, for each employee, a manual summary was generated by an HR personnel. The summaries generated by our ILP-based approach were compared with the corresponding manual summaries using the ROUGE BIBREF22 unigram score. For comparing performance of our ILP-based summarization algorithm, we explored a few summarization algorithms provided by the Sumy package. A common parameter which is required by all these algorithms is number of sentences keep in the final summary. ILP-based summarization requires a similar parameter K, which is automatically decided based on number of total candidate phrases. Assuming a sentence is equivalent to roughly 3 phrases, for Sumy algorithms, we set number of sentences parameter to the ceiling of K/3. Table TABREF51 shows average and standard deviation of ROUGE unigram f1 scores for each algorithm, over the 100 summaries. The performance of ILP-based summarization is comparable with the other algorithms, as the two sample t-test does not show statistically significant difference. Also, human evaluators preferred phrase-based summary generated by our approach to the other sentence-based summaries."
],
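For reference, the ROUGE-1 unigram F1 used for this comparison can be computed as in the sketch below; simple whitespace tokenization and lowercasing are assumptions, and published ROUGE implementations differ in preprocessing.

```python
# Sketch: ROUGE-1 unigram precision/recall/F1 between a system and a reference summary.
from collections import Counter

def rouge1_f1(system_summary, reference_summary):
    sys_counts = Counter(system_summary.lower().split())
    ref_counts = Counter(reference_summary.lower().split())
    overlap = sum((sys_counts & ref_counts).values())   # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(sys_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)
```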
[
"In this paper, we presented an analysis of the text generated in Performance Appraisal (PA) process in a large multi-national IT company. We performed sentence classification to identify strengths, weaknesses and suggestions for improvements found in the supervisor assessments and then used clustering to discover broad categories among them. As this is non-topical classification, we found that SVM with ADWS kernel BIBREF18 produced the best results. We also used multi-class multi-label classification techniques to match supervisor assessments to predefined broad perspectives on performance. Logistic Regression classifier was observed to produce the best results for this topical classification. Finally, we proposed an ILP-based summarization technique to produce a summary of peer feedback comments for a given employee and compared it with manual summaries.",
"The PA process also generates much structured data, such as supervisor ratings. It is an interesting problem to compare and combine the insights from discovered from structured data and unstructured text. Also, we are planning to automatically discover any additional performance attributes to the list of 15 attributes currently used by HR."
]
],
"section_name": [
"Introduction",
"Related Work",
"Dataset",
"Sentence Classification",
"Comparison with Sentiment Analyzer",
"Discovering Clusters within Sentence Classes",
"PA along Attributes",
"Summarization of Peer Feedback using ILP",
"Evaluation of auto-generated summaries",
"Conclusions and Further Work"
]
} | {
"answers": [
{
"annotation_id": [
"7f6a815f8f51511a931d8e13ead1e40ad721feb2",
"c30b0ffa5e6fd198ba935ab3bac74c0b9947ccdf"
],
"answer": [
{
"evidence": [
"We considered a dataset of 100 employees, where for each employee multiple peer comments were recorded. Also, for each employee, a manual summary was generated by an HR personnel. The summaries generated by our ILP-based approach were compared with the corresponding manual summaries using the ROUGE BIBREF22 unigram score. For comparing performance of our ILP-based summarization algorithm, we explored a few summarization algorithms provided by the Sumy package. A common parameter which is required by all these algorithms is number of sentences keep in the final summary. ILP-based summarization requires a similar parameter K, which is automatically decided based on number of total candidate phrases. Assuming a sentence is equivalent to roughly 3 phrases, for Sumy algorithms, we set number of sentences parameter to the ceiling of K/3. Table TABREF51 shows average and standard deviation of ROUGE unigram f1 scores for each algorithm, over the 100 summaries. The performance of ILP-based summarization is comparable with the other algorithms, as the two sample t-test does not show statistically significant difference. Also, human evaluators preferred phrase-based summary generated by our approach to the other sentence-based summaries.",
"FLOAT SELECTED: Table 9. Comparative performance of various summarization algorithms"
],
"extractive_spans": [],
"free_form_answer": "LSA, TextRank, LexRank and ILP-based summary.",
"highlighted_evidence": [
"Table TABREF51 shows average and standard deviation of ROUGE unigram f1 scores for each algorithm, over the 100 summaries.",
"For comparing performance of our ILP-based summarization algorithm, we explored a few summarization algorithms provided by the Sumy package.",
"FLOAT SELECTED: Table 9. Comparative performance of various summarization algorithms"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 9. Comparative performance of various summarization algorithms",
"We considered a dataset of 100 employees, where for each employee multiple peer comments were recorded. Also, for each employee, a manual summary was generated by an HR personnel. The summaries generated by our ILP-based approach were compared with the corresponding manual summaries using the ROUGE BIBREF22 unigram score. For comparing performance of our ILP-based summarization algorithm, we explored a few summarization algorithms provided by the Sumy package. A common parameter which is required by all these algorithms is number of sentences keep in the final summary. ILP-based summarization requires a similar parameter K, which is automatically decided based on number of total candidate phrases. Assuming a sentence is equivalent to roughly 3 phrases, for Sumy algorithms, we set number of sentences parameter to the ceiling of K/3. Table TABREF51 shows average and standard deviation of ROUGE unigram f1 scores for each algorithm, over the 100 summaries. The performance of ILP-based summarization is comparable with the other algorithms, as the two sample t-test does not show statistically significant difference. Also, human evaluators preferred phrase-based summary generated by our approach to the other sentence-based summaries."
],
"extractive_spans": [],
"free_form_answer": "LSA, TextRank, LexRank",
"highlighted_evidence": [
"FLOAT SELECTED: Table 9. Comparative performance of various summarization algorithms",
"For comparing performance of our ILP-based summarization algorithm, we explored a few summarization algorithms provided by the Sumy package. ",
"Table TABREF51 shows average and standard deviation of ROUGE unigram f1 scores for each algorithm, over the 100 summaries. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"9261c75cdbab1716a6a65ead7840a17cc99879d6",
"f5f231bf7d3b1e4faaea26a0d6ad784a18034946"
],
"answer": [
{
"evidence": [
"We considered a dataset of 100 employees, where for each employee multiple peer comments were recorded. Also, for each employee, a manual summary was generated by an HR personnel. The summaries generated by our ILP-based approach were compared with the corresponding manual summaries using the ROUGE BIBREF22 unigram score. For comparing performance of our ILP-based summarization algorithm, we explored a few summarization algorithms provided by the Sumy package. A common parameter which is required by all these algorithms is number of sentences keep in the final summary. ILP-based summarization requires a similar parameter K, which is automatically decided based on number of total candidate phrases. Assuming a sentence is equivalent to roughly 3 phrases, for Sumy algorithms, we set number of sentences parameter to the ceiling of K/3. Table TABREF51 shows average and standard deviation of ROUGE unigram f1 scores for each algorithm, over the 100 summaries. The performance of ILP-based summarization is comparable with the other algorithms, as the two sample t-test does not show statistically significant difference. Also, human evaluators preferred phrase-based summary generated by our approach to the other sentence-based summaries."
],
"extractive_spans": [
"ROUGE BIBREF22 unigram score"
],
"free_form_answer": "",
"highlighted_evidence": [
"The summaries generated by our ILP-based approach were compared with the corresponding manual summaries using the ROUGE BIBREF22 unigram score."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We considered a dataset of 100 employees, where for each employee multiple peer comments were recorded. Also, for each employee, a manual summary was generated by an HR personnel. The summaries generated by our ILP-based approach were compared with the corresponding manual summaries using the ROUGE BIBREF22 unigram score. For comparing performance of our ILP-based summarization algorithm, we explored a few summarization algorithms provided by the Sumy package. A common parameter which is required by all these algorithms is number of sentences keep in the final summary. ILP-based summarization requires a similar parameter K, which is automatically decided based on number of total candidate phrases. Assuming a sentence is equivalent to roughly 3 phrases, for Sumy algorithms, we set number of sentences parameter to the ceiling of K/3. Table TABREF51 shows average and standard deviation of ROUGE unigram f1 scores for each algorithm, over the 100 summaries. The performance of ILP-based summarization is comparable with the other algorithms, as the two sample t-test does not show statistically significant difference. Also, human evaluators preferred phrase-based summary generated by our approach to the other sentence-based summaries."
],
"extractive_spans": [
"ROUGE"
],
"free_form_answer": "",
"highlighted_evidence": [
"The summaries generated by our ILP-based approach were compared with the corresponding manual summaries using the ROUGE BIBREF22 unigram score. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"a486d70407d80d066c232860f6829f89ff1fe0c7",
"d7c11d33c7ed6e527a4a05fd1e524432ebf80096"
],
"answer": [
{
"evidence": [
"After identifying sentences in each class, we can now answer question (1) in Section SECREF1 . From 12742 sentences predicted to have label STRENGTH, we extract nouns that indicate the actual strength, and cluster them using a simple clustering algorithm which uses the cosine similarity between word embeddings of these nouns. We repeat this for the 9160 sentences with predicted label WEAKNESS or SUGGESTION as a single class. Tables TABREF15 and TABREF16 show a few representative clusters in strengths and in weaknesses, respectively. We also explored clustering 12742 STRENGTH sentences directly using CLUTO BIBREF19 and Carrot2 Lingo BIBREF20 clustering algorithms. Carrot2 Lingo discovered 167 clusters and also assigned labels to these clusters. We then generated 167 clusters using CLUTO as well. CLUTO does not generate cluster labels automatically, hence we used 5 most frequent words within the cluster as its labels. Table TABREF19 shows the largest 5 clusters by both the algorithms. It was observed that the clusters created by CLUTO were more meaningful and informative as compared to those by Carrot2 Lingo. Also, it was observed that there is some correspondence between noun clusters and sentence clusters. E.g. the nouns cluster motivation expertise knowledge talent skill (Table TABREF15 ) corresponds to the CLUTO sentence cluster skill customer management knowledge team (Table TABREF19 ). But overall, users found the nouns clusters to be more meaningful than the sentence clusters."
],
"extractive_spans": [
"CLUTO",
"Carrot2 Lingo"
],
"free_form_answer": "",
"highlighted_evidence": [
"We also explored clustering 12742 STRENGTH sentences directly using CLUTO BIBREF19 and Carrot2 Lingo BIBREF20 clustering algorithms. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"After identifying sentences in each class, we can now answer question (1) in Section SECREF1 . From 12742 sentences predicted to have label STRENGTH, we extract nouns that indicate the actual strength, and cluster them using a simple clustering algorithm which uses the cosine similarity between word embeddings of these nouns. We repeat this for the 9160 sentences with predicted label WEAKNESS or SUGGESTION as a single class. Tables TABREF15 and TABREF16 show a few representative clusters in strengths and in weaknesses, respectively. We also explored clustering 12742 STRENGTH sentences directly using CLUTO BIBREF19 and Carrot2 Lingo BIBREF20 clustering algorithms. Carrot2 Lingo discovered 167 clusters and also assigned labels to these clusters. We then generated 167 clusters using CLUTO as well. CLUTO does not generate cluster labels automatically, hence we used 5 most frequent words within the cluster as its labels. Table TABREF19 shows the largest 5 clusters by both the algorithms. It was observed that the clusters created by CLUTO were more meaningful and informative as compared to those by Carrot2 Lingo. Also, it was observed that there is some correspondence between noun clusters and sentence clusters. E.g. the nouns cluster motivation expertise knowledge talent skill (Table TABREF15 ) corresponds to the CLUTO sentence cluster skill customer management knowledge team (Table TABREF19 ). But overall, users found the nouns clusters to be more meaningful than the sentence clusters."
],
"extractive_spans": [
"simple clustering algorithm which uses the cosine similarity between word embeddings"
],
"free_form_answer": "",
"highlighted_evidence": [
"From 12742 sentences predicted to have label STRENGTH, we extract nouns that indicate the actual strength, and cluster them using a simple clustering algorithm which uses the cosine similarity between word embeddings of these nouns."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"5d56998c502970dd27438187fdb3fc27342e315c",
"9853f46bd162894e48724a416244a5606694ede6"
],
"answer": [
{
"evidence": [
"Precision, Recall and F-measure for this multi-label classification are computed using a strategy similar to the one described in BIBREF21 . Let INLINEFORM0 be the set of predicted labels and INLINEFORM1 be the set of actual labels for the INLINEFORM2 instance. Precision and recall for this instance are computed as follows: INLINEFORM3",
"We randomly selected 2000 sentences from the supervisor assessment corpus and manually tagged them (dataset D1). This labelled dataset contained 705, 103, 822 and 370 sentences having the class labels STRENGTH, WEAKNESS, SUGGESTION or OTHER respectively. We trained several multi-class classifiers on this dataset. Table TABREF10 shows the results of 5-fold cross-validation experiments on dataset D1. For the first 5 classifiers, we used their implementation from the SciKit Learn library in Python (scikit-learn.org). The features used for these classifiers were simply the sentence words along with their frequencies. For the last 2 classifiers (in Table TABREF10 ), we used our own implementation. The overall accuracy for a classifier is defined as INLINEFORM0 , where the denominator is 2000 for dataset D1. Note that the pattern-based approach is unsupervised i.e., it did not use any training data. Hence, the results shown for it are for the entire dataset and not based on cross-validation.",
"FLOAT SELECTED: Table 1. Results of 5-fold cross validation for sentence classification on dataset D1.",
"FLOAT SELECTED: Table 7. Results of 5-fold cross validation for multi-class multi-label classification on dataset D2."
],
"extractive_spans": [
"Precision",
"Recall",
"F-measure",
"accuracy"
],
"free_form_answer": "",
"highlighted_evidence": [
"Precision, Recall and F-measure for this multi-label classification are computed using a strategy similar to the one described in BIBREF21 . ",
"The overall accuracy for a classifier is defined as INLINEFORM0 , where the denominator is 2000 for dataset D1. ",
"FLOAT SELECTED: Table 1. Results of 5-fold cross validation for sentence classification on dataset D1.",
"FLOAT SELECTED: Table 7. Results of 5-fold cross validation for multi-class multi-label classification on dataset D2."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Precision, Recall and F-measure for this multi-label classification are computed using a strategy similar to the one described in BIBREF21 . Let INLINEFORM0 be the set of predicted labels and INLINEFORM1 be the set of actual labels for the INLINEFORM2 instance. Precision and recall for this instance are computed as follows: INLINEFORM3"
],
"extractive_spans": [
"Precision, Recall and F-measure"
],
"free_form_answer": "",
"highlighted_evidence": [
"Precision, Recall and F-measure for this multi-label classification are computed using a strategy similar to the one described in BIBREF21 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"54e8e9d76e2d4773288b7866ef1e63d2e9fc4806",
"7e9284c41621be71672ac4b9adfa51bf8e9a6e78"
],
"answer": [
{
"evidence": [
"We randomly selected 2000 sentences from the supervisor assessment corpus and manually tagged them (dataset D1). This labelled dataset contained 705, 103, 822 and 370 sentences having the class labels STRENGTH, WEAKNESS, SUGGESTION or OTHER respectively. We trained several multi-class classifiers on this dataset. Table TABREF10 shows the results of 5-fold cross-validation experiments on dataset D1. For the first 5 classifiers, we used their implementation from the SciKit Learn library in Python (scikit-learn.org). The features used for these classifiers were simply the sentence words along with their frequencies. For the last 2 classifiers (in Table TABREF10 ), we used our own implementation. The overall accuracy for a classifier is defined as INLINEFORM0 , where the denominator is 2000 for dataset D1. Note that the pattern-based approach is unsupervised i.e., it did not use any training data. Hence, the results shown for it are for the entire dataset and not based on cross-validation.",
"FLOAT SELECTED: Table 1. Results of 5-fold cross validation for sentence classification on dataset D1."
],
"extractive_spans": [],
"free_form_answer": "Logistic Regression, Multinomial Naive Bayes, Random Forest, AdaBoost, Linear SVM, SVM with ADWSK and Pattern-based",
"highlighted_evidence": [
"Table TABREF10 shows the results of 5-fold cross-validation experiments on dataset D1. For the first 5 classifiers, we used their implementation from the SciKit Learn library in Python (scikit-learn.org). The features used for these classifiers were simply the sentence words along with their frequencies. For the last 2 classifiers (in Table TABREF10 ), we used our own implementation.",
"FLOAT SELECTED: Table 1. Results of 5-fold cross validation for sentence classification on dataset D1."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 1. Results of 5-fold cross validation for sentence classification on dataset D1.",
"FLOAT SELECTED: Table 7. Results of 5-fold cross validation for multi-class multi-label classification on dataset D2.",
"We manually tagged the same 2000 sentences in Dataset D1 with attributes, where each sentence may get 0, 1, 2, etc. up to 15 class labels (this is dataset D2). This labelled dataset contained 749, 206, 289, 207, 91, 223, 191, 144, 103, 80, 82, 42, 29, 15, 24 sentences having the class labels listed in Table TABREF20 in the same order. The number of sentences having 0, 1, 2, or more than 2 attributes are: 321, 1070, 470 and 139 respectively. We trained several multi-class multi-label classifiers on this dataset. Table TABREF21 shows the results of 5-fold cross-validation experiments on dataset D2.",
"We randomly selected 2000 sentences from the supervisor assessment corpus and manually tagged them (dataset D1). This labelled dataset contained 705, 103, 822 and 370 sentences having the class labels STRENGTH, WEAKNESS, SUGGESTION or OTHER respectively. We trained several multi-class classifiers on this dataset. Table TABREF10 shows the results of 5-fold cross-validation experiments on dataset D1. For the first 5 classifiers, we used their implementation from the SciKit Learn library in Python (scikit-learn.org). The features used for these classifiers were simply the sentence words along with their frequencies. For the last 2 classifiers (in Table TABREF10 ), we used our own implementation. The overall accuracy for a classifier is defined as INLINEFORM0 , where the denominator is 2000 for dataset D1. Note that the pattern-based approach is unsupervised i.e., it did not use any training data. Hence, the results shown for it are for the entire dataset and not based on cross-validation."
],
"extractive_spans": [],
"free_form_answer": "Logistic Regression, Multinomial Naive Bayes, Random Forest, AdaBoost, Linear SVM, SVM with ADWSK, Pattern-based approach",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1. Results of 5-fold cross validation for sentence classification on dataset D1.",
"FLOAT SELECTED: Table 7. Results of 5-fold cross validation for multi-class multi-label classification on dataset D2.",
"We trained several multi-class multi-label classifiers on this dataset. Table TABREF21 shows the results of 5-fold cross-validation experiments on dataset D2.",
"We trained several multi-class classifiers on this dataset. Table TABREF10 shows the results of 5-fold cross-validation experiments on dataset D1. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"69712b5d98639fcc4ec25879e3378acafa74b3c6",
"abf56629a5ac0ec3cc7964e80354b8eede0f85b1"
],
"answer": [
{
"evidence": [
"In this paper, we used the supervisor assessment and peer feedback text produced during the performance appraisal of 4528 employees in a large multi-national IT company. The corpus of supervisor assessment has 26972 sentences. The summary statistics about the number of words in a sentence is: min:4 max:217 average:15.5 STDEV:9.2 Q1:9 Q2:14 Q3:19."
],
"extractive_spans": [
"15.5"
],
"free_form_answer": "",
"highlighted_evidence": [
"The corpus of supervisor assessment has 26972 sentences. The summary statistics about the number of words in a sentence is: min:4 max:217 average:15.5 STDEV:9.2 Q1:9 Q2:14 Q3:19."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this paper, we used the supervisor assessment and peer feedback text produced during the performance appraisal of 4528 employees in a large multi-national IT company. The corpus of supervisor assessment has 26972 sentences. The summary statistics about the number of words in a sentence is: min:4 max:217 average:15.5 STDEV:9.2 Q1:9 Q2:14 Q3:19."
],
"extractive_spans": [
"average:15.5"
],
"free_form_answer": "",
"highlighted_evidence": [
"The summary statistics about the number of words in a sentence is: min:4 max:217 average:15.5 STDEV:9.2 Q1:9 Q2:14 Q3:19."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"e1b21d7447853f3b33474c3c7bcfb72119b18f79",
"ef85e2122dc359674fb6bfb7467994012ee76e79"
],
"answer": [
{
"evidence": [
"In this paper, we used the supervisor assessment and peer feedback text produced during the performance appraisal of 4528 employees in a large multi-national IT company. The corpus of supervisor assessment has 26972 sentences. The summary statistics about the number of words in a sentence is: min:4 max:217 average:15.5 STDEV:9.2 Q1:9 Q2:14 Q3:19."
],
"extractive_spans": [
"26972"
],
"free_form_answer": "",
"highlighted_evidence": [
"The corpus of supervisor assessment has 26972 sentences. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this paper, we used the supervisor assessment and peer feedback text produced during the performance appraisal of 4528 employees in a large multi-national IT company. The corpus of supervisor assessment has 26972 sentences. The summary statistics about the number of words in a sentence is: min:4 max:217 average:15.5 STDEV:9.2 Q1:9 Q2:14 Q3:19."
],
"extractive_spans": [
"26972 sentences"
],
"free_form_answer": "",
"highlighted_evidence": [
"The corpus of supervisor assessment has 26972 sentences."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"",
"",
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
"",
"",
""
],
"question": [
"What summarization algorithms did the authors experiment with?",
"What evaluation metrics were used for the summarization task?",
"What clustering algorithms were used?",
"What evaluation metrics are looked at for classification tasks?",
"What methods were used for sentence classification?",
"What is the average length of the sentences?",
"What is the size of the real-life dataset?"
],
"question_id": [
"443d2448136364235389039cbead07e80922ec5c",
"aa6d956c2860f58fc9baea74c353c9d985b05605",
"4c18081ae3b676cc7831403d11bc070c10120f8e",
"fb3d30d59ed49e87f63d3735b876d45c4c6b8939",
"197b276d0610ebfacd57ab46b0b29f3033c96a40",
"e025061e199b121f2ac8f3d9637d9bf987d65cd5",
"61652a3da85196564401d616d251084a25ab4596"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
"",
"",
""
]
} | {
"caption": [
"Table 1. Results of 5-fold cross validation for sentence classification on dataset D1.",
"Table 2. Results of TextBlob sentiment analyzer on the dataset D1",
"Table 5. Largest 5 sentence clusters within 12742 STRENGTH sentences",
"Table 6. Strengths, Weaknesses and Suggestions along Performance Attributes",
"Table 7. Results of 5-fold cross validation for multi-class multi-label classification on dataset D2.",
"Table 9. Comparative performance of various summarization algorithms"
],
"file": [
"5-Table1-1.png",
"6-Table2-1.png",
"7-Table5-1.png",
"8-Table6-1.png",
"8-Table7-1.png",
"12-Table9-1.png"
]
} | [
"What summarization algorithms did the authors experiment with?",
"What methods were used for sentence classification?"
] | [
[
"1712.00991-12-Table9-1.png",
"1712.00991-Evaluation of auto-generated summaries-0"
],
[
"1712.00991-8-Table7-1.png",
"1712.00991-Sentence Classification-5",
"1712.00991-PA along Attributes-2",
"1712.00991-5-Table1-1.png"
]
] | [
"LSA, TextRank, LexRank",
"Logistic Regression, Multinomial Naive Bayes, Random Forest, AdaBoost, Linear SVM, SVM with ADWSK, Pattern-based approach"
] | 70 |
2003.04642 | A Framework for Evaluation of Machine Reading Comprehension Gold Standards | Machine Reading Comprehension (MRC) is the task of answering a question over a paragraph of text. While neural MRC systems gain popularity and achieve noticeable performance, issues are being raised with the methodology used to establish their performance, particularly concerning the data design of gold standards that are used to evaluate them. There is but a limited understanding of the challenges present in this data, which makes it hard to draw comparisons and formulate reliable hypotheses. As a first step towards alleviating the problem, this paper proposes a unifying framework to systematically investigate the present linguistic features, required reasoning and background knowledge and factual correctness on one hand, and the presence of lexical cues as a lower bound for the requirement of understanding on the other hand. We propose a qualitative annotation schema for the first and a set of approximative metrics for the latter. In a first application of the framework, we analyse modern MRC gold standards and present our findings: the absence of features that contribute towards lexical ambiguity, the varying factual correctness of the expected answers and the presence of lexical cues, all of which potentially lower the reading comprehension complexity and quality of the evaluation data. | {
"paragraphs": [
[
"There is a recent spark of interest in the task of Question Answering (QA) over unstructured textual data, also referred to as Machine Reading Comprehension (MRC). This is mostly due to wide-spread success of advances in various facets of deep learning related research, such as novel architectures BIBREF0, BIBREF1 that allow for efficient optimisation of neural networks consisting of multiple layers, hardware designed for deep learning purposes and software frameworks BIBREF2, BIBREF3 that allow efficient development and testing of novel approaches. These factors enable researchers to produce models that are pre-trained on large scale corpora and provide contextualised word representations BIBREF4 that are shown to be a vital component towards solutions for a variety of natural language understanding tasks, including MRC BIBREF5. Another important factor that led to the recent success in MRC-related tasks is the widespread availability of various large datasets, e.g., SQuAD BIBREF6, that provide sufficient examples for optimising statistical models. The combination of these factors yields notable results, even surpassing human performance BIBREF7.",
"MRC is a generic task format that can be used to probe for various natural language understanding capabilities BIBREF8. Therefore it is crucially important to establish a rigorous evaluation methodology in order to be able to draw reliable conclusions from conducted experiments. While increasing effort is put into the evaluation of novel architectures, such as keeping the evaluation data from public access to prevent unintentional overfitting to test data, performing ablation and error studies and introducing novel metrics BIBREF9, surprisingly little is done to establish the quality of the data itself. Additionally, recent research arrived at worrisome findings: the data of those gold standards, which is usually gathered involving a crowd-sourcing step, suffers from flaws in design BIBREF10 or contains overly specific keywords BIBREF11. Furthermore, these gold standards contain “annotation artefacts”, cues that lead models into focusing on superficial aspects of text, such as lexical overlap and word order, instead of actual language understanding BIBREF12, BIBREF13. These weaknesses cast some doubt on whether the data can reliably evaluate the reading comprehension performance of the models they evaluate, i.e. if the models are indeed being assessed for their capability to read.",
"Figure FIGREF3 shows an example from HotpotQA BIBREF14, a dataset that exhibits the last kind of weakness mentioned above, i.e., the presence of unique keywords in both the question and the passage (in close proximity to the expected answer).",
"An evaluation methodology is vital to the fine-grained understanding of challenges associated with a single gold standard, in order to understand in greater detail which capabilities of MRC models it evaluates. More importantly, it allows to draw comparisons between multiple gold standards and between the results of respective state-of-the-art models that are evaluated on them.",
"In this work, we take a step back and propose a framework to systematically analyse MRC evaluation data, typically a set of questions and expected answers to be derived from accompanying passages. Concretely, we introduce a methodology to categorise the linguistic complexity of the textual data and the reasoning and potential external knowledge required to obtain the expected answer. Additionally we propose to take a closer look at the factual correctness of the expected answers, a quality dimension that appears under-explored in literature.",
"We demonstrate the usefulness of the proposed framework by applying it to precisely describe and compare six contemporary MRC datasets. Our findings reveal concerns about their factual correctness, the presence of lexical cues that simplify the task of reading comprehension and the lack of semantic altering grammatical modifiers. We release the raw data comprised of 300 paragraphs, questions and answers richly annotated under the proposed framework as a resource for researchers developing natural language understanding models and datasets to utilise further.",
"To the best of our knowledge this is the first attempt to introduce a common evaluation methodology for MRC gold standards and the first across-the-board qualitative evaluation of MRC datasets with respect to the proposed categories."
],
[
"We define the task of machine reading comprehension, the target application of the proposed methodology as follows: Given a paragraph $P$ that consists of tokens (words) $p_1, \\ldots , p_{n_P}$ and a question $Q$ that consists of tokens $q_1 \\ldots q_{n_Q}$, the goal is to retrieve an answer $A$ with tokens $a_1 \\ldots a_{n_A}$. $A$ is commonly constrained to be one of the following cases BIBREF15, illustrated in Figure FIGREF9:",
"Multiple choice, where the goal is to predict $A$ from a given set of choices $\\mathcal {A}$.",
"Cloze-style, where $S$ is a sentence, and $A$ and $Q$ are obtained by removing a sequence of words such that $Q = S - A$. The task is to fill in the resulting gap in $Q$ with the expected answer $A$ to form $S$.",
"Span, where is a continuous subsequence of tokens from the paragraph ($A \\subseteq P$). Flavours include multiple spans as the correct answer or $A \\subseteq Q$.",
"Free form, where $A$ is an unconstrained natural language string.",
"A gold standard $G$ is composed of $m$ entries $(Q_i, A_i, P_i)_{i\\in \\lbrace 1,\\ldots , m\\rbrace }$.",
"The performance of an approach is established by comparing its answer predictions $A^*_{i}$ on the given input $(Q_i, T_i)$ (and $\\mathcal {A}_i$ for the multiple choice setting) against the expected answer $A_i$ for all $i\\in \\lbrace 1,\\ldots , m\\rbrace $ under a performance metric. Typical performance metrics are exact match (EM) or accuracy, i.e. the percentage of exactly predicted answers, and the F1 score – the harmonic mean between the precision and the recall of the predicted tokens compared to expected answer tokens. The overall F1 score can either be computed by averaging the F1 scores for every instance or by first averaging the precision and recall and then computing the F1 score from those averages (macro F1). Free-text answers, meanwhile, are evaluated by means of text generation and summarisation metrics such as BLEU BIBREF16 or ROUGE-L BIBREF17."
],
[
"In this section we describe a methodology to categorise gold standards according to linguistic complexity, required reasoning and background knowledge, and their factual correctness. Specifically, we use those dimensions as high-level categories of a qualitative annotation schema for annotating question, expected answer and the corresponding context. We further enrich the qualitative annotations by a metric based on lexical cues in order to approximate a lower bound for the complexity of the reading comprehension task. By sampling entries from each gold standard and annotating them, we obtain measurable results and thus are able to make observations about the challenges present in that gold standard data."
],
[
"We are interested in different types of the expected answer. We differentiate between Span, where an answer is a continuous span taken from the passage, Paraphrasing, where the answer is a paraphrase of a text span, Unanswerable, where there is no answer present in the context, and Generated, if it does not fall into any of the other categories. It is not sufficient for an answer to restate the question or combine multiple Span or Paraphrasing answers to be annotated as Generated. It is worth mentioning that we focus our investigations on answerable questions. For a complementary qualitative analysis that categorises unanswerable questions, the reader is referred to Yatskar2019.",
"Furthermore, we mark a sentence as Supporting Fact if it contains evidence required to produce the expected answer, as they are used further in the complexity analysis."
],
[
"An important factor for the quality of a benchmark is its factual correctness, because on the one hand, the presence of factually wrong or debatable examples introduces an upper bound for the achievable performance of models on those gold standards. On the other hand, it is hard to draw conclusions about the correctness of answers produced by a model that is evaluated on partially incorrect data.",
"One way by which developers of modern crowd-sourced gold standards ensure quality is by having the same entry annotated by multiple workers BIBREF18 and keeping only those with high agreement. We investigate whether this method is enough to establish a sound ground truth answer that is unambiguously correct. Concretely we annotate an answer as Debatable when the passage features multiple plausible answers, when multiple expected answers contradict each other, or an answer is not specific enough with respect to the question and a more specific answer is present. We annotate an answer as Wrong when it is factually wrong and a correct answer is present in the context."
],
[
"It is important to understand what types of reasoning the benchmark evaluates, in order to be able to accredit various reasoning capabilities to the models it evaluates. Our proposed reasoning categories are inspired by those found in scientific question answering literature BIBREF19, BIBREF20, as research in this area focuses on understanding the required reasoning capabilities. We include reasoning about the Temporal succession of events, Spatial reasoning about directions and environment, and Causal reasoning about the cause-effect relationship between events. We further annotate (multiple-choice) answers that can only be answered By Exclusion of every other alternative.",
"We further extend the reasoning categories by operational logic, similar to those required in semantic parsing tasks BIBREF21, as solving those tasks typically requires “multi-hop” reasoning BIBREF14, BIBREF22. When an answer can only be obtained by combining information from different sentences joined by mentioning a common entity, concept, date, fact or event (from here on called entity), we annotate it as Bridge. We further annotate the cases, when the answer is a concrete entity that satisfies a Constraint specified in the question, when it is required to draw a Comparison of multiple entities' properties or when the expected answer is an Intersection of their properties (e.g. “What do Person A and Person B have in common?”)",
"We are interested in the linguistic reasoning capabilities probed by a gold standard, therefore we include the appropriate category used by Wang2019. Specifically, we annotate occurrences that require understanding of Negation, Quantifiers (such as “every”, “some”, or “all”), Conditional (“if ...then”) statements and the logical implications of Con-/Disjunction (i.e. “and” and “or”) in order to derive the expected answer.",
"Finally, we investigate whether arithmetic reasoning requirements emerge in MRC gold standards as this can probe for reasoning that is not evaluated by simple answer retrieval BIBREF23. To this end, we annotate the presence of of Addition and Subtraction, answers that require Ordering of numerical values, Counting and Other occurrences of simple mathematical operations.",
"An example can exhibit multiple forms of reasoning. Notably, we do not annotate any of the categories mentioned above if the expected answer is directly stated in the passage. For example, if the question asks “How many total points were scored in the game?” and the passage contains a sentence similar to “The total score of the game was 51 points”, it does not require any reasoning, in which case we annotate it as Retrieval."
],
[
"Worthwhile knowing is whether the information presented in the context is sufficient to answer the question, as there is an increase of benchmarks deliberately designed to probe a model's reliance on some sort of background knowledge BIBREF24. We seek to categorise the type of knowledge required. Similar to Wang2019, on the one hand we annotate the reliance on factual knowledge, that is (Geo)political/Legal, Cultural/Historic, Technical/Scientific and Other Domain Specific knowledge about the world that can be expressed as a set of facts. On the other hand, we denote Intuitive knowledge requirements, which is challenging to express as a set of facts, such as the knowledge that a parenthetic numerical expression next to a person's name in a biography usually denotes his life span."
],
[
"Another dimension of interest is the evaluation of various linguistic capabilities of MRC models BIBREF25, BIBREF26, BIBREF27. We aim to establish which linguistic phenomena are probed by gold standards and to which degree. To that end, we draw inspiration from the annotation schema used by Wang2019, and adapt it around lexical semantics and syntax.",
"More specifically, we annotate features that introduce variance between the supporting facts and the question. With regard to lexical semantics, we focus on the use of redundant words that do not alter the meaning of a sentence for the task of retrieving the expected answer (Redundancy), requirements on the understanding of words' semantic fields (Lexical Entailment) and the use of Synonyms and Paraphrases with respect to the question wording. Furthermore we annotate cases where supporting facts contain Abbreviations of concepts introduced in the question (and vice versa) and when a Dative case substitutes the use of a preposition (e.g. “I bought her a gift” vs “I bought a gift for her”). Regarding syntax, we annotate changes from passive to active Voice, the substitution of a Genitive case with a preposition (e.g. “of”) and changes from nominal to verbal style and vice versa (Nominalisation).",
"We recognise features that add ambiguity to the supporting facts, for example when information is only expressed implicitly by using an Ellipsis. As opposed to redundant words, we annotate Restrictivity and Factivity modifiers, words and phrases whose presence does change the meaning of a sentence with regard to the expected answer, and occurrences of intra- or inter-sentence Coreference in supporting facts (that is relevant to the question). Lastly, we mark ambiguous syntactic features, when their resolution is required in order to obtain the answer. Concretely, we mark argument collection with con- and disjunctions (Listing) and ambiguous Prepositions, Coordination Scope and Relative clauses/Adverbial phrases/Appositions."
],
[
"Finally, we want to approximate the presence of lexical cues that might simplify the reading required in order to arrive at the answer. Quantifying this allows for more reliable statements about and comparison of the complexity of gold standards, particularly regarding the evaluation of comprehension that goes beyond simple lexical matching. We propose the use of coarse metrics based on lexical overlap between question and context sentences. Intuitively, we aim to quantify how much supporting facts “stand out” from their surrounding passage context. This can be used as proxy for the capability to retrieve the answer BIBREF10. Specifically, we measure (i) the number of words jointly occurring in a question and a sentence, (ii) the length of the longest n-gram shared by question and sentence and (iii) whether a word or n-gram from the question uniquely appears in a sentence.",
"The resulting taxonomy of the framework is shown in Figure FIGREF10. The full catalogue of features, their description, detailed annotation guideline as well as illustrating examples can be found in Appendix ."
],
[
"We select contemporary MRC benchmarks to represent all four commonly used problem definitions BIBREF15. In selecting relevant datasets, we do not consider those that are considered “solved”, i.e. where the state of the art performance surpasses human performance, as is the case with SQuAD BIBREF28, BIBREF7. Concretely, we selected gold standards that fit our problem definition and were published in the years 2016 to 2019, have at least $(2019 - publication\\ year) \\times 20$ citations, and bucket them according to the answer selection styles as described in Section SECREF4 We randomly draw one from each bucket and add two randomly drawn datasets from the candidate pool. This leaves us with the datasets described in Table TABREF19. For a more detailed description, we refer to Appendix ."
],
[
"We randomly select 50 distinct question, answer and passage triples from the publicly available development sets of the described datasets. Training, development and the (hidden) test set are drawn from the same distribution defined by the data collection method of the respective dataset. For those collections that contain multiple questions over a single passage, we ensure that we are sampling unique paragraphs in order to increase the variety of investigated texts.",
"The samples were annotated by the first author of this paper, using the proposed schema. In order to validate our findings, we further take 20% of the annotated samples and present them to a second annotator (second author). Since at its core, the annotation is a multi-label task, we report the inter-annotator agreement by computing the (micro-averaged) F1 score, where we treat the first annotator's labels as gold. Table TABREF21 reports the agreement scores, the overall (micro) average F1 score of the annotations is 0.82, which means that on average, more than two thirds of the overall annotated labels were agreed on by both annotators. We deem this satisfactory, given the complexity of the annotation schema."
],
[
"We present a concise view of the annotation results in Figure FIGREF23. The full annotation results can be found in Appendix . We centre our discussion around the following main points:"
],
[
"As observed in Figure FIGREF23 the gold standards feature a high degree of Redundancy, peaking at 76% of the annotated HotpotQA samples and synonyms and paraphrases (labelled Synonym), with ReCoRd samples containing 58% of them, likely to be attributed to the elaborating type of discourse of the dataset sources (encyclopedia and newswire). This is, however, not surprising, as it is fairly well understood in the literature that current state-of-the-art models perform well on distinguishing relevant words and phrases from redundant ones BIBREF32. Additionally, the representational capability of synonym relationships of word embeddings has been investigated and is well known BIBREF33. Finally, we observe the presence of syntactic features, such as ambiguous relative clauses, appositions and adverbial phrases, (RelAdvApp 40% in HotpotQA and ReCoRd) and those introducing variance, concretely switching between verbal and nominal styles (e.g. Nominalisation 10% in HotpotQA) and from passive to active voice (Voice, 8% in HotpotQA).",
"Syntactic features contributing to variety and ambiguity that we did not observe in our samples are the exploitation of verb symmetry, the use of dative and genitive cases or ambiguous prepositions and coordination scope (respectively Symmetry, Dative, Genitive, Prepositions, Scope). Therefore we cannot establish whether models are capable of dealing with those features by evaluating them on those gold standards."
],
[
"We identify three common sources that surface in different problems regarding an answer's factual correctness, as reported in Figure FIGREF23 and illustrate their instantiations in Table TABREF31:",
"Design Constraints: Choosing the task design and the data collection method introduces some constraints that lead to factually debatable examples. For example, a span might have been arbitrarily selected from multiple spans that potentially answer a question, but only a single continuous answer span per question is allowed by design, as observed in the NewsQA and MsMarco samples (32% and 34% examples annotated as Debatable with 16% and 53% thereof exhibiting arbitrary selection, respectively). Sometimes, when additional passages are added after the annotation step, they can by chance contain passages that answer the question more precisely than the original span, as seen in HotpotQA (16% Debatable samples, 25% of them due to arbitrary selection). In the case of MultiRC it appears to be inconsistent, whether multiple correct answer choices are expected to be correct in isolation or in conjunction (28% Debatable with 29% of them exhibiting this problem). This might provide an explanation to its relatively weak human baseline performance of 84% F1 score BIBREF31.",
"Weak Quality assurance: When the (typically crowd-sourced) annotations are not appropriately validated, incorrect examples will find their way into the gold standards. This typically results in factually wrong expected answers (i.e. when a more correct answer is present in the context) or a question is expected to be Unanswerable, but is actually answerable from the provided context. The latter is observed in MsMarco (83% of examples annotated as Wrong) and NewsQA, where 60% of the examples annotated as Wrong are Unanswerable with an answer present.",
"Arbitrary Precision: There appears to be no clear guideline on how precise the answer is expected to be, when the passage expresses the answer in varying granularities. We annotated instances as Debatable when the expected answer was not the most precise given the context (44% and 29% of Debatable instances in NewsQA and MultiRC, respectively)."
],
[
"We took interest in whether any of the benchmarks contain what we call distracting lexical features (or distractors): grammatical modifiers that alter the semantics of a sentence for the final task of answering the given question while preserving a similar lexical form. An example of such features are cues for (double) Negation (e.g., “no”, “not”), which when introduced in a sentence, reverse its meaning. Other examples include modifiers denoting Restrictivity, Factivity and Reasoning (such as Monotonicity and Conditional cues). Examples of question-answer pairs containing a distractor are shown in Table FIGREF37.",
"We posit that the presence of such distractors would allow for evaluating reading comprehension beyond potential simple word matching. However, we observe no presence of such features in the benchmarks (beyond Negation in DROP, ReCoRd and HotpotQA, with 4%, 4% and 2% respectively). This results in gold standards that clearly express the evidence required to obtain the answer, lacking more challenging, i.e., distracting, sentences that can assess whether a model can truly understand meaning."
],
[
"In the Figure FIGREF23 we observe that Operational and Arithmetic reasoning moderately (6% to 8% combined) appears “in the wild”, i.e. when not enforced by the data design as is the case with HotpotQA (80% Operations combined) or DROP (68% Arithmetic combined). Causal reasoning is (exclusively) present in MultiRC (32%), whereas Temporal and Spatial reasoning requirements seem to not naturally emerge in gold standards. In ReCoRd, a fraction of 38% questions can only be answered By Exclusion of every other candidate, due to the design choice of allowing questions where the required information to answer them is not fully expressed in the accompanying paragraph.",
"Therefore, it is also a little surprising to observe that ReCoRd requires external resources with regard to knowledge, as seen in Figure FIGREF23. MultiRC requires technical or more precisely basic scientific knowledge (6% Technical/Scientific), as a portion of paragraphs is extracted from elementary school science textbooks BIBREF31. Other benchmarks moderately probe for factual knowledge (0% to 4% across all categories), while Intuitive knowledge is required to derive answers in each gold standard.",
"It is also worth pointing out, as done in Figure FIGREF23, that although MultiRC and MsMarco are not modelled as a span selection problem, their samples still contain 50% and 66% of answers that are directly taken from the context. DROP contains the biggest fraction of generated answers (60%), due to the requirement of arithmetic operations.",
"To conclude our analysis, we observe similar distributions of linguistic features and reasoning patterns, except where there are constraints enforced by dataset design, annotation guidelines or source text choice. Furthermore, careful consideration of design choices (such as single-span answers) is required, to avoid impairing the factual correctness of datasets, as pure crowd-worker agreement seems not sufficient in multiple cases."
],
[
"We used the scores assigned by our proposed set of metrics (discussed in Section SECREF11 Dimensions of Interest: Complexity) to predict the supporting facts in the gold standard samples (that we included in our manual annotation). Concretely, we used the following five features capturing lexical overlap: (i) the number of words occurring in sentence and question, (ii) the length of the longest n-gram shared by sentence and question, whether a (iii) uni- and (iv) bigram from the question is unique to a sentence, and (v) the sentence index, as input to a logistic regression classifier. We optimised on each sample leaving one example for evaluation. We compute the average Precision, Recall and F1 score by means of leave-one-out validation with every sample entry. The averaged results after 5 runs are reported in Table TABREF41.",
"We observe that even by using only our five features based lexical overlap, the simple logistic regression baseline is able to separate out the supporting facts from the context to a varying degree. This is in line with the lack of semantics-altering grammatical modifiers discussed in the qualitative analysis section above. The classifier performs best on DROP (66% F1) and MultiRC (40% F1), which means that lexical cues can considerably facilitate the search for the answer in those gold standards. On MultiRC, yadav2019quick come to a similar conclusion, by using a more sophisticated approach based on overlap between question, sentence and answer choices.",
"Surprisingly, the classifier is able to pick up a signal from supporting facts even on data that has been pruned against lexical overlap heuristics by populating the context with additional documents that have high overlap scores with the question. This results in significantly higher scores than when guessing randomly (HotpotQA 26% F1, and MsMarco 11% F1). We observe similar results in the case the length of the question leaves few candidates to compute overlap with $6.3$ and $7.3$ tokens on average for MsMarco and NewsQA (26% F1), compared to $16.9$ tokens on average for the remaining four dataset samples.",
"Finally, it is worth mentioning that although the queries in ReCoRd are explicitly independent from the passage, the linear classifier is still capable of achieving 34% F1 score in predicting the supporting facts.",
"However, neural networks perform significantly better than our admittedly crude baseline (e.g. 66% F1 for supporting facts classification on HotpotQA BIBREF14), albeit utilising more training examples, and a richer sentence representation. This facts implies that those neural models are capable of solving more challenging problems than simple “text matching” as performed by the logistic regression baseline. However, they still circumvent actual reading comprehension as the respective gold standards are of limited suitability to evaluate this BIBREF34, BIBREF35. This suggests an exciting future research direction, that is categorising the scale between text matching and reading comprehension more precisely and respectively positioning state-of-the-art models thereon."
],
[
"Although not as prominent as the research on novel architecture, there has been steady progress in critically investigating the data and evaluation aspects of NLP and machine learning in general and MRC in particular."
],
[
"The authors of the AddSent algorithm BIBREF11 show that MRC models trained and evaluated on the SQuAD dataset pay too little attention to details that might change the semantics of a sentence, and propose a crowd-sourcing based method to generate adversary examples to exploit that weakness. This method was further adapted to be fully automated BIBREF36 and applied to different gold standards BIBREF35. Our proposed approach differs in that we aim to provide qualitative justifications for those quantitatively measured issues."
],
[
"Another line of research establishes sane baselines to provide more meaningful context to the raw performance scores of evaluated models. When removing integral parts of the task formulation such as question, the textual passage or parts thereof BIBREF37 or restricting model complexity by design in order to suppress some required form of reasoning BIBREF38, models are still able to perform comparably to the state-of-the-art. This raises concerns about the perceived benchmark complexity and is related to our work in a broader sense as one of our goals is to estimate the complexity of benchmarks."
],
[
"Beyond MRC, efforts similar to ours that pursue the goal of analysing the evaluation of established datasets exist in Natural Language Inference BIBREF13, BIBREF12. Their analyses reveal the existence of biases in training and evaluation data that can be approximated with simple majority-based heuristics. Because of these biases, trained models fail to extract the semantics that are required for the correct inference. Furthermore, a fair share of work was done to reveal gender bias in coreference resolution datasets and models BIBREF39, BIBREF40, BIBREF41."
],
[
"Finally, related to our framework are works that introduce annotation categories for gold standards evaluation. Concretely, we build our annotation framework around linguistic features that were introduced in the GLUE suite BIBREF42 and the reasoning categories introduced in the WorldTree dataset BIBREF19. A qualitative analysis complementary to ours, with focus on the unanswerability patterns in datasets that feature unanswerable questions was done by Yatskar2019."
],
[
"In this paper, we introduce a novel framework to characterise machine reading comprehension gold standards. This framework has potential applications when comparing different gold standards, considering the design choices for a new gold standard and performing qualitative error analyses for a proposed approach.",
"Furthermore we applied the framework to analyse popular state-of-the-art gold standards for machine reading comprehension: We reveal issues with their factual correctness, show the presence of lexical cues and we observe that semantics-altering grammatical modifiers are missing in all of the investigated gold standards. Studying how to introduce those modifiers into gold standards and observing whether state-of-the-art MRC models are capable of performing reading comprehension on text containing them, is a future research goal.",
"A future line of research is to extend the framework to be able to identify the different types of exploitable cues such as question or entity typing and concrete overlap patterns. This will allow the framework to serve as an interpretable estimate of reading comprehension complexity of gold standards. Finally, investigating gold standards under this framework where MRC models outperform the human baseline (e.g. SQuAD) will contribute to a deeper understanding of the seemingly superb performance of deep learning approaches on them."
]
],
"section_name": [
"Introduction",
"Framework for MRC Gold Standard Analysis ::: Problem definition",
"Framework for MRC Gold Standard Analysis ::: Dimensions of Interest",
"Framework for MRC Gold Standard Analysis ::: Dimensions of Interest ::: Problem setting",
"Framework for MRC Gold Standard Analysis ::: Dimensions of Interest ::: Factual Correctness",
"Framework for MRC Gold Standard Analysis ::: Dimensions of Interest ::: Required Reasoning",
"Framework for MRC Gold Standard Analysis ::: Dimensions of Interest ::: Knowledge",
"Framework for MRC Gold Standard Analysis ::: Dimensions of Interest ::: Linguistic Complexity",
"Framework for MRC Gold Standard Analysis ::: Dimensions of Interest ::: Complexity",
"Application of the Framework ::: Candidate Datasets",
"Application of the Framework ::: Annotation Task",
"Application of the Framework ::: Qualitative Analysis",
"Application of the Framework ::: Qualitative Analysis ::: Linguistic Features",
"Application of the Framework ::: Qualitative Analysis ::: Factual Correctness",
"Application of the Framework ::: Qualitative Analysis ::: Semantics-altering grammatical modifiers",
"Application of the Framework ::: Qualitative Analysis ::: Other",
"Application of the Framework ::: Quantitative Results ::: Lexical overlap",
"Related Work",
"Related Work ::: Adversarial Evaluation",
"Related Work ::: Sanity Baselines",
"Related Work ::: Benchmark evaluation in NLP",
"Related Work ::: Annotation Taxonomies",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"6b1ddb5fd4b9401c5fcbfa7e87677c3d56c86c92",
"ffa72118bb51465f5067003ae4133daa36752284"
],
"answer": [
{
"evidence": [
"In this paper, we introduce a novel framework to characterise machine reading comprehension gold standards. This framework has potential applications when comparing different gold standards, considering the design choices for a new gold standard and performing qualitative error analyses for a proposed approach."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"This framework has potential applications when comparing different gold standards, considering the design choices for a new gold standard and performing qualitative error analyses for a proposed approach."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"683d1f25ed8fb0fc3704c5ba6d685a4d4c9a63a2",
"8b27ec9187856a551b6a7aabad54adc0e08edc08"
],
"answer": [
{
"evidence": [
"We recognise features that add ambiguity to the supporting facts, for example when information is only expressed implicitly by using an Ellipsis. As opposed to redundant words, we annotate Restrictivity and Factivity modifiers, words and phrases whose presence does change the meaning of a sentence with regard to the expected answer, and occurrences of intra- or inter-sentence Coreference in supporting facts (that is relevant to the question). Lastly, we mark ambiguous syntactic features, when their resolution is required in order to obtain the answer. Concretely, we mark argument collection with con- and disjunctions (Listing) and ambiguous Prepositions, Coordination Scope and Relative clauses/Adverbial phrases/Appositions."
],
"extractive_spans": [
"Restrictivity ",
"Factivity ",
"Coreference "
],
"free_form_answer": "",
"highlighted_evidence": [
"We recognise features that add ambiguity to the supporting facts, for example when information is only expressed implicitly by using an Ellipsis. As opposed to redundant words, we annotate Restrictivity and Factivity modifiers, words and phrases whose presence does change the meaning of a sentence with regard to the expected answer, and occurrences of intra- or inter-sentence Coreference in supporting facts (that is relevant to the question). Lastly, we mark ambiguous syntactic features, when their resolution is required in order to obtain the answer. Concretely, we mark argument collection with con- and disjunctions (Listing) and ambiguous Prepositions, Coordination Scope and Relative clauses/Adverbial phrases/Appositions.",
"We recognise features that add ambiguity to the supporting facts, for example when information is only expressed implicitly by using an Ellipsis. As opposed to redundant words, we annotate Restrictivity and Factivity modifiers, words and phrases whose presence does change the meaning of a sentence with regard to the expected answer, and occurrences of intra- or inter-sentence Coreference in supporting facts (that is relevant to the question). Lastly, we mark ambiguous syntactic features, when their resolution is required in order to obtain the answer. Concretely, we mark argument collection with con- and disjunctions (Listing) and ambiguous Prepositions, Coordination Scope and Relative clauses/Adverbial phrases/Appositions."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Furthermore we applied the framework to analyse popular state-of-the-art gold standards for machine reading comprehension: We reveal issues with their factual correctness, show the presence of lexical cues and we observe that semantics-altering grammatical modifiers are missing in all of the investigated gold standards. Studying how to introduce those modifiers into gold standards and observing whether state-of-the-art MRC models are capable of performing reading comprehension on text containing them, is a future research goal."
],
"extractive_spans": [
"semantics-altering grammatical modifiers"
],
"free_form_answer": "",
"highlighted_evidence": [
"We reveal issues with their factual correctness, show the presence of lexical cues and we observe that semantics-altering grammatical modifiers are missing in all of the investigated gold standards."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"6a130524aaba99647747e379b78c50be78d44936",
"de1e567df6b51c2e2ce8070cc84a3fb30952fd92"
],
"answer": [
{
"evidence": [
"We select contemporary MRC benchmarks to represent all four commonly used problem definitions BIBREF15. In selecting relevant datasets, we do not consider those that are considered “solved”, i.e. where the state of the art performance surpasses human performance, as is the case with SQuAD BIBREF28, BIBREF7. Concretely, we selected gold standards that fit our problem definition and were published in the years 2016 to 2019, have at least $(2019 - publication\\ year) \\times 20$ citations, and bucket them according to the answer selection styles as described in Section SECREF4 We randomly draw one from each bucket and add two randomly drawn datasets from the candidate pool. This leaves us with the datasets described in Table TABREF19. For a more detailed description, we refer to Appendix ."
],
"extractive_spans": [
"fit our problem definition and were published in the years 2016 to 2019, have at least $(2019 - publication\\ year) \\times 20$ citations"
],
"free_form_answer": "",
"highlighted_evidence": [
"Concretely, we selected gold standards that fit our problem definition and were published in the years 2016 to 2019, have at least $(2019 - publication\\ year) \\times 20$ citations, and bucket them according to the answer selection styles as described in Section SECREF4"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 1: Summary of selected datasets"
],
"extractive_spans": [],
"free_form_answer": "MSMARCO, HOTPOTQA, RECORD, MULTIRC, NEWSQA, and DROP.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Summary of selected datasets"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"57c0e35bf3a19c862797c35795939e09c7c40071",
"a09cbcc42d7bb56d3dab33c9b98430e1c12b170e"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Figure 3: The hierarchy of categories in our proposed annotation framework. Abstract higher-level categories are presented in bold while actual annotation features are shown in italics.",
"The resulting taxonomy of the framework is shown in Figure FIGREF10. The full catalogue of features, their description, detailed annotation guideline as well as illustrating examples can be found in Appendix ."
],
"extractive_spans": [
"The resulting taxonomy of the framework is shown in Figure FIGREF10"
],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Figure 3: The hierarchy of categories in our proposed annotation framework. Abstract higher-level categories are presented in bold while actual annotation features are shown in italics.",
"The resulting taxonomy of the framework is shown in Figure FIGREF10."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Figure 3: The hierarchy of categories in our proposed annotation framework. Abstract higher-level categories are presented in bold while actual annotation features are shown in italics.",
"In this section we describe a methodology to categorise gold standards according to linguistic complexity, required reasoning and background knowledge, and their factual correctness. Specifically, we use those dimensions as high-level categories of a qualitative annotation schema for annotating question, expected answer and the corresponding context. We further enrich the qualitative annotations by a metric based on lexical cues in order to approximate a lower bound for the complexity of the reading comprehension task. By sampling entries from each gold standard and annotating them, we obtain measurable results and thus are able to make observations about the challenges present in that gold standard data."
],
"extractive_spans": [
"FIGREF10"
],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Figure 3: The hierarchy of categories in our proposed annotation framework. Abstract higher-level categories are presented in bold while actual annotation features are shown in italics.",
"In this section we describe a methodology to categorise gold standards according to linguistic complexity, required reasoning and background knowledge, and their factual correctness. Specifically, we use those dimensions as high-level categories of a qualitative annotation schema for annotating question, expected answer and the corresponding context. We further enrich the qualitative annotations by a metric based on lexical cues in order to approximate a lower bound for the complexity of the reading comprehension task. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"Have they made any attempt to correct MRC gold standards according to their findings? ",
"What features are absent from MRC gold standards that can result in potential lexical ambiguity?",
"What modern MRC gold standards are analyzed?",
"How does proposed qualitative annotation schema looks like?"
],
"question_id": [
"5c88d601e8fca96bffebfa9ef22331ecf31c6d75",
"71bd5db79635d48a0730163a9f2e8ef19a86cd66",
"9ecde59ffab3c57ec54591c3c7826a9188b2b270",
"005cca3c8ab6c3a166e315547a2259020f318ffb"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: While initially this looks like a complex question that requires the synthesis of different information across multiple documents, the keyword “2010” appears in the question and only in the sentence that answers it, considerably simplifying the search. Full example with 10 passages can be seen in Appendix D.",
"Figure 3: The hierarchy of categories in our proposed annotation framework. Abstract higher-level categories are presented in bold while actual annotation features are shown in italics.",
"Table 1: Summary of selected datasets",
"Table 2: Inter-Annotator agreement F1 scores, averaged for each dataset",
"Figure 4: Annotation results",
"Table 4: (Average) Precision, Recall and F1 score within the 95% confidence interval of a linear classifier optimised on lexical features for the task of predicting supporting facts",
"Figure 5: Example of semantics altering lexical features",
"Table 5: Detailed Answer Type results. We calculate percentages relative to the number of examples in the sample.",
"Table 6: Detailed results for the annotation of factual correctness.",
"Table 7: Detailed results for the annotation of factual correctness. We calculate percentages relative to the number of examples that were annotated to be not unanswerable.",
"Table 8: Detailed reasoning results. We calculate percentages relative to the number of examples that are not unanswerable, i.e. require reasoning to obtain the answer according to our definition.",
"Table 9: Detailed linguistic feature results. We calculate percentages relative to the number of examples that were annotated to contain supporting facts."
],
"file": [
"1-Figure1-1.png",
"3-Figure3-1.png",
"5-Table1-1.png",
"5-Table2-1.png",
"6-Figure4-1.png",
"7-Table4-1.png",
"7-Figure5-1.png",
"17-Table5-1.png",
"17-Table6-1.png",
"17-Table7-1.png",
"18-Table8-1.png",
"18-Table9-1.png"
]
} | [
"What modern MRC gold standards are analyzed?"
] | [
[
"2003.04642-5-Table1-1.png",
"2003.04642-Application of the Framework ::: Candidate Datasets-0"
]
] | [
"MSMARCO, HOTPOTQA, RECORD, MULTIRC, NEWSQA, and DROP."
] | 73 |
2001.09215 | An Iterative Approach for Identifying Complaint Based Tweets in Social Media Platforms | Twitter is a social media platform where users express opinions over a variety of issues. Posts offering grievances or complaints can be utilized by private/ public organizations to improve their service and promptly gauge a low-cost assessment. In this paper, we propose an iterative methodology which aims to identify complaint based posts pertaining to the transport domain. We perform comprehensive evaluations along with releasing a novel dataset for the research purposes. | {
"paragraphs": [
[
"With the advent of social media platforms, increasing user base address their grievances over these platforms, in the form of complaints. According to BIBREF0, complaint is considered to be a basic speech act used to express negative mismatch between the expectation and reality. Transportation and its related logistics industries are the backbones of every economy. Many transport organizations rely on complaints gathered via these platforms to improve their services, hence understanding these are important for: (1) linguists to identify human expressions of criticism and (2) organizations to improve their query response time and address concerns effectively.",
"Presence of inevitable noise, sparse content along with rephrased and structurally morphed instances of posts, make the task at hand difficult BIBREF1. Previous works BIBREF2 in the domain of complaint extraction have focused on static datasets only. These are not robust to changes in the trends reflected, information flow and linguistic variations. We propose an iterative, semi-supervised approach for identification of complaint based tweets, having the ability to be replicated for stream of information flow. The preference of a semi-supervised approach over supervised ones is due to the stated reasons: (a) the task of isolating the training set, make supervised tasks less attractive and impractical and (b) imbalance between the subjective and objective classes lead to poor performance."
],
[
"We aimed to mimic the presence of sparse/noisy content distribution, mandating the need to curate a novel dataset via specific lexicons. We scraped 500 random posts from recognized transport forum. A pool of 50 uni/bi-grams was created based on tf-idf representations, extracted from the posts, which was further pruned by annotators. Querying posts on Twitter with extracted lexicons led to a collection of $19,300$ tweets. In order to have lexical diversity, we added 2500 randomly sampled tweets to our dataset. In spite of the sparse nature of these posts, the lexical characteristics act as information cues.",
"Figure FIGREF4 pictorially represents our methodology. Our approach required an initial set of informative tweets for which we employed two human annotators annotating a random sub-sample of the original dataset. From the 1500 samples, 326 were marked as informative and 1174 as non informative ($\\kappa =0.81$), discriminated on this criteria: Is the tweet addressing any complaint or raising grievances about modes of transport or services/ events associated with transportation such as traffic; public or private transport?. An example tweet marked as informative: No, metro fares will be reduced ???, but proper fare structure needs to presented right, it's bad !!!.",
"We utilized tf-idf for the identification of initial seed phrases from the curated set of informative tweets. 50 terms having the highest tf-idf scores were passed through the complete dataset and based on sub-string match, the transport relevant tweets were identified. The redundant tweets were filtered based on the cosine similarity score. Implicit information indicators were identified based on domain relevance score, a metric used to gauge the coverage of n-gram (1,2,3) when evaluated against a randomly created pool of posts.",
"We collected a pool of 5000 randomly sampled tweets different from the data collection period. The rationale behind having such a metric was to discard commonly occurring n-grams normalized by random noise and include ones which are of lexical importance. We used terms associated with high domain relevance score (threshold determined experimentally) as seed phrases for the next set of iterations. The growing dictionary augments the collection process. The process ran for 4 iterations providing us 7200 transport relevant tweets as no new lexicons were identified. In order to identify linguistic signals associated with the complaint posts, we randomly sampled a set of 2000 tweets which was used as training set, manually annotated into distinct labels: complaint relevant (702) and complaint non-relevant (1298) ($\\kappa =0.79$). We employed these features on our dataset.",
"Linguistic markers. To capture linguistic aspects of complaints, we utilized Bag of Words, count of POS tags and Word2vec clusters.",
"Sentiment markers. We used quantified score based on the ratio of tokens mentioned in the following lexicons: MPQA, NRC, VADER and Stanford.",
"Information specific markers. These account for a set of handcrafted features associated with complaint, we used the stated markers (a) Text-Meta Data, this includes the count of URL's, hashtags, user mentions, special symbols and user mentions, used to enhance retweet impact; (b) Request Identification, we employed the model presented in BIBREF3 to identify if a specific tweet assertion is a request; (c) Intensifiers, we make use of feature set derived from the number of words starting with capital letters and the repetition of special symbols (exclamation, questions marks) within the same post; (d) Politeness Markers, we utilize the politeness score of the tweet extracted from the model presented in BIBREF3; (e) Pronoun Variation, these have the ability to reveal the personal involvement or intensify involvement. We utilize the frequency of pronoun types $\\lbrace \\textit {first, second, third, demonstrative and indefinite}$} using pre-defined dictionaries.",
"From the pool of 7200 transport relevant tweets, we sampled 3500 tweets which were used as the testing set. The results are reported in TableTABREF5 with 10 fold cross-validation. With increasing the number of iterations, the pool of seed phrases gets refined and augments the selection of transport relevant tweets. The proposed pipeline is tailored to identify complaint relevant tweets in a noisy scenario."
],
[
"Table TABREF5 reflects that the BOW model provided the best results, both in terms of accuracy and F1-score. The best result achieved by a sentiment model was the Stanford Sentiment ($0.63$ F1-score), with others within the same range and linguistic-based features collectively giving the best performance."
],
[
"In this paper, we presented a novel semi-supervised pipeline along with a novel dataset for identification of complaint based posts in the transport domain. The proposed methodology can be expanded for other fields by altering the lexicons used for the creation of information cues. There are limitations to this analysis; we do not use neural networks which mandate a large volume of data. In the future, we aim to identify demographic features for identification of complaint based posts on social media platforms."
]
],
"section_name": [
"Introduction",
"Proposed Methodology",
"Results",
"Conclusion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"58c761cb6bb1e80da59daca608756d3d836023e6",
"d785df7930ed349e47a6d3df4e06a79ab5764a54"
],
"answer": [
{
"evidence": [
"We aimed to mimic the presence of sparse/noisy content distribution, mandating the need to curate a novel dataset via specific lexicons. We scraped 500 random posts from recognized transport forum. A pool of 50 uni/bi-grams was created based on tf-idf representations, extracted from the posts, which was further pruned by annotators. Querying posts on Twitter with extracted lexicons led to a collection of $19,300$ tweets. In order to have lexical diversity, we added 2500 randomly sampled tweets to our dataset. In spite of the sparse nature of these posts, the lexical characteristics act as information cues."
],
"extractive_spans": [
"$19,300$",
"added 2500 randomly sampled tweets"
],
"free_form_answer": "",
"highlighted_evidence": [
"Querying posts on Twitter with extracted lexicons led to a collection of $19,300$ tweets. In order to have lexical diversity, we added 2500 randomly sampled tweets to our dataset."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We aimed to mimic the presence of sparse/noisy content distribution, mandating the need to curate a novel dataset via specific lexicons. We scraped 500 random posts from recognized transport forum. A pool of 50 uni/bi-grams was created based on tf-idf representations, extracted from the posts, which was further pruned by annotators. Querying posts on Twitter with extracted lexicons led to a collection of $19,300$ tweets. In order to have lexical diversity, we added 2500 randomly sampled tweets to our dataset. In spite of the sparse nature of these posts, the lexical characteristics act as information cues."
],
"extractive_spans": [
"$19,300$ tweets"
],
"free_form_answer": "",
"highlighted_evidence": [
"We aimed to mimic the presence of sparse/noisy content distribution, mandating the need to curate a novel dataset via specific lexicons. We scraped 500 random posts from recognized transport forum. A pool of 50 uni/bi-grams was created based on tf-idf representations, extracted from the posts, which was further pruned by annotators. Querying posts on Twitter with extracted lexicons led to a collection of $19,300$ tweets. In order to have lexical diversity, we added 2500 randomly sampled tweets to our dataset. In spite of the sparse nature of these posts, the lexical characteristics act as information cues."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"c339c6e7103fdb33c32ace624a5ff6d34595b475",
"e5ea830de850f824017eab177ab009ae8f5a3681"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"Figure FIGREF4 pictorially represents our methodology. Our approach required an initial set of informative tweets for which we employed two human annotators annotating a random sub-sample of the original dataset. From the 1500 samples, 326 were marked as informative and 1174 as non informative ($\\kappa =0.81$), discriminated on this criteria: Is the tweet addressing any complaint or raising grievances about modes of transport or services/ events associated with transportation such as traffic; public or private transport?. An example tweet marked as informative: No, metro fares will be reduced ???, but proper fare structure needs to presented right, it's bad !!!."
],
"extractive_spans": [],
"free_form_answer": "English language",
"highlighted_evidence": [
"An example tweet marked as informative: No, metro fares will be reduced ???, but proper fare structure needs to presented right, it's bad !!!."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"",
""
],
"paper_read": [
"",
""
],
"question": [
"How many tweets were collected?",
"What language is explored in this paper?"
],
"question_id": [
"bcc0cd4e262f2db4270429ab520971bcf39414cf",
"f641f561ad2ea2794a52e4e4bdd62e1f353ab797"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"social media",
"social media"
],
"topic_background": [
"",
""
]
} | {
"caption": [
"Figure 1: Pictorial representation of the proposed pipeline.",
"Table 1: Performance of various linguistic, sentiment and information specific features on our dataset. Classifier utilized Logistic Regression (Elastic Net regularization), as it gave the best performance as compared to its counterparts."
],
"file": [
"2-Figure1-1.png",
"2-Table1-1.png"
]
} | [
"What language is explored in this paper?"
] | [
[
"2001.09215-Proposed Methodology-1"
]
] | [
"English language"
] | 74 |
1701.04056 | Dialog Context Language Modeling with Recurrent Neural Networks | In this work, we propose contextual language models that incorporate dialog level discourse information into language modeling. Previous works on contextual language model treat preceding utterances as a sequence of inputs, without considering dialog interactions. We design recurrent neural network (RNN) based contextual language models that specially track the interactions between speakers in a dialog. Experiment results on Switchboard Dialog Act Corpus show that the proposed model outperforms conventional single turn based RNN language model by 3.3% on perplexity. The proposed models also demonstrate advantageous performance over other competitive contextual language models. | {
"paragraphs": [
[
"Language model plays an important role in many natural language processing systems, such as in automatic speech recognition BIBREF0 , BIBREF1 and machine translation systems BIBREF2 , BIBREF3 . Recurrent neural network (RNN) based models BIBREF4 , BIBREF5 have recently shown success in language modeling, outperforming conventional n-gram based models. Long short-term memory BIBREF6 , BIBREF7 is a widely used RNN variant for language modeling due to its superior performance in capturing longer term dependencies.",
"Conventional RNN based language model uses a hidden state to represent the summary of the preceding words in a sentence without considering context signals. Mikolov et al. proposed a context dependent RNN language model BIBREF8 by connecting a contextual vector to the RNN hidden state. This contextual vector is produced by applying Latent Dirichlet Allocation BIBREF9 on preceding text. Several other contextual language models were later proposed by using bag-of-word BIBREF10 and RNN methods BIBREF11 to learn larger context representation that beyond the target sentence.",
"The previously proposed contextual language models treat preceding sentences as a sequence of inputs, and they are suitable for document level context modeling. In dialog modeling, however, dialog interactions between speakers play an important role. Modeling utterances in a dialog as a sequence of inputs might not well capture the pauses, turn-taking, and grounding phenomena BIBREF12 in a dialog. In this work, we propose contextual RNN language models that specially track the interactions between speakers. We expect such models to generate better representations of the dialog context.",
"The remainder of the paper is organized as follows. In section 2, we introduce the background on contextual language modeling. In section 3, we describe the proposed dialog context language models. Section 4 discusses the evaluation procedures and results. Section 5 concludes the work."
],
[
"A language model assigns a probability to a sequence of words $\\mathbf {w}=(w_1, w_2, ..., w_{T})$ following probability distribution. Using the chain rule, the likelihood of the word sequence $\\mathbf {w}$ can be factorized as: ",
"$$P(\\mathbf {w}) = P(w_1, w_2, ..., w_{T}) = \\prod _{t=1}^{T}P(w_{t}|w_{< t}) \\\\$$ (Eq. 2) ",
"At time step $t$ , the system input is the embedding of the word at index $t$ , and the system output is the probability distribution of the word at index $t+1$ . The RNN hidden state $h_t$ encodes the information of the word sequence up till current step: ",
"$$&h_t = \\operatorname{RNN}(h_{t-1}, w_t) \\\\\n&P(w_{t+1}|w_{< t+1}) = \\operatorname{softmax}(W_{o}h_{t} + b_{o})$$ (Eq. 3) ",
" where $W_{o}$ and $b_{o}$ are the output layer weights and biases."
],
[
"A number of methods have been proposed to introduce contextual information to the language model. Mikolov and Zweig BIBREF8 proposed a topic-conditioned RNNLM by introducing a contextual real-valued vector to RNN hidden state. The contextual vector was created by performing LDA BIBREF9 on preceding text. Wang and Cho BIBREF10 studied introducing corpus-level discourse information into language modeling. A number of context representation methods were explored, including bag-of-words, sequence of bag-of-words, and sequence of bag-of-words with attention. Lin et al. BIBREF13 proposed using hierarchical RNN for document modeling. Comparing to using bag-of-words and sequence of bag-of-words for document context representation, using hierarchical RNN can better model the order of words in preceding text, at the cost of the increased computational complexity. These contextual language models focused on contextual information at the document level. Tran et al. BIBREF14 further proposed a contextual language model that consider information at inter-document level. They claimed that by utilizing the structural information from a tree-structured document set, language modeling performance was largely improved."
],
[
"The previously proposed contextual language models focus on applying context by encoding preceding text, without considering interactions in dialogs. These models may not be well suited for dialog language modeling, as they are not designed to capture dialog interactions, such as clarifications and confirmations. By making special design in learning dialog interactions, we expect the models to generate better representations of the dialog context, and thus lower perplexity of the target dialog turn or utterance.",
"In this section, we first explain the context dependent RNN language model that operates on utterance or turn level. Following that, we describe the two proposed contextual language models that utilize the dialog level context."
],
[
"Let $\\mathbf {D} = (\\mathbf {U}_1, \\mathbf {U}_2, ..., \\mathbf {U}_K)$ be a dialog that has $K$ turns and involves two speakers. Each turn may have one or more utterances. The $k$ th turn $\\mathbf {U}_k = (w_1, w_2, ..., w_{T_k})$ is represented as a sequence of $T_k$ words. Conditioning on information of the preceding text in the dialog, probability of the target turn $\\mathbf {U}_k$ can be calculated as: ",
"$$P(\\mathbf {U}_k|\\mathbf {U}_{<k}) = \\prod _{t=1}^{T_k}P(w^{\\mathbf {U}_{k}}_{t}|w^{\\mathbf {U}_{k}}_{< t}, \\mathbf {U}_{<k}) \\\\$$ (Eq. 6) ",
"where $\\mathbf {U}_{<k}$ denotes all previous turns before $\\mathbf {U}_k$ , and $w^{\\mathbf {U}_{k}}_{< t}$ denotes all previous words before the $t$ th word in turn $\\mathbf {U}_k$ .",
"In context dependent RNN language model, the context vector $c$ is connected to the RNN hidden state together with the input word embedding at each time step (Figure 1 ). This is similar to the context dependent RNN language model proposed in BIBREF8 , other than that the context vector is not connected directly to the RNN output layer. With the additional context vector input $c$ , the RNN state $h_t$ is updated as: ",
"$$h_t = \\operatorname{RNN}(h_{t-1}, [w_t, c])$$ (Eq. 8) "
],
[
"In neural network based language models, the dialog context can be represented as a dense continuous vector. This context vector can be produced in a number of ways.",
"One simple approach is to use bag of word embeddings. However, bag of word embedding context representation does not take word order into consideration. An alternative approach is to use an RNN to read the preceding text. The last hidden state of the RNN encoder can be seen as the representation of the text and be used as the context vector for the next turn. To generate document level context representation, one may cascade all sentences in a document by removing the sentence boundaries. The last RNN hidden state of the previous utterance serves as the initial RNN state of the next utterance. As in BIBREF11 , we refer to this model as DRNNLM. Alternatively, in the CCDCLM model proposed in BIBREF11 , the last RNN hidden state of the previous utterance is fed to the RNN hidden state of the target utterance at each time step."
],
[
"The previously proposed contextual language models, such as DRNNLM and CCDCLM, treat dialog history as a sequence of inputs, without modeling dialog interactions. A dialog turn from one speaker may not only be a direct response to the other speaker's query, but also likely to be a continuation of his own previous statement. Thus, when modeling turn $k$ in a dialog, we propose to connect the last RNN state of turn $k-2$ directly to the starting RNN state of turn $k$ , instead of letting it to propagate through the RNN for turn $k-1$ . The last RNN state of turn $k-1$ serves as the context vector to turn $k$ , which is fed to turn $k$ 's RNN hidden state at each time step together with the word input. The model architecture is as shown in Figure 2 . The context vector $c$ and the initial RNN hidden state for the $k$ th turn $h^{\\mathbf {U}_k}_{0}$ are defined as: ",
"$$c = h^{\\mathbf {U}_{k-1}}_{T_{k-1}}, \\; h^{\\mathbf {U}_k}_{0} = h^{\\mathbf {U}_{k-2}}_{T_{k-2}}$$ (Eq. 11) ",
"where $h^{\\mathbf {U}_{k-1}}_{T_{k-1}}$ represents the last RNN hidden state of turn $k-1$ . This model also allows the context signal from previous turns to propagate through the network in fewer steps, which helps to reduce information loss along the propagation. We refer to this model as Interactive Dialog Context Language Model (IDCLM)."
],
[
"The propagation of dialog context can be seen as a series of updates of a hidden dialog context state along the growing dialog. IDCLM models this hidden dialog context state changes implicitly in the turn level RNN state. Such dialog context state updates can also be modeled in a separated RNN. As shown in the architecture in Figure 3 , we use an external RNN to model the context changes explicitly. Input to the external state RNN is the vector representation of the previous dialog turns. The external state RNN output serves as the dialog context for next turn: ",
"$$s_{k-1} = \\operatorname{RNN}_{ES}(s_{k-2}, h^{\\mathbf {U}_{k-1}}_{T_{k-1}})$$ (Eq. 14) ",
"where $s_{k-1}$ is the output of the external state RNN after the processing of turn $k-1$ . The context vector $c$ and the initial RNN hidden state for the $k$ th turn $h^{\\mathbf {U}_k}_{0}$ are then defined as: ",
"$$c = s_{k-1}, \\; h^{\\mathbf {U}_k}_{0} = h^{\\mathbf {U}_{k-2}}_{T_{k-2}}$$ (Eq. 15) ",
"We refer to this model as External State Interactive Dialog Context Language Model (ESIDCLM).",
"Comparing to IDCLM, ESIDCLM releases the burden of turn level RNN by using an external RNN to model dialog context state changes. One drawback of ESIDCLM is that there are additional RNN model parameters to be learned during model training, which may make the model more prone to overfitting when training data size is limited."
],
[
"We use the Switchboard Dialog Act Corpus (SwDA) in evaluating our contextual langauge models. The SwDA corpus extends the Switchboard-1 Telephone Speech Corpus with turn and utterance-level dialog act tags. The utterances are also tagged with part-of-speech (POS) tags. We split the data in folder sw00 to sw09 as training set, folder sw10 as test set, and folder sw11 to sw13 as validation set. The training, validation, and test sets contain 98.7K turns (190.0K utterances), 5.7K turns (11.3K utterances), and 11.9K turns (22.2K utterances) respectively. Maximum turn length is set to 160. The vocabulary is defined with the top frequent 10K words."
],
[
"We compare IDCLM and ESIDCLM to several baseline methods, including n-gram based model, single turn RNNLM, and various context dependent RNNLMs.",
"5-gram KN A 5-gram language model with modified Kneser-Ney smoothing BIBREF15 .",
"Single-Turn-RNNLM Conventional RNNLM that operates on single turn level with no context information.",
"BoW-Context-RNNLM Contextual RNNLM with BoW representation of preceding text as context.",
"DRNNLM Contextual RNNLM with turn level context vector connected to initial RNN state of the target turn.",
"CCDCLM Contextual RNNLM with turn level context vector connected to RNN hidden state of the target turn at each time step. We implement this model following the design in BIBREF11 .",
"In order to investigate the potential performance gain that can be achieved by introducing context, we also compare the proposed methods to RNNLMs that use true dialog act tags as context. Although human labeled dialog act might not be the best option for modeling the dialog context state, it provides a reasonable estimation of the best gain that can be achieved by introducing linguistic context. The dialog act sequence is modeled by a separated RNN, similar to the external state RNN used in ESIDCLM. We refer to this model as Dialog Act Context Language Model (DACLM).",
"DACLM RNNLM with true dialog act context vector connected to RNN state of the target turn at each time step."
],
[
"In this work, we use LSTM cell BIBREF6 as the basic RNN unit for its stronger capability in capturing long-range dependencies in a word sequence comparing to simple RNN. We use pre-trained word vectors BIBREF16 that are trained on Google News dataset to initialize the word embeddings. These word embeddings are fine-tuned during model training. We conduct mini-batch training using Adam optimization method following the suggested parameter setup in BIBREF17 . Maximum norm is set to 5 for gradient clipping . For regularization, we apply dropout ( $p=0.8$ ) on the non-recurrent connections BIBREF18 of LSTM. In addition, we apply $L_2$ regularization ( $\\lambda = 10^{-4}$ ) on the weights and biases of the RNN output layer."
],
[
"The experiment results on language modeling perplexity for models using different dialog turn size are shown in Table 1 . $K$ value indicates the number of turns in the dialog. Perplexity is calculated on the last turn, with preceding turns used as context to the model.",
"As can be seen from the results, all RNN based models outperform the n-gram model by large margin. The BoW-Context-RNNLM and DRNNLM beat the Single-Turn-RNNLM consistently. Our implementation of the context dependent CCDCLM performs worse than Single-Turn-RNNLM. This might due to fact that the target turn word prediction depends too much on the previous turn context vector, which connects directly to the hidden state of current turn RNN at each time step. The model performance on training set might not generalize well during inference given the limited size of the training set.",
"The proposed IDCLM and ESIDCLM beat the single turn RNNLM consistently under different context turn sizes. ESIDCLM shows the best language modeling performance under dialog turn size of 3 and 5, outperforming IDCLM by a small margin. IDCLM beats all baseline models when using dialog turn size of 5, and produces slightly worse perplexity than DRNNLM when using dialog turn size of 3.",
"To analyze the best potential gain that may be achieved by introducing linguistic context, we compare the proposed contextual models to DACLM, the model that uses true dialog act history for dialog context modeling. As shown in Table 1 , the gap between our proposed models and DACLM is not wide. This gives a positive hint that the proposed contextual models may implicitly capture the dialog context state changes.",
"For fine-grained analyses of the model performance, we further compute the test set perplexity per POS tag and per dialog act tag. We selected the most frequent POS tags and dialog act tags in SwDA corpus, and report the tag based perplexity relative changes ( $\\%$ ) of the proposed models comparing to Single-Turn-RNNLM. A negative number indicates performance gain.",
"Table 2 shows the model perplexity per POS tag. All the three context dependent models produce consistent performance gain over the Single-Turn-RNNLM for pronouns, prepositions, and adverbs, with pronouns having the largest perplexity improvement. However, the proposed contextual models are less effective in capturing nouns. This suggests that the proposed contextual RNN language models exploit the context to achieve superior prediction on certain but not all POS types. Further exploration on the model design is required if we want to better capture words of a specific type.",
"For the dialog act tag based results in Table 3 , the three contextual models show consistent performance gain on Statement-non-opinion type utterances. The perplexity changes for other dialog act tags vary for different models."
],
[
"In this work, we propose two dialog context language models that with special design to model dialog interactions. Our evaluation results on Switchboard Dialog Act Corpus show that the proposed model outperform conventional RNN language model by 3.3%. The proposed models also illustrate advantageous performance over several competitive contextual language models. Perplexity of the proposed dialog context language models is higher than that of the model using true dialog act tags as context by a small margin. This indicates that the proposed model may implicitly capture the dialog context state for language modeling."
]
],
"section_name": [
"Introduction",
"RNN Language Model",
"Contextual RNN Language Model",
"Methods",
"Context Dependent RNNLM",
"Context Representations",
"Interactive Dialog Context LM",
"External State Interactive Dialog Context LM",
"Data Set",
"Baselines",
"Model Configuration and Training",
"Results and Analysis",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"5b423ac255228aa8b08eabbe2603b7a4ffdd956c",
"f7f4ab93e6a2688aefdc37a5faa3559df3c1b33c"
],
"answer": [
{
"evidence": [
"The previously proposed contextual language models, such as DRNNLM and CCDCLM, treat dialog history as a sequence of inputs, without modeling dialog interactions. A dialog turn from one speaker may not only be a direct response to the other speaker's query, but also likely to be a continuation of his own previous statement. Thus, when modeling turn $k$ in a dialog, we propose to connect the last RNN state of turn $k-2$ directly to the starting RNN state of turn $k$ , instead of letting it to propagate through the RNN for turn $k-1$ . The last RNN state of turn $k-1$ serves as the context vector to turn $k$ , which is fed to turn $k$ 's RNN hidden state at each time step together with the word input. The model architecture is as shown in Figure 2 . The context vector $c$ and the initial RNN hidden state for the $k$ th turn $h^{\\mathbf {U}_k}_{0}$ are defined as:"
],
"extractive_spans": [],
"free_form_answer": "two previous turns",
"highlighted_evidence": [
" A dialog turn from one speaker may not only be a direct response to the other speaker's query, but also likely to be a continuation of his own previous statement. Thus, when modeling turn $k$ in a dialog, we propose to connect the last RNN state of turn $k-2$ directly to the starting RNN state of turn $k$ , instead of letting it to propagate through the RNN for turn $k-1$ ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We use the Switchboard Dialog Act Corpus (SwDA) in evaluating our contextual langauge models. The SwDA corpus extends the Switchboard-1 Telephone Speech Corpus with turn and utterance-level dialog act tags. The utterances are also tagged with part-of-speech (POS) tags. We split the data in folder sw00 to sw09 as training set, folder sw10 as test set, and folder sw11 to sw13 as validation set. The training, validation, and test sets contain 98.7K turns (190.0K utterances), 5.7K turns (11.3K utterances), and 11.9K turns (22.2K utterances) respectively. Maximum turn length is set to 160. The vocabulary is defined with the top frequent 10K words."
],
"extractive_spans": [
"160"
],
"free_form_answer": "",
"highlighted_evidence": [
"Maximum turn length is set to 160"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
}
],
"nlp_background": [
"infinity"
],
"paper_read": [
"no"
],
"question": [
"How long of dialog history is captured?"
],
"question_id": [
"5260cb56b7d127772425583c5c28958c37cb9bea"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"dialog"
],
"topic_background": [
"familiar"
]
} | {
"caption": [
"Fig. 1. Context dependent RNN language model.",
"Fig. 2. Interactive Dialog Context Language Model (IDCLM).",
"Fig. 3. External State Interactive Dialog Context Language Model (ESIDCLM).",
"Table 2. Perplexity relative change (%) per POS tag",
"Table 1. Language modeling perplexities on SwDA corpus with various dialog context turn sizes (K).",
"Table 3. Perplexity relative change (%) per dialog act tag."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"3-Figure3-1.png",
"4-Table2-1.png",
"4-Table1-1.png",
"4-Table3-1.png"
]
} | [
"How long of dialog history is captured?"
] | [
[
"1701.04056-Data Set-0"
]
] | [
"two previous turns"
] | 76 |
1904.07904 | Mitigating the Impact of Speech Recognition Errors on Spoken Question Answering by Adversarial Domain Adaptation | Spoken question answering (SQA) is challenging due to complex reasoning on top of the spoken documents. The recent studies have also shown the catastrophic impact of automatic speech recognition (ASR) errors on SQA. Therefore, this work proposes to mitigate the ASR errors by aligning the mismatch between ASR hypotheses and their corresponding reference transcriptions. An adversarial model is applied to this domain adaptation task, which forces the model to learn domain-invariant features the QA model can effectively utilize in order to improve the SQA results. The experiments successfully demonstrate the effectiveness of our proposed model, and the results are better than the previous best model by 2% EM score. | {
"paragraphs": [
[
"Question answering (QA) has drawn a lot of attention in the past few years. QA tasks on images BIBREF0 have been widely studied, but most focused on understanding text documents BIBREF1 . A representative dataset in text QA is SQuAD BIBREF1 , in which several end-to-end neural models have accomplished promising performance BIBREF2 . Although there is a significant progress in machine comprehension (MC) on text documents, MC on spoken content is a much less investigated field. In spoken question answering (SQA), after transcribing spoken content into text by automatic speech recognition (ASR), typical approaches use information retrieval (IR) techniques BIBREF3 to find the proper answer from the ASR hypotheses. One attempt towards QA of spoken content is TOEFL listening comprehension by machine BIBREF4 . TOEFL is an English examination that tests the knowledge and skills of academic English for English learners whose native languages are not English. Another SQA corpus is Spoken-SQuAD BIBREF5 , which is automatically generated from SQuAD dataset through Google Text-to-Speech (TTS) system. Recently ODSQA, a SQA corpus recorded by real speakers, is released BIBREF6 .",
"To mitigate the impact of speech recognition errors, using sub-word units is a popular approach for speech-related downstream tasks. It has been applied to spoken document retrieval BIBREF7 and spoken term detection BIBREF8 The prior work showed that, using phonectic sub-word units brought improvements for both Spoken-SQuAD and ODSQA BIBREF5 .",
"Instead of considering sub-word features, this paper proposes a novel approach to mitigate the impact of ASR errors. We consider reference transcriptions and ASR hypotheses as two domains, and adapt the source domain data (reference transcriptions) to the target domain data (ASR hypotheses) by projecting these two domains in the shared common space. Therefore, it can effectively benefit the SQA model by improving the robustness to ASR errors in the SQA model.",
"Domain adaptation has been successfully applied on computer vision BIBREF9 and speech recognition BIBREF10 . It is also widely studied on NLP tasks such as sequence tagging and parsing BIBREF11 , BIBREF12 , BIBREF13 . Recently, adversarial domain adaptation has already been explored on spoken language understanding (SLU). Liu and Lane learned domain-general features to benefit from multiple dialogue datasets BIBREF14 ; Zhu et al. learned to transfer the model from the transcripts side to the ASR hypotheses side BIBREF15 ; Lan et al. constructed a shared space for slot tagging and language model BIBREF16 . This paper extends the capability of adversarial domain adaptation for SQA, which has not been explored yet."
],
[
"In SQA, each sample is a triple, INLINEFORM0 , where INLINEFORM1 is a question in either spoken or text form, INLINEFORM2 is a multi-sentence spoken-form document, and INLINEFORM3 is the answer in text from. The task of this work is extractive SQA; that means INLINEFORM4 is a word span from the reference transcription of INLINEFORM5 . An overview framework of SQA is shown in Figure FIGREF1 . In this paper, we frame the source domain as reference transcriptions and the target domain as ASR hypotheses. Hence, we can collect source domain data more easily, and adapt the model to the target domain.",
"In this task, when the machine is given a spoken document, it needs to find the answer of a question from the spoken document. SQA can be solved by the concatenation of an ASR module and a question answering module. Given the ASR hypotheses of a spoken document and a question, the question answering module can output a text answer.",
"The most intuitive way to evaluate the text answer is to directly compute the Exact Match (EM) and Macro-averaged F1 scores (F1) between the predicted text answer and the ground-truth text answer. We used the standard evaluation script from SQuAD BIBREF1 to evaluate the performance."
],
[
"The used architecture of the QA model is briefly summarized below. Here we choose QANet BIBREF2 as the base model due to the following reasons: 1) it achieves the second best performance on SQuAD, and 2) since there are completely no recurrent networks in QANet, its training speed is 5x faster than BiDAF BIBREF17 when reaching the same performance on SQuAD.",
"The network architecture is illustrated in Figure FIGREF2 . The left blocks and the right blocks form two QANets, each of which takes a document and a question as the input and outputs an answer. In QANet, firstly, an embedding encoder obtains word and character embeddings for each word in INLINEFORM0 or INLINEFORM1 and then models the temporal interactions between words and refines word vectors to contextualized word representations. All encoder blocks used in QANet are composed exclusively of depth-wise separable convolutions and self-attention. The intuition here is that convolution components can model local interactions and self-attention components focus on modeling global interactions. The context-query attention layer generates the question-document similarity matrix and computes the question-aware vector representations of the context words. After that, a model encoder layer containing seven encoder blocks captures the interactions among the context words conditioned on the question. Finally, the output layer predicts a start position and an end position in the document to extract the answer span from the document."
],
[
"The main focus of this paper is to apply domain adaptation for SQA. In this approach, we have two SQA models (QANets), one trained from target domain data (ASR hypotheses) and another trained from source domain data (reference transcriptions). Because the two domains share common information, some layers in these two models can be tied in order to model the shared features. Hence, we can choose whether each layer in the QA model should be shared. Tying the weights between the source layer and the target layer in order to learn a symmetric mapping is to project both source and target domain data to a shared common space. Different combinations will be investigated in our experiments.",
"More specifically, we incorporate a domain discriminator into the SQA model shown in Figure FIGREF2 , which can enforce the embedding encoder to project the sentences from both source and target domains into a shared common space and consequentially to be ASR-error robust. Although the embedding encoder for both domains may implicitly learn some common latent representations, adversarial learning can provide a more direct training signal for aligning the output distribution of the embedding encoder from both domains. The embedding encoder takes in a sequence of word vectors and generates a sequence of hidden vectors with the same length. We use INLINEFORM0 and INLINEFORM1 ( INLINEFORM2 and INLINEFORM3 ) to represent the hidden vector sequence given the question INLINEFORM4 and the document INLINEFORM5 in the target (source) domain respectively.",
"The domain discriminator INLINEFORM0 focuses on identifying the domain of the vector sequence is from given INLINEFORM1 or INLINEFORM2 , where the objective is to minimize INLINEFORM3 . DISPLAYFORM0 ",
"Given a training example from the target domain ( INLINEFORM0 ), INLINEFORM1 learns to assign a lower score to INLINEFORM2 and INLINEFORM3 in that example, that is, to minimize INLINEFORM4 and INLINEFORM5 . On the other hand, given a training example from the source domain ( INLINEFORM6 ), INLINEFORM7 learns to assign a larger value to INLINEFORM8 and INLINEFORM9 .",
"Furthermore, we update the parameters of the embedding encoders to maximize the domain classification loss INLINEFORM0 , which works adversarially towards the domain discriminator. We thus expect the model to learn features and structures that can generalize across domains when the outputs of INLINEFORM1 are indistinguishable from the outputs of INLINEFORM2 . The loss function for embedding encoder, INLINEFORM3 , is formulated as DISPLAYFORM0 ",
"where INLINEFORM0 is a hyperparameter. The two embedding encoders in the QA model are learned to maximize INLINEFORM1 while minimizing the loss for QA, INLINEFORM2 . Because the parameters of other layers in QA model are independent to the loss of the domain discriminator, the loss function of other layers, INLINEFORM3 , is equivalent to INLINEFORM4 , that is, INLINEFORM5 .",
"Although the discriminator is applied to the output of embedding encoder in Figure FIGREF2 , it can be also applied to other layers. Considering that almost all QA model contains such embedding encoders, the proposed approach is expected to generalize to other QA models in addition to QANet."
],
[
"Spoken-SQuAD is chosen as the target domain data for training and testing. Spoken-SQuAD BIBREF5 is an automatically generated corpus in which the document is in spoken form and the question is in text form. The reference transcriptions are from SQuAD BIBREF1 . There are 37,111 and 5,351 question answer pairs in the training and testing sets respectively, and the word error rate (WER) of both sets is around 22.7%.",
"The original SQuAD, Text-SQuAD, is chosen as the source domain data, where only question answering pairs appearing in Spoken-SQuAD are utilized. In our task setting, during training we train the proposed QA model on both Text-SQuAD and Spoken-SQuAD training sets. While in the testing stage, we evaluate the performance on Spoken-SQuAD testing set."
],
[
"We utilize fasttext BIBREF18 to generate the embeddings of all words from both Text-SQuAD and Spoken-SQuAD. We adopt the phoneme sequence embeddings to replace the original character sequence embeddings using the method proposed by Li et al. BIBREF5 . The source domain model and the target domain model share the same set of word embedding matrix to improve the alignment between these two domains.",
"W-GAN is adopted for our domain discriminator BIBREF19 , which stacks 5 residual blocks of 1D convolutional layers with 96 filters and filter size 5 followed by one linear layer to convert each input vector sequence into one scalar value.",
"All models used in the experiments are trained with batch size 20, using adam with learning rate INLINEFORM0 and the early stop strategy. The dimension of the hidden state is set to 96 for all layers, and the number of self-attention heads is set to 2. The setup is slightly different but better than the setting suggested by the original QAnet."
],
[
"First, we highlight the domain mismatch phenomenon in our experiments shown in Table TABREF9 . Row (a) is when QANet is trained on Text-SQuAD, row (b) is when QANet is trained on Spoken-SQuAD, and row (c) is when QANet is trained on Text-SQuAD and then finetuned on Spoken-SQuAD. The columns show the evaluation on the testing sets of Text-SQuAD and Spoken-SQuAD.",
"It is clear that the performance drops a lot when the training and testing data mismatch, indicating that model training on ASR hypotheses can not generalize well on reference transcriptions. The performance gap is nearly 20% F1 score (72% to 55%). The row (c) shows the improved performance when testing on S-SQuAD due to the transfer learning via fine-tuning.",
"To better demonstrate the effectiveness of the proposed model, we compare with baselines and show the results in Table TABREF12 . The baselines are: (a) trained on S-SQuAD, (b) trained on T-SQuAD and then fine-tuned on S-SQuAD, and (c) previous best model trained on S-SQuAD BIBREF5 by using Dr.QA BIBREF20 . We also compare to the approach proposed by Lan et al. BIBREF16 in the row (d). This approach is originally proposed for spoken language understanding, and we adopt the same approach on the setting here. The approach models domain-specific features from the source and target domains separately by two different embedding encoders with a shared embedding encoder for modeling domain-general features. The domain-general parameters are adversarially trained by domain discriminator.",
"Row (e) is the model that the weights of all layers are tied between the source domain and the target domain. Row (f) uses the same architecture as row (e) with an additional domain discriminator applied to the embedding encoder. It can be found that row (f) outperforms row (e), indicating that the proposed domain adversarial learning is helpful. Therefore, our following experiments contain domain adversarial learning. The proposed approach (row (f)) outperforms previous best model (row (c)) by 2% EM score and over 1.5% F1 score. We also show the results of applying the domain discriminator to the top of context query attention layer in row (g), which obtains poor performance. To sum it up, incorporating adversarial learning by applying the domain discriminator on top of the embedding encoder layer is effective.",
"Layer weight tying or untying within the model indicates different levels of symmetric mapping between the source and target domains. Different combinations are investigated and shown in Table TABREF14 . The row (a) in which all layers are tied is the row (e) of Table TABREF12 . The results show that untying context-query attention layer L2 (rows (c, f, g)) or model encoder layer L3 (rows (d, f, h)) lead to degenerated solutions in comparison to row (a) where all layers are tied. Untying both of them simultaneously leads to the worst performance which is even worse than the finetuning (row (g) v.s. (c) from Table TABREF12 ). These results imply that sharing the context-query attention layer and the model encoder layer are important for domain adaptation on SQA. We conjecture that these two layers benefit from training on source domain data where there are no ASR errors, so the QA model learns to conduct attention or further reason well on target domain data with ASR errors.",
"Overall, it is not beneficial to untie any layer, because no information can be shared across different domains. Untying the embedding encoder L1 and the output layer L4 leads to the least degradation in comparison to row (a)."
],
[
"In this work, we incorporate a domain discriminator to align the mismatched domains between ASR hypotheses and reference transcriptions. The adversarial learning allows the end-to-end QA model to learn domain-invariant features and improve the robustness to ASR errors. The experiments demonstrate that the proposed model successfully achieves superior performance and outperforms the previous best model by 2% EM score and over 1.5% F1 score. "
]
],
"section_name": [
"Introduction",
"Spoken Question Answering",
"Question Answering Model",
"Domain Adaptation Approach",
"Corpus",
"Experiment Setup",
"Results",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"9ff983e6b2f3f849ecf3878dbd059716d01c4cee",
"b3bd601884591291c5b1f6605cb8eb90cb736433"
],
"answer": [
{
"evidence": [
"The most intuitive way to evaluate the text answer is to directly compute the Exact Match (EM) and Macro-averaged F1 scores (F1) between the predicted text answer and the ground-truth text answer. We used the standard evaluation script from SQuAD BIBREF1 to evaluate the performance."
],
"extractive_spans": [
"Exact Match (EM)",
"Macro-averaged F1 scores (F1)"
],
"free_form_answer": "",
"highlighted_evidence": [
"The most intuitive way to evaluate the text answer is to directly compute the Exact Match (EM) and Macro-averaged F1 scores (F1) between the predicted text answer and the ground-truth text answer."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The most intuitive way to evaluate the text answer is to directly compute the Exact Match (EM) and Macro-averaged F1 scores (F1) between the predicted text answer and the ground-truth text answer. We used the standard evaluation script from SQuAD BIBREF1 to evaluate the performance."
],
"extractive_spans": [
"Exact Match (EM) and Macro-averaged F1 scores (F1) "
],
"free_form_answer": "",
"highlighted_evidence": [
"The most intuitive way to evaluate the text answer is to directly compute the Exact Match (EM) and Macro-averaged F1 scores (F1) between the predicted text answer and the ground-truth text answer."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"5c65d3bb49ceec5dcc0f2ada4a6a1cf1ce372612",
"ab2e9157cd19a6eb2dd7e4fc998c7c55a3ef5aaf"
],
"answer": [
{
"evidence": [
"To better demonstrate the effectiveness of the proposed model, we compare with baselines and show the results in Table TABREF12 . The baselines are: (a) trained on S-SQuAD, (b) trained on T-SQuAD and then fine-tuned on S-SQuAD, and (c) previous best model trained on S-SQuAD BIBREF5 by using Dr.QA BIBREF20 . We also compare to the approach proposed by Lan et al. BIBREF16 in the row (d). This approach is originally proposed for spoken language understanding, and we adopt the same approach on the setting here. The approach models domain-specific features from the source and target domains separately by two different embedding encoders with a shared embedding encoder for modeling domain-general features. The domain-general parameters are adversarially trained by domain discriminator."
],
"extractive_spans": [],
"free_form_answer": "Best results authors obtain is EM 51.10 and F1 63.11",
"highlighted_evidence": [
"To better demonstrate the effectiveness of the proposed model, we compare with baselines and show the results in Table TABREF12 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 2. The EM/F1 scores of proposed adversarial domain adaptation approaches over Spoken-SQuAD."
],
"extractive_spans": [],
"free_form_answer": "EM Score of 51.10",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2. The EM/F1 scores of proposed adversarial domain adaptation approaches over Spoken-SQuAD."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"b020f0cbbbef076ce076f4caaab6c9952dabef5d",
"b287395d112c334ed19021e1222e9ab2ab16d4a9"
],
"answer": [
{
"evidence": [
"To better demonstrate the effectiveness of the proposed model, we compare with baselines and show the results in Table TABREF12 . The baselines are: (a) trained on S-SQuAD, (b) trained on T-SQuAD and then fine-tuned on S-SQuAD, and (c) previous best model trained on S-SQuAD BIBREF5 by using Dr.QA BIBREF20 . We also compare to the approach proposed by Lan et al. BIBREF16 in the row (d). This approach is originally proposed for spoken language understanding, and we adopt the same approach on the setting here. The approach models domain-specific features from the source and target domains separately by two different embedding encoders with a shared embedding encoder for modeling domain-general features. The domain-general parameters are adversarially trained by domain discriminator.",
"Row (e) is the model that the weights of all layers are tied between the source domain and the target domain. Row (f) uses the same architecture as row (e) with an additional domain discriminator applied to the embedding encoder. It can be found that row (f) outperforms row (e), indicating that the proposed domain adversarial learning is helpful. Therefore, our following experiments contain domain adversarial learning. The proposed approach (row (f)) outperforms previous best model (row (c)) by 2% EM score and over 1.5% F1 score. We also show the results of applying the domain discriminator to the top of context query attention layer in row (g), which obtains poor performance. To sum it up, incorporating adversarial learning by applying the domain discriminator on top of the embedding encoder layer is effective."
],
"extractive_spans": [
"(c) previous best model trained on S-SQuAD BIBREF5 by using Dr.QA BIBREF20 "
],
"free_form_answer": "",
"highlighted_evidence": [
"The baselines are: (a) trained on S-SQuAD, (b) trained on T-SQuAD and then fine-tuned on S-SQuAD, and (c) previous best model trained on S-SQuAD BIBREF5 by using Dr.QA BIBREF20 .",
"The proposed approach (row (f)) outperforms previous best model (row (c)) by 2% EM score and over 1.5% F1 score."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"7e9359537875c771b04c83acb5a603e1aeea7acc",
"9b16bb0ea4668d64a4aae8b9da9d25da1bb58f6b"
],
"answer": [
{
"evidence": [
"Spoken-SQuAD is chosen as the target domain data for training and testing. Spoken-SQuAD BIBREF5 is an automatically generated corpus in which the document is in spoken form and the question is in text form. The reference transcriptions are from SQuAD BIBREF1 . There are 37,111 and 5,351 question answer pairs in the training and testing sets respectively, and the word error rate (WER) of both sets is around 22.7%.",
"The original SQuAD, Text-SQuAD, is chosen as the source domain data, where only question answering pairs appearing in Spoken-SQuAD are utilized. In our task setting, during training we train the proposed QA model on both Text-SQuAD and Spoken-SQuAD training sets. While in the testing stage, we evaluate the performance on Spoken-SQuAD testing set."
],
"extractive_spans": [
"Spoken-SQuAD testing set"
],
"free_form_answer": "",
"highlighted_evidence": [
"Spoken-SQuAD is chosen as the target domain data for training and testing. Spoken-SQuAD BIBREF5 is an automatically generated corpus in which the document is in spoken form and the question is in text form. The reference transcriptions are from SQuAD BIBREF1 . There are 37,111 and 5,351 question answer pairs in the training and testing sets respectively, and the word error rate (WER) of both sets is around 22.7%.\n\nThe original SQuAD, Text-SQuAD, is chosen as the source domain data, where only question answering pairs appearing in Spoken-SQuAD are utilized. In our task setting, during training we train the proposed QA model on both Text-SQuAD and Spoken-SQuAD training sets. While in the testing stage, we evaluate the performance on Spoken-SQuAD testing set."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Spoken-SQuAD is chosen as the target domain data for training and testing. Spoken-SQuAD BIBREF5 is an automatically generated corpus in which the document is in spoken form and the question is in text form. The reference transcriptions are from SQuAD BIBREF1 . There are 37,111 and 5,351 question answer pairs in the training and testing sets respectively, and the word error rate (WER) of both sets is around 22.7%.",
"The original SQuAD, Text-SQuAD, is chosen as the source domain data, where only question answering pairs appearing in Spoken-SQuAD are utilized. In our task setting, during training we train the proposed QA model on both Text-SQuAD and Spoken-SQuAD training sets. While in the testing stage, we evaluate the performance on Spoken-SQuAD testing set."
],
"extractive_spans": [
"Spoken-SQuAD"
],
"free_form_answer": "",
"highlighted_evidence": [
"Spoken-SQuAD is chosen as the target domain data for training and testing.",
"While in the testing stage, we evaluate the performance on Spoken-SQuAD testing set."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"What evaluation metrics were used?",
"What was the score of the proposed model?",
"What was the previous best model?",
"Which datasets did they use for evaluation?"
],
"question_id": [
"9b97805a0c093df405391a85e4d3ab447671c86a",
"38f58f13c7f23442d5952c8caf126073a477bac0",
"7ee5c45b127fb284a4a9e72bb9b980a602f7445a",
"ddf5e1f600b9ce2e8f63213982ef4209bab01fd8"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Fig. 1. Flow diagram of the SQA system.",
"Fig. 2. The overall architecture of the proposed QA model with a domain discriminator. Each layer can be tied or untied between the source and target models.",
"Table 1. Illustration of domain mismatch, where the models are trained on the source domain (Text-SQuAD; T-SQuAD) or the target domain (Spoken-SQuAD; S-SQuAD) and then evaluated on both source and target domains.",
"Table 3. Investigation of different layer tying mechanisms, where Xmeans that weights of the layer are tied between the source model and the target model. (L1: embedding encoder, L2: context query attention layer, L3: model encoder layer, L4: output layer.)",
"Table 2. The EM/F1 scores of proposed adversarial domain adaptation approaches over Spoken-SQuAD."
],
"file": [
"2-Figure1-1.png",
"2-Figure2-1.png",
"3-Table1-1.png",
"4-Table3-1.png",
"4-Table2-1.png"
]
} | [
"What was the score of the proposed model?"
] | [
[
"1904.07904-Results-2",
"1904.07904-4-Table2-1.png"
]
] | [
"EM Score of 51.10"
] | 77 |
2003.11645 | Word2Vec: Optimal Hyper-Parameters and Their Impact on NLP Downstream Tasks | Word2Vec is a prominent tool for Natural Language Processing (NLP) tasks. Similar inspiration is found in distributed embeddings for state-of-the-art (sota) deep neural networks. However, wrong combination of hyper-parameters can produce poor quality vectors. The objective of this work is to show optimal combination of hyper-parameters exists and evaluate various combinations. We compare them with the original model released by Mikolov. Both intrinsic and extrinsic (downstream) evaluations, including Named Entity Recognition (NER) and Sentiment Analysis (SA) were carried out. The downstream tasks reveal that the best model is task-specific, high analogy scores don't necessarily correlate positively with F1 scores and the same applies for more data. Increasing vector dimension size after a point leads to poor quality or performance. If ethical considerations to save time, energy and the environment are made, then reasonably smaller corpora may do just as well or even better in some cases. Besides, using a small corpus, we obtain better human-assigned WordSim scores, corresponding Spearman correlation and better downstream (NER&SA) performance compared to Mikolov's model, trained on 100 billion word corpus. | {
"paragraphs": [
[
"There have been many implementations of the word2vec model in either of the two architectures it provides: continuous skipgram and CBoW (BIBREF0). Similar distributed models of word or subword embeddings (or vector representations) find usage in sota, deep neural networks like BERT and its successors (BIBREF1, BIBREF2, BIBREF3). These deep networks generate contextual representations of words after been trained for extended periods on large corpora, unsupervised, using the attention mechanisms (BIBREF4).",
"It has been observed that various hyper-parameter combinations have been used in different research involving word2vec with the possibility of many of them being sub-optimal (BIBREF5, BIBREF6, BIBREF7). Therefore, the authors seek to address the research question: what is the optimal combination of word2vec hyper-parameters for intrinsic and extrinsic NLP purposes? There are astronomically high numbers of combinations of hyper-parameters possible for neural networks, even with just a few layers. Hence, the scope of our extensive work over three corpora is on dimension size, training epochs, window size and vocabulary size for the training algorithms (hierarchical softmax and negative sampling) of both skipgram and CBoW. The corpora used for word embeddings are English Wiki News Abstract by BIBREF8 of about 15MB, English Wiki Simple (SW) Articles by BIBREF9 of about 711MB and the Billion Word (BW) of 3.9GB by BIBREF10. The corpus used for sentiment analysis is the IMDb dataset of movie reviews by BIBREF11 while that for NER is Groningen Meaning Bank (GMB) by BIBREF12, containing 47,959 sentence samples. The IMDb dataset used has a total of 25,000 sentences with half being positive sentiments and the other half being negative sentiments. The GMB dataset has 17 labels, with 9 main labels and 2 context tags. It is however unbalanced due to the high percentage of tokens with the label 'O'. This skew in the GMB dataset is typical with NER datasets.",
"The objective of this work is to determine the optimal combinations of word2vec hyper-parameters for intrinsic evaluation (semantic and syntactic analogies) and extrinsic evaluation tasks (BIBREF13, BIBREF14), like SA and NER. It is not our objective in this work to record sota results. Some of the main contributions of this research are the empirical establishment of optimal combinations of word2vec hyper-parameters for NLP tasks, discovering the behaviour of quality of vectors viz-a-viz increasing dimensions and the confirmation of embeddings being task-specific for the downstream. The rest of this paper is organised as follows: the literature review that briefly surveys distributed representation of words, particularly word2vec; the methodology employed in this research work; the results obtained and the conclusion."
],
[
"Breaking away from the non-distributed (high-dimensional, sparse) representations of words, typical of traditional bag-of-words or one-hot-encoding (BIBREF15), BIBREF0 created word2vec. Word2Vec consists of two shallow neural network architectures: continuous skipgram and CBoW. It uses distributed (low-dimensional, dense) representations of words that group similar words. This new model traded the complexity of deep neural network architectures, by other researchers, for more efficient training over large corpora. Its architectures have two training algorithms: negative sampling and hierarchical softmax (BIBREF16). The released model was trained on Google news dataset of 100 billion words. Implementations of the model have been undertaken by researchers in the programming languages Python and C++, though the original was done in C (BIBREF17).",
"Continuous skipgram predicts (by maximizing classification of) words before and after the center word, for a given range. Since distant words are less connected to a center word in a sentence, less weight is assigned to such distant words in training. CBoW, on the other hand, uses words from the history and future in a sequence, with the objective of correctly classifying the target word in the middle. It works by projecting all history or future words within a chosen window into the same position, averaging their vectors. Hence, the order of words in the history or future does not influence the averaged vector. This is similar to the traditional bag-of-words, which is oblivious of the order of words in its sequence. A log-linear classifier is used in both architectures (BIBREF0). In further work, they extended the model to be able to do phrase representations and subsample frequent words (BIBREF16). Being a NNLM, word2vec assigns probabilities to words in a sequence, like other NNLMs such as feedforward networks or recurrent neural networks (BIBREF15). Earlier models like latent dirichlet allocation (LDA) and latent semantic analysis (LSA) exist and effectively achieve low dimensional vectors by matrix factorization (BIBREF18, BIBREF19).",
"It's been shown that word vectors are beneficial for NLP tasks (BIBREF15), such as sentiment analysis and named entity recognition. Besides, BIBREF0 showed with vector space algebra that relationships among words can be evaluated, expressing the quality of vectors produced from the model. The famous, semantic example: vector(\"King\") - vector(\"Man\") + vector(\"Woman\") $\\approx $ vector(\"Queen\") can be verified using cosine distance. Another type of semantic meaning is the relationship between a capital city and its corresponding country. Syntactic relationship examples include plural verbs and past tense, among others. Combination of both syntactic and semantic analyses is possible and provided (totaling over 19,000 questions) as Google analogy test set by BIBREF0. WordSimilarity-353 test set is another analysis tool for word vectors (BIBREF20). Unlike Google analogy score, which is based on vector space algebra, WordSimilarity is based on human expert-assigned semantic similarity on two sets of English word pairs. Both tools rank from 0 (totally dissimilar) to 1 (very much similar or exact, in Google analogy case).",
"A typical artificial neural network (ANN) has very many hyper-parameters which may be tuned. Hyper-parameters are values which may be manually adjusted and include vector dimension size, type of algorithm and learning rate (BIBREF19). BIBREF0 tried various hyper-parameters with both architectures of their model, ranging from 50 to 1,000 dimensions, 30,000 to 3,000,000 vocabulary sizes, 1 to 3 epochs, among others. In our work, we extended research to 3,000 dimensions. Different observations were noted from the many trials. They observed diminishing returns after a certain point, despite additional dimensions or larger, unstructured training data. However, quality increased when both dimensions and data size were increased together. Although BIBREF16 pointed out that choice of optimal hyper-parameter configurations depends on the NLP problem at hand, they identified the most important factors are architecture, dimension size, subsampling rate, and the window size. In addition, it has been observed that variables like size of datasets improve the quality of word vectors and, potentially, performance on downstream tasks (BIBREF21, BIBREF0)."
],
[
"The models were generated in a shared cluster running Ubuntu 16 with 32 CPUs of 32x Intel Xeon 4110 at 2.1GHz. Gensim (BIBREF17) python library implementation of word2vec was used with parallelization to utilize all 32 CPUs. The downstream experiments were run on a Tesla GPU on a shared DGX cluster running Ubuntu 18. Pytorch deep learning framework was used. Gensim was chosen because of its relative stability, popular support and to minimize the time required in writing and testing a new implementation in python from scratch.",
"To form the vocabulary, words occurring less than 5 times in the corpora were dropped, stop words removed using the natural language toolkit (NLTK) (BIBREF22) and data pre-processing carried out. Table TABREF2 describes most hyper-parameters explored for each dataset. In all, 80 runs (of about 160 minutes) were conducted for the 15MB Wiki Abstract dataset with 80 serialized models totaling 15.136GB while 80 runs (for over 320 hours) were conducted for the 711MB SW dataset, with 80 serialized models totaling over 145GB. Experiments for all combinations for 300 dimensions were conducted on the 3.9GB training set of the BW corpus and additional runs for other dimensions for the window 8 + skipgram + heirarchical softmax combination to verify the trend of quality of word vectors as dimensions are increased.",
"Google (semantic and syntactic) analogy tests and WordSimilarity-353 (with Spearman correlation) by BIBREF20 were chosen for intrinsic evaluations. They measure the quality of word vectors. The analogy scores are averages of both semantic and syntactic tests. NER and SA were chosen for extrinsic evaluations. The GMB dataset for NER was trained in an LSTM network, which had an embedding layer for input. The network diagram is shown in fig. FIGREF4. The IMDb dataset for SA was trained in a BiLSTM network, which also used an embedding layer for input. Its network diagram is given in fig. FIGREF4. It includes an additional hidden linear layer. Hyper-parameter details of the two networks for the downstream tasks are given in table TABREF3. The metrics for extrinsic evaluation include F1, precision, recall and accuracy scores. In both tasks, the default pytorch embedding was tested before being replaced by pre-trained embeddings released by BIBREF0 and ours. In each case, the dataset was shuffled before training and split in the ratio 70:15:15 for training, validation (dev) and test sets. Batch size of 64 was used. For each task, experiments for each embedding was conducted four times and an average value calculated and reported in the next section"
],
[
"Table TABREF5 summarizes key results from the intrinsic evaluations for 300 dimensions. Table TABREF6 reveals the training time (in hours) and average embedding loading time (in seconds) representative of the various models used. Tables TABREF11 and TABREF12 summarize key results for the extrinsic evaluations. Figures FIGREF7, FIGREF9, FIGREF10, FIGREF13 and FIGREF14 present line graph of the eight combinations for different dimension sizes for Simple Wiki, trend of Simple Wiki and Billion Word corpora over several dimension sizes, analogy score comparison for models across datasets, NER mean F1 scores on the GMB dataset and SA mean F1 scores on the IMDb dataset, respectively. Combination of the skipgram using hierarchical softmax and window size of 8 for 300 dimensions outperformed others in analogy scores for the Wiki Abstract. However, its results are so poor, because of the tiny file size, they're not worth reporting here. Hence, we'll focus on results from the Simple Wiki and Billion Word corpora.",
"Best combination changes when corpus size increases, as will be noticed from table TABREF5. In terms of analogy score, for 10 epochs, w8s0h0 performs best while w8s1h0 performs best in terms of WordSim and corresponding Spearman correlation. Meanwhile, increasing the corpus size to BW, w4s1h0 performs best in terms of analogy score while w8s1h0 maintains its position as the best in terms of WordSim and Spearman correlation. Besides considering quality metrics, it can be observed from table TABREF6 that comparative ratio of values between the models is not commensurate with the results in intrinsic or extrinsic values, especially when we consider the amount of time and energy spent, since more training time results in more energy consumption (BIBREF23).",
"Information on the length of training time for the released Mikolov model is not readily available. However, it's interesting to note that their presumed best model, which was released is also s1h0. Its analogy score, which we tested and report, is confirmed in their paper. It beats our best models in only analogy score (even for Simple Wiki), performing worse in others. This is inspite of using a much bigger corpus of 3,000,000 vocabulary size and 100 billion words while Simple Wiki had vocabulary size of 367,811 and is 711MB. It is very likely our analogy scores will improve when we use a much larger corpus, as can be observed from table TABREF5, which involves just one billion words.",
"Although the two best combinations in analogy (w8s0h0 & w4s0h0) for SW, as shown in fig. FIGREF7, decreased only slightly compared to others with increasing dimensions, the increased training time and much larger serialized model size render any possible minimal score advantage over higher dimensions undesirable. As can be observed in fig. FIGREF9, from 100 dimensions, scores improve but start to drop after over 300 dimensions for SW and after over 400 dimensions for BW. More becomes worse! This trend is true for all combinations for all tests. Polynomial interpolation may be used to determine the optimal dimension in both corpora. Our models are available for confirmation and source codes are available on github.",
"With regards to NER, most pretrained embeddings outperformed the default pytorch embedding, with our BW w4s1h0 model (which is best in BW analogy score) performing best in F1 score and closely followed by BIBREF0 model. On the other hand, with regards to SA, pytorch embedding outperformed the pretrained embeddings but was closely followed by our SW w8s0h0 model (which also had the best SW analogy score). BIBREF0 performed second worst of all, despite originating from a very huge corpus. The combinations w8s0h0 & w4s0h0 of SW performed reasonably well in both extrinsic tasks, just as the default pytorch embedding did."
],
[
"This work analyses, empirically, optimal combinations of hyper-parameters for embeddings, specifically for word2vec. It further shows that for downstream tasks, like NER and SA, there's no silver bullet! However, some combinations show strong performance across tasks. Performance of embeddings is task-specific and high analogy scores do not necessarily correlate positively with performance on downstream tasks. This point on correlation is somewhat similar to results by BIBREF24 and BIBREF14. It was discovered that increasing dimension size depreciates performance after a point. If strong considerations of saving time, energy and the environment are made, then reasonably smaller corpora may suffice or even be better in some cases. The on-going drive by many researchers to use ever-growing data to train deep neural networks can benefit from the findings of this work. Indeed, hyper-parameter choices are very important in neural network systems (BIBREF19).",
"Future work that may be investigated are performance of other architectures of word or sub-word embeddings, the performance and comparison of embeddings applied to languages other than English and how embeddings perform in other downstream tasks. In addition, since the actual reason for the changes in best model as corpus size increases is not clear, this will also be suitable for further research.",
"The work on this project is partially funded by Vinnova under the project number 2019-02996 \"Språkmodeller för svenska myndigheter\""
],
[
""
]
],
"section_name": [
"Introduction",
"Literature Review",
"Methodology",
"Results and Discussion",
"Conclusion",
"Acronyms"
]
} | {
"answers": [
{
"annotation_id": [
"cd92306f733a91cd09d573716d02001a4d363269",
"fbe2b7c883e88eb2b97b93468cadb0c9899f4df5"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Hyper-parameter choices",
"FLOAT SELECTED: Table 2: Network hyper-parameters"
],
"extractive_spans": [],
"free_form_answer": "Dimension size, window size, architecture, algorithm, epochs, hidden dimension size, learning rate, loss function, optimizer algorithm.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Hyper-parameter choices",
"FLOAT SELECTED: Table 2: Network hyper-parameters"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To form the vocabulary, words occurring less than 5 times in the corpora were dropped, stop words removed using the natural language toolkit (NLTK) (BIBREF22) and data pre-processing carried out. Table TABREF2 describes most hyper-parameters explored for each dataset. In all, 80 runs (of about 160 minutes) were conducted for the 15MB Wiki Abstract dataset with 80 serialized models totaling 15.136GB while 80 runs (for over 320 hours) were conducted for the 711MB SW dataset, with 80 serialized models totaling over 145GB. Experiments for all combinations for 300 dimensions were conducted on the 3.9GB training set of the BW corpus and additional runs for other dimensions for the window 8 + skipgram + heirarchical softmax combination to verify the trend of quality of word vectors as dimensions are increased.",
"FLOAT SELECTED: Table 1: Hyper-parameter choices"
],
"extractive_spans": [],
"free_form_answer": "Hyperparameters explored were: dimension size, window size, architecture, algorithm and epochs.",
"highlighted_evidence": [
"Table TABREF2 describes most hyper-parameters explored for each dataset.",
"FLOAT SELECTED: Table 1: Hyper-parameter choices"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"5d0cd393d4647c690871d8ba312f17b694e9f454",
"e34e62bea37b30b661a6c340b3819319260643fa"
],
"answer": [
{
"evidence": [
"It has been observed that various hyper-parameter combinations have been used in different research involving word2vec with the possibility of many of them being sub-optimal (BIBREF5, BIBREF6, BIBREF7). Therefore, the authors seek to address the research question: what is the optimal combination of word2vec hyper-parameters for intrinsic and extrinsic NLP purposes? There are astronomically high numbers of combinations of hyper-parameters possible for neural networks, even with just a few layers. Hence, the scope of our extensive work over three corpora is on dimension size, training epochs, window size and vocabulary size for the training algorithms (hierarchical softmax and negative sampling) of both skipgram and CBoW. The corpora used for word embeddings are English Wiki News Abstract by BIBREF8 of about 15MB, English Wiki Simple (SW) Articles by BIBREF9 of about 711MB and the Billion Word (BW) of 3.9GB by BIBREF10. The corpus used for sentiment analysis is the IMDb dataset of movie reviews by BIBREF11 while that for NER is Groningen Meaning Bank (GMB) by BIBREF12, containing 47,959 sentence samples. The IMDb dataset used has a total of 25,000 sentences with half being positive sentiments and the other half being negative sentiments. The GMB dataset has 17 labels, with 9 main labels and 2 context tags. It is however unbalanced due to the high percentage of tokens with the label 'O'. This skew in the GMB dataset is typical with NER datasets."
],
"extractive_spans": [
"Groningen Meaning Bank"
],
"free_form_answer": "",
"highlighted_evidence": [
"The corpus used for sentiment analysis is the IMDb dataset of movie reviews by BIBREF11 while that for NER is Groningen Meaning Bank (GMB) by BIBREF12, containing 47,959 sentence samples. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"It has been observed that various hyper-parameter combinations have been used in different research involving word2vec with the possibility of many of them being sub-optimal (BIBREF5, BIBREF6, BIBREF7). Therefore, the authors seek to address the research question: what is the optimal combination of word2vec hyper-parameters for intrinsic and extrinsic NLP purposes? There are astronomically high numbers of combinations of hyper-parameters possible for neural networks, even with just a few layers. Hence, the scope of our extensive work over three corpora is on dimension size, training epochs, window size and vocabulary size for the training algorithms (hierarchical softmax and negative sampling) of both skipgram and CBoW. The corpora used for word embeddings are English Wiki News Abstract by BIBREF8 of about 15MB, English Wiki Simple (SW) Articles by BIBREF9 of about 711MB and the Billion Word (BW) of 3.9GB by BIBREF10. The corpus used for sentiment analysis is the IMDb dataset of movie reviews by BIBREF11 while that for NER is Groningen Meaning Bank (GMB) by BIBREF12, containing 47,959 sentence samples. The IMDb dataset used has a total of 25,000 sentences with half being positive sentiments and the other half being negative sentiments. The GMB dataset has 17 labels, with 9 main labels and 2 context tags. It is however unbalanced due to the high percentage of tokens with the label 'O'. This skew in the GMB dataset is typical with NER datasets."
],
"extractive_spans": [
"Groningen Meaning Bank (GMB)"
],
"free_form_answer": "",
"highlighted_evidence": [
"The corpus used for sentiment analysis is the IMDb dataset of movie reviews by BIBREF11 while that for NER is Groningen Meaning Bank (GMB) by BIBREF12, containing 47,959 sentence samples."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"731d47ad5d0ea44a71ae361043d09f49426e8fdc",
"bb46c034dda431ee80ecd2897f41a3a0a2e6eae2"
],
"answer": [
{
"evidence": [
"It has been observed that various hyper-parameter combinations have been used in different research involving word2vec with the possibility of many of them being sub-optimal (BIBREF5, BIBREF6, BIBREF7). Therefore, the authors seek to address the research question: what is the optimal combination of word2vec hyper-parameters for intrinsic and extrinsic NLP purposes? There are astronomically high numbers of combinations of hyper-parameters possible for neural networks, even with just a few layers. Hence, the scope of our extensive work over three corpora is on dimension size, training epochs, window size and vocabulary size for the training algorithms (hierarchical softmax and negative sampling) of both skipgram and CBoW. The corpora used for word embeddings are English Wiki News Abstract by BIBREF8 of about 15MB, English Wiki Simple (SW) Articles by BIBREF9 of about 711MB and the Billion Word (BW) of 3.9GB by BIBREF10. The corpus used for sentiment analysis is the IMDb dataset of movie reviews by BIBREF11 while that for NER is Groningen Meaning Bank (GMB) by BIBREF12, containing 47,959 sentence samples. The IMDb dataset used has a total of 25,000 sentences with half being positive sentiments and the other half being negative sentiments. The GMB dataset has 17 labels, with 9 main labels and 2 context tags. It is however unbalanced due to the high percentage of tokens with the label 'O'. This skew in the GMB dataset is typical with NER datasets."
],
"extractive_spans": [
"IMDb dataset of movie reviews"
],
"free_form_answer": "",
"highlighted_evidence": [
"The corpus used for sentiment analysis is the IMDb dataset of movie reviews by BIBREF11 while that for NER is Groningen Meaning Bank (GMB) by BIBREF12, containing 47,959 sentence samples."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"It has been observed that various hyper-parameter combinations have been used in different research involving word2vec with the possibility of many of them being sub-optimal (BIBREF5, BIBREF6, BIBREF7). Therefore, the authors seek to address the research question: what is the optimal combination of word2vec hyper-parameters for intrinsic and extrinsic NLP purposes? There are astronomically high numbers of combinations of hyper-parameters possible for neural networks, even with just a few layers. Hence, the scope of our extensive work over three corpora is on dimension size, training epochs, window size and vocabulary size for the training algorithms (hierarchical softmax and negative sampling) of both skipgram and CBoW. The corpora used for word embeddings are English Wiki News Abstract by BIBREF8 of about 15MB, English Wiki Simple (SW) Articles by BIBREF9 of about 711MB and the Billion Word (BW) of 3.9GB by BIBREF10. The corpus used for sentiment analysis is the IMDb dataset of movie reviews by BIBREF11 while that for NER is Groningen Meaning Bank (GMB) by BIBREF12, containing 47,959 sentence samples. The IMDb dataset used has a total of 25,000 sentences with half being positive sentiments and the other half being negative sentiments. The GMB dataset has 17 labels, with 9 main labels and 2 context tags. It is however unbalanced due to the high percentage of tokens with the label 'O'. This skew in the GMB dataset is typical with NER datasets."
],
"extractive_spans": [
"IMDb"
],
"free_form_answer": "",
"highlighted_evidence": [
"The corpus used for sentiment analysis is the IMDb dataset of movie reviews by BIBREF11 while that for NER is Groningen Meaning Bank (GMB) by BIBREF12, containing 47,959 sentence samples. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"8e6226c7d2522374fa6dc5c0c50e9c381e12886b",
"cf8ca43c8655ee8de23b5cc92effdbf5067d8b47"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Hyper-parameter choices"
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Hyper-parameter choices"
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"FLOAT SELECTED: Table 1: Hyper-parameter choices",
"To form the vocabulary, words occurring less than 5 times in the corpora were dropped, stop words removed using the natural language toolkit (NLTK) (BIBREF22) and data pre-processing carried out. Table TABREF2 describes most hyper-parameters explored for each dataset. In all, 80 runs (of about 160 minutes) were conducted for the 15MB Wiki Abstract dataset with 80 serialized models totaling 15.136GB while 80 runs (for over 320 hours) were conducted for the 711MB SW dataset, with 80 serialized models totaling over 145GB. Experiments for all combinations for 300 dimensions were conducted on the 3.9GB training set of the BW corpus and additional runs for other dimensions for the window 8 + skipgram + heirarchical softmax combination to verify the trend of quality of word vectors as dimensions are increased."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Hyper-parameter choices",
"Table TABREF2 describes most hyper-parameters explored for each dataset."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"What hyperparameters are explored?",
"What Named Entity Recognition dataset is used?",
"What sentiment analysis dataset is used?",
"Do they test both skipgram and c-bow?"
],
"question_id": [
"27275fe9f6a9004639f9ac33c3a5767fea388a98",
"ef3567ce7301b28e34377e7b62c4ec9b496f00bf",
"7595260c5747aede0b32b7414e13899869209506",
"c2d1387e08cf25cb6b1f482178cca58030e85b70"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"word2vec",
"word2vec",
"word2vec",
"word2vec"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: Hyper-parameter choices",
"Table 2: Network hyper-parameters",
"Figure 1: Network architecture for NER Figure 2: Network architecture for SA",
"Table 3: Analogy scores for 300 dimensions for 10 epochs for SW, BW corpora & Mikolov.",
"Table 4: Training time & embedding loading time for models w8s1h0, w8s1h1 & Mikolov",
"Figure 3: Simple Wiki: Analogy Scores for 10 Epochs",
"Figure 5: Comparison of 300 dimension models for 10 epochs for SW & BW corpora",
"Table 5: NER Dev and Test sets Mean Results",
"Figure 6: Named Entity Recognition (NER) Mean F1 Scores on GMB Dataset",
"Figure 7: Sentiment Analysis Mean F1 Scores on IMDB Dataset",
"Table 6: Sentiment Analysis Dev and Test sets Mean Results"
],
"file": [
"4-Table1-1.png",
"5-Table2-1.png",
"5-Figure1-1.png",
"6-Table3-1.png",
"6-Table4-1.png",
"7-Figure3-1.png",
"8-Figure5-1.png",
"8-Table5-1.png",
"9-Figure6-1.png",
"9-Figure7-1.png",
"10-Table6-1.png"
]
} | [
"What hyperparameters are explored?"
] | [
[
"2003.11645-4-Table1-1.png",
"2003.11645-5-Table2-1.png",
"2003.11645-Methodology-1"
]
] | [
"Hyperparameters explored were: dimension size, window size, architecture, algorithm and epochs."
] | 78 |
1604.03114 | Conversational flow in Oxford-style debates | Public debates are a common platform for presenting and juxtaposing diverging views on important issues. In this work we propose a methodology for tracking how ideas flow between participants throughout a debate. We use this approach in a case study of Oxford-style debates---a competitive format where the winner is determined by audience votes---and show how the outcome of a debate depends on aspects of conversational flow. In particular, we find that winners tend to make better use of a debate's interactive component than losers, by actively pursuing their opponents' points rather than promoting their own ideas over the course of the conversation. | {
"paragraphs": [
[
"Public debates are a common platform for presenting and juxtaposing diverging viewpoints As opposed to monologues where speakers are limited to expressing their own beliefs, debates allow for participants to interactively attack their opponents' points while defending their own. The resulting flow of ideas is a key feature of this conversation genre.",
"In this work we introduce a computational framework for characterizing debates in terms of conversational flow. This framework captures two main debating strategies—promoting one's own points and attacking the opponents' points—and tracks their relative usage throughout the debate. By applying this methodology to a setting where debate winners are known, we show that conversational flow patterns are predictive of which debater is more likely to persuade an audience.",
"Case study: Oxford-style debates. Oxford-style debates provide a setting that is particularly convenient for studying the effects of conversational flow. In this competitive debate format, two teams argue for or against a preset motion in order to persuade a live audience to take their position. The audience votes before and after the debate, and the winning team is the one that sways more of the audience towards its view. This setup allows us to focus on the effects of conversational flow since it disentangles them from the audience's prior leaning.",
"The debate format involves an opening statement from the two sides, which presents an overview of their arguments before the discussion begins. This allows us to easily identify talking points held by the participants prior to the interaction, and consider them separately from points introduced spontaneously to serve the discussion.",
"This work is taking steps towards better modeling of conversational dynamics, by: (i) introducing a debate dataset with rich metadata (Section SECREF2 ), (ii) proposing a framework for tracking the flow of ideas (Section SECREF3 ), and (iii) showing its effectiveness in a predictive setting (Section SECREF4 )."
],
[
"In this study we use transcripts and results of Oxford-style debates from the public debate series “Intelligence Squared Debates” (IQ2 for short). These debates are recorded live, and contain motions covering a diversity of topics ranging from foreign policy issues to the benefits of organic food. Each debate consists of two opposing teams—one for the motion and one against—of two or three experts in the topic of the particular motion, along with a moderator. Each debate follows the Oxford-style format and consists of three rounds. In the introduction, each debater is given 7 minutes to lay out their main points. During the discussion, debaters take questions from the moderator and audience, and respond to attacks from the other team. This round lasts around 30 minutes and is highly interactive; teams frequently engage in direct conversation with each other. Finally, in the conclusion, each debater is given 2 minutes to make final remarks.",
"Our dataset consists of the transcripts of all debates held by IQ2 in the US from September 2006 up to September 2015; in total, there are 108 debates. Each debate is quite extensive: on average, 12801 words are uttered in 117 turns by members of either side per debate.",
"Winning side labels. We follow IQ2's criteria for deciding who wins a debate, as follows. Before the debate, the live audience votes on whether they are for, against, or undecided on the motion. A second round of voting occurs after the debate. A side wins the debate if the difference between the percentage of votes they receive post- and pre-debate (the “delta”) is greater than that of the other side's. Often the debates are quite tight: for 30% of the debates, the difference between the winning and losing sides' deltas is less than 10%.",
"Audience feedback. We check that the voting results are meaningful by verifying that audience reactions to the debaters are related to debate outcome. Using laughter and applause received by each side in each round as markers of positive reactions, we note that differences in audience reception of the two sides emerge over the course of the debate. While both sides get similar levels of reaction during the introduction, winning teams tend to receive more laughter during the discussion ( INLINEFORM0 ) and more applause during the conclusion ( INLINEFORM2 ).",
"Example debate. We will use a debate over the motion “Millennials don't stand a chance” (henceforth Millennials) as a running example. The For side won the debate with a delta of 20% of the votes, compared to the Against side which only gained 5%."
],
[
"Promoting one's own points and addressing the opponent's points are two primary debating strategies. Here we introduce a methodology to identify these strategies, and use it to investigate their usage and effect on a debate's outcome.",
"Identifying talking points . We first focus on ideas which form the basis of a side's stance on the motion. We identify such talking points by considering words whose frequency of usage differs significantly between the two teams during the introduction, before any interaction takes place. To find these words, we use the method introduced by monroe2008fightin in the context of U.S. Senate speeches. In particular, we estimate the divergence between the two sides' word-usage in the introduction, where word-usage is modeled as multinomial distributions smoothed with a uniform Dirichlet prior, and divergence is given by log-odds ratio. The most discriminating words are those with the highest and lowest z-scores of divergence estimates. For a side INLINEFORM0 , we define the set of talking points INLINEFORM1 to be the INLINEFORM2 words with the highest or lowest INLINEFORM3 -scores. We distinguish between INLINEFORM5 's own talking points INLINEFORM6 , and the opposing talking points INLINEFORM7 belonging to its opponent INLINEFORM8 . These are examples of talking points for the “Millennials” debate:",
"",
"The flow of talking points . A side can either promote its own talking points , address its opponent's points, or steer away from these initially salient ideas altogether. We quantify the use of these strategies by comparing the airtime debaters devote to talking points . For a side INLINEFORM0 , let the self-coverage INLINEFORM1 be the fraction of content words uttered by INLINEFORM2 in round INLINEFORM3 that are among their own talking points INLINEFORM4 ; and the opponent-coverage INLINEFORM5 be the fraction of its content words covering opposing talking points INLINEFORM6 .",
"Not surprisingly, we find that self-coverage dominates during the discussion ( INLINEFORM0 , INLINEFORM1 ). However, this does not mean debaters are simply giving monologues and ignoring each other: the effect of the interaction is reflected in a sharp drop in self-coverage and a rise in opponent-coverage once the discussion round begins. Respectively, INLINEFORM2 and INLINEFORM3 , both INLINEFORM4 . Examples of self- and opponent-coverage of two talking point s in the “Millennials” debate from the introduction and discussion are given in Table TABREF9 .",
"Does the change in focus translate to any strategic advantages? Figure FIGREF11 suggests this is the case: the drop in self-coverage is slightly larger for the side that eventually wins the debate ( INLINEFORM0 ). The drop in the sum of self- and opponent-coverage is also larger for winning teams, suggesting that they are more likely to steer away from discussing any talking points from either side ( INLINEFORM1 ).",
"Identifying discussion points. Having seen that debaters can benefit by shifting away from talking points that were salient during the introduction, we now examine the ideas that spontaneously arise to serve the discussion. We model such discussion points as words introduced to the debate during the discussion by a debater and adopted by his opponents at least twice. This allows us to focus on words that become relevant to the conversation; only 3% of all newly introduced words qualify, amounting to about 10 discussion points per debate.",
"The flow of discussion points . The adoption of discussion points plays an important role in persuading the audience: during the discussion, eventual winners adopt more discussion points introduced by their opponents than eventual losers ( INLINEFORM0 ). Two possible strategic interpretations emerge. From a topic control angle BIBREF0 , perhaps losers are more successful at imposing their discussion points to gain control of the discussion. This view appears counterintuitive given work linking topic control to influence in other settings BIBREF1 , BIBREF2 .",
"An alternative interpretation could be that winners are more active than losers in contesting their opponents' points, a strategy that might play out favorably to the audience. A post-hoc manual examination supports this interpretation: 78% of the valid discussion points are picked up by the opposing side in order to be challenged; this strategy is exemplified in Table TABREF14 . Overall, these observations tying the flow of discussion points to the debate's outcome suggest that winners are more successful at using the interaction to engage with their opponents' ideas."
],
[
"We evaluate the predictive power of our flow features in a binary classification setting: predict whether the For or Against side wins the debate. This is a challenging task even for humans, thus the dramatic reveal at the end of each IQ2 debate that partly explains the popularity of the show. Our goal here is limited to understanding which of the flow features that we developed carry predictive power.",
"Conversation flow features. We use all conversational features discussed above. For each side INLINEFORM0 we include INLINEFORM1 , INLINEFORM2 , and their sum. We also use the drop in self-coverage given by subtracting corresponding values for INLINEFORM3 , and the number of discussion points adopted by each side. We call these the Flow features.",
"Baseline features. To discard the possibility that our results are simply explained by debater verbosity, we use the number of words uttered and number of turns taken by each side (length) as baselines. We also compare to a unigram baseline (BOW).",
"Audience features. We use the counts of applause and laughter received by each side (described in Section SECREF2 ) as rough indicators of how well the audience can foresee a debate's outcome.",
"Prediction accuracy is evaluated using a leave-one-out (LOO) approach. We use logistic regression; model parameters for each LOO train-test split are selected via 3-fold cross-validation on the training set. To find particularly predictive flow features, we also try using univariate feature selection on the flow features before the model is fitted in each split; we refer to this setting as Flow*.",
"We find that conversation flow features obtain the best accuracy among all listed feature types (Flow: 63%; Flow*: 65%), performing significantly higher than a 50% random baseline (binomial test INLINEFORM0 ), and comparable to audience features (60%). In contrast, the length and BOW baselines do not perform better than chance. We note that Flow features perform competitively despite being the only ones that do not factor in the concluding round.",
"The features selected most often in the Flow* task are: the number of discussion points adopted (with positive regression coefficients), the recall of talking points during the discussion round (negative coefficients), and the drop in usage of own talking points from introduction to discussion (positive coefficients). The relative importance of these features, which focus on the interaction between teams, suggests that audiences tend to favor debating strategies which emphasize the discussion."
],
[
"Previous work on conversational structure has proposed approaches to model dialogue acts BIBREF3 , BIBREF4 , BIBREF5 or disentangle interleaved conversations BIBREF6 , BIBREF7 . Other research has considered the problem of detecting conversation-level traits such as the presence of disagreements BIBREF8 , BIBREF9 or the likelihood of relation dissolution BIBREF10 . At the participant level, several studies present approaches to identify ideological stances BIBREF11 , BIBREF12 , using features based on participant interactions BIBREF13 , BIBREF14 , or extracting words and reasons characterizing a stance BIBREF15 , BIBREF16 , BIBREF17 . In our setting, both the stances and the turn structure of a debate are known, allowing us to instead focus on the debate's outcome.",
"Existing research on argumentation strategies has largely focused on exploiting the structure of monologic arguments BIBREF18 , like those of persuasive essays BIBREF19 , BIBREF20 . In addition, tan+etal:16a has examined the effectiveness of arguments in the context of a forum where people invite others to challenge their opinions. We complement this line of work by looking at the relative persuasiveness of participants in extended conversations as they exchange arguments over multiple turns.",
"Previous studies of influence in extended conversations have largely dealt with the political domain, examining moderated but relatively unstructured settings such as talk shows or presidential debates, and suggesting features like topic control BIBREF0 , linguistic style matching BIBREF21 and turn-taking BIBREF22 . With persuasion in mind, our work extends these studies to explore a new dynamic, the flow of ideas between speakers, in a highly structured setting that controls for confounding factors."
],
[
"This study opens several avenues for future research. One could explore more complex representations of talking points and discussion points , for instance using topic models or word embeddings. Furthermore, augmenting the flow of content in a conversation with the speakers' linguistic choices could better capture their intentions. In addition, it would be interesting to study the interplay between our conversational flow features and relatively monologic features that consider the argumentative and rhetorical traits of each side separately. More explicitly comparing and contrasting monologic and interactive dynamics could lead to better models of conversations. Such approaches could also help clarify some of the intuitions about conversations explored in this work, particularly that engaging in dialogue carries different strategic implications from self-promotion.",
"Our focus in this paper is on capturing and understanding conversational flow. We hence make some simplifying assumptions that could be refined in future work. For instance, by using a basic unigram-based definition of discussion points , we do not account for the context or semantic sense in which these points occur. In particular, our annotators found that a significant proportion of the discussion points under our definition actually referred to differing ideas in the various contexts in which they appeared. We expect that improving our retrieval model will also improve the robustness of our idea flow analysis. A better model of discussion points could also provide more insight into the role of these points in persuading the audience.",
"While Oxford-style debates are a particularly convenient setting for studying the effects of conversational flow, our dataset is limited in terms of size. It would be worthwhile to examine the flow features we developed in the context of settings with richer incentives beyond persuading an audience, such as in the semi-cooperative environment of Wikipedia talk pages. Finally, our methodology could point to applications in areas such as education and cooperative work, where it is key to establish the link between conversation features and an interlocutor's ability to convey their point BIBREF23 .",
"Acknowledgements. We thank the reviewers and V. Niculae for their helpful comments, and I. Arawjo and D. Sedra for annotations. This work was supported in part by a Google Faculty Research Award."
]
],
"section_name": [
"Introduction",
"Debate Dataset: Intelligence Squared",
"Modeling Idea Flow",
"Predictive Power",
"Further Related Work",
"Limitations and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"612fe381f386e5ae7c93453dec93719e5f339be9",
"d2474ef2f2b0aea1b0a437464e321ba02c8e7d1d"
],
"answer": [
{
"evidence": [
"The flow of talking points . A side can either promote its own talking points , address its opponent's points, or steer away from these initially salient ideas altogether. We quantify the use of these strategies by comparing the airtime debaters devote to talking points . For a side INLINEFORM0 , let the self-coverage INLINEFORM1 be the fraction of content words uttered by INLINEFORM2 in round INLINEFORM3 that are among their own talking points INLINEFORM4 ; and the opponent-coverage INLINEFORM5 be the fraction of its content words covering opposing talking points INLINEFORM6 .",
"Conversation flow features. We use all conversational features discussed above. For each side INLINEFORM0 we include INLINEFORM1 , INLINEFORM2 , and their sum. We also use the drop in self-coverage given by subtracting corresponding values for INLINEFORM3 , and the number of discussion points adopted by each side. We call these the Flow features."
],
"extractive_spans": [],
"free_form_answer": "The time devoted to self-coverage, opponent-coverage, and the number of adopted discussion points.",
"highlighted_evidence": [
"We quantify the use of these strategies by comparing the airtime debaters devote to talking points . For a side INLINEFORM0 , let the self-coverage INLINEFORM1 be the fraction of content words uttered by INLINEFORM2 in round INLINEFORM3 that are among their own talking points INLINEFORM4 ; and the opponent-coverage INLINEFORM5 be the fraction of its content words covering opposing talking points INLINEFORM6 .",
" We use all conversational features discussed above. For each side INLINEFORM0 we include INLINEFORM1 , INLINEFORM2 , and their sum. We also use the drop in self-coverage given by subtracting corresponding values for INLINEFORM3 , and the number of discussion points adopted by each side."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this work we introduce a computational framework for characterizing debates in terms of conversational flow. This framework captures two main debating strategies—promoting one's own points and attacking the opponents' points—and tracks their relative usage throughout the debate. By applying this methodology to a setting where debate winners are known, we show that conversational flow patterns are predictive of which debater is more likely to persuade an audience."
],
"extractive_spans": [
"—promoting one's own points and attacking the opponents' points"
],
"free_form_answer": "",
"highlighted_evidence": [
"This framework captures two main debating strategies—promoting one's own points and attacking the opponents' points—and tracks their relative usage throughout the debate. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"6561163ef127a9dc8a104d0c8063a0d63de1e8bf",
"eb0fc8495aa6c079ade8ae29e4a9a51140a65259"
],
"answer": [
{
"evidence": [
"In this study we use transcripts and results of Oxford-style debates from the public debate series “Intelligence Squared Debates” (IQ2 for short). These debates are recorded live, and contain motions covering a diversity of topics ranging from foreign policy issues to the benefits of organic food. Each debate consists of two opposing teams—one for the motion and one against—of two or three experts in the topic of the particular motion, along with a moderator. Each debate follows the Oxford-style format and consists of three rounds. In the introduction, each debater is given 7 minutes to lay out their main points. During the discussion, debaters take questions from the moderator and audience, and respond to attacks from the other team. This round lasts around 30 minutes and is highly interactive; teams frequently engage in direct conversation with each other. Finally, in the conclusion, each debater is given 2 minutes to make final remarks."
],
"extractive_spans": [
"Intelligence Squared Debates"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this study we use transcripts and results of Oxford-style debates from the public debate series “Intelligence Squared Debates” (IQ2 for short)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this study we use transcripts and results of Oxford-style debates from the public debate series “Intelligence Squared Debates” (IQ2 for short). These debates are recorded live, and contain motions covering a diversity of topics ranging from foreign policy issues to the benefits of organic food. Each debate consists of two opposing teams—one for the motion and one against—of two or three experts in the topic of the particular motion, along with a moderator. Each debate follows the Oxford-style format and consists of three rounds. In the introduction, each debater is given 7 minutes to lay out their main points. During the discussion, debaters take questions from the moderator and audience, and respond to attacks from the other team. This round lasts around 30 minutes and is highly interactive; teams frequently engage in direct conversation with each other. Finally, in the conclusion, each debater is given 2 minutes to make final remarks."
],
"extractive_spans": [
"“Intelligence Squared Debates” (IQ2 for short)"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this study we use transcripts and results of Oxford-style debates from the public debate series “Intelligence Squared Debates” (IQ2 for short). "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"",
""
],
"paper_read": [
"",
""
],
"question": [
"what aspects of conversation flow do they look at?",
"what debates dataset was used?"
],
"question_id": [
"26327ccebc620a73ba37a95aabe968864e3392b2",
"ababb79dd3c301f4541beafa181f6a6726839a10"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
""
],
"topic_background": [
"",
""
]
} | {
"caption": [
"Table 1: Example talking points used throughout the “Millennials” debate. Each talking point belongs to the side uttering the first excerpt, taken from the introduction; the second excerpt is from the discussion section. In the first example, the For side addresses the opposing talking point volunteer during the discussion; in the second example the For side refers to their own talking point boomer and recalls it later in the discussion.",
"Figure 1: The start of the debate’s interactive stage triggers a drop in self-coverage (> 0, indicated by leftmost two bars) and a rise in opponent-coverage (< 0, indicated by rightmost bars), with eventual winners showing a more pronounced drop in selfcoverage (comparing the two bars on the left).",
"Table 2: Example discussion points introduced by the Against side in the “Millennials” debate. For each point, the first excerpt is the context in which the point was first mentioned by the Against side in the discussion, and the second excerpt shows the For side challenging the point later on."
],
"file": [
"3-Table1-1.png",
"3-Figure1-1.png",
"4-Table2-1.png"
]
} | [
"what aspects of conversation flow do they look at?"
] | [
[
"1604.03114-Modeling Idea Flow-3",
"1604.03114-Predictive Power-1",
"1604.03114-Introduction-1"
]
] | [
"The time devoted to self-coverage, opponent-coverage, and the number of adopted discussion points."
] | 80 |
1608.06757 | Robust Named Entity Recognition in Idiosyncratic Domains | Named entity recognition often fails in idiosyncratic domains. This causes problems for dependent tasks, such as entity linking and relation extraction. We propose a generic and robust approach for high-recall named entity recognition. Our approach is easy to train and offers strong generalization over diverse domain-specific language, such as news documents (e.g. Reuters) or biomedical text (e.g. Medline). Our approach is based on deep contextual sequence learning and utilizes stacked bidirectional LSTM networks. Our model is trained with only a few hundred labeled sentences and does not rely on further external knowledge. We report F1 scores in the range of 84-94% on standard datasets. | {
"paragraphs": [
[
"Information extraction tasks have become very important not only in the Web, but also for in-house enterprise settings. One of the crucial steps towards understanding natural language is named entity recognition (NER), which aims to extract mentions of entity names in text. NER is necessary for many higher-level tasks such as entity linking, relation extraction, building knowledge graphs, question answering and intent based search. In these scenarios, NER recall is critical, as candidates that are never generated can not be recovered later BIBREF0 ."
],
[
"We abstract the task of NER as sequential word labeling problem. Figure FIGREF15 illustrates an example for sequential transformation of a sentence into word labels. We express each sentence in a document as a sequence of words: INLINEFORM0 , e.g. INLINEFORM1 Aspirin. We define a mention as the longest possible span of adjacent tokens that refer to a an entity or relevant concept of a real-world object, such as Aspirin (ASA). We further assume that mentions are non-recursive and non-overlapping. To encode boundaries of the mention span, we adapt the idea of ramshaw1995text, which has been adapted as BIO2 standard in the CoNLL2003 shared task BIBREF15 . We assign labels INLINEFORM2 to each token to mark begin, inside and outside of a mention from left to right. We use the input sequence INLINEFORM3 together with a target sequence INLINEFORM4 of the same length that contains a BIO2 label for each word: INLINEFORM5 , e.g. INLINEFORM6 B. To predict the most likely label INLINEFORM7 of a token regarding its context, we utilize recurrent neural networks."
],
[
"We have shown that most common errors for recall loss are misspellings, POS errors, capitalization, unseen words and irregular context. Therefore we generalize our model throughout three layers: robust word encoding, in-sentence word context and contextual sequence labeling.",
"Dictionary-based word vectorization methods suffer from sparse training sets, especially in the case of non-verbatim mentions, rare words, typing and capitalization errors. For example, the word2vec model of mikolov2013efficient generalizes insufficiently for rare words in idiosyncratic domains or for misspelled words, since for these words no vector representation is learned at training time. In the GENIA data set, we notice 27% unseen words (dictionary misses) in the pretrained word2vec model. As training data generation is expensive, we investigate a generic approach for the generation of word vectors. We use letter-trigram word hashing as introduced by huang2013learning. This technique goes beyond words and generates word vectors as a composite of discriminative three-letter “syllables”, that might also include misspellings. Therefore, it is robust against dictionary misses and has the advantage (despite its name) to group syntactically similar words in similar vector spaces. We compare this approach to word embedding models such as word2vec.",
"The most important features for NER are word shape properties, such as length, initial capitalization, all-word uppercase, in-word capitalization and use of numbers or punctuation BIBREF16 . Mixed-case word encodings implicitly include capitalization features. However, this approach impedes generalization, as words appear in various surface forms, e.g. capitalized at the beginning of sentences, uppercase in headlines, lowercase in social media text. The strong coherence between uppercase and lowercase characters – they might have identical semantics – is not encoded in the embedding. Therefore, we encode the words using lowercase letter-trigrams. To keep the surface information, we add flag bits to the vector that indicate initial capitalization, uppercase, lower case or mixed case."
],
[
"With sparse training data in the idiosyncratic domain, we expect input data with high variance. Therefore, we require a strong generalization for the syntactic and semantic representation of language. To reach into the high 80–90% NER F1 performance, long-range context-sensitive information is indispensable. We apply the computational model of recurrent neural networks, in particular long short-term memory networks (LSTMs) BIBREF17 , BIBREF18 to the problem of sequence labeling. Like neural feed-forward networks, LSTMs are able to learn complex parameters using gradient descent, but include additional recurrent connections between cells to influence weight updates over adjacent time steps. With their ability to memorize and forget over time, LSTMs have proven to generalize context-sensitive sequential data well BIBREF19 , BIBREF20 .",
"Figure FIGREF15 shows an unfolded representation of the steps through a sentence. We feed the LSTM with letter-trigram vectors INLINEFORM0 as input data, one word at a time. The hidden layer of the LSTM represents context from long range dependencies over the entire sentence from left to right. However, to achieve deeper contextual understanding over the boundaries of multi-word annotations and at the beginning of sentences, we require a backwards pass through the sentence. We therefore implement a bidirectional LSTM and feed the output of both directions into a second LSTM layer for combined label prediction.",
"For the use in the neural network, word encodings INLINEFORM0 and labels INLINEFORM1 are real-valued vectors. To predict the most likely label INLINEFORM2 of a token, we utilize a LSTM with input nodes INLINEFORM3 , input gates INLINEFORM4 , forget gate INLINEFORM5 , output gate INLINEFORM6 and internal state INLINEFORM7 . For the bidirectional case, all gates are duplicated and combined into forward state INLINEFORM8 and backward state INLINEFORM9 . The network is trained using backpropagation through time (BPTT) by adapting weights INLINEFORM10 and bias parameters INLINEFORM11 to fit the training examples. DISPLAYFORM0 ",
" We iterate over labeled sentences in mini-batches and update the weights accordingly. The network is then used to predict label probabilities INLINEFORM0 for unseen word sequences INLINEFORM1 ."
],
[
"To show the impact of our bidirectional LSTM model, we measure annotation performance on three different neural network configurations. We implement all components using the Deeplearning4j framework. For preprocessing (sentence and word tokenization), we use Stanford CoreNLP BIBREF11 . We test the sequence labeler using three input encodings:",
"[noitemsep]",
"DICT: We build a dictionary over all words in the corpus and generate the input vector using 1-hot encoding for each word",
"EMB: We use the GoogleNews word2vec embeddings, which encodes each word as vector of size 300",
"TRI: we implement letter-trigram word hashing as described in Section SECREF14 .",
"During training and test, we group all tokens of a sentence as mini-batch. We evaluate three different neural network types to show the impact of the bidirectional sequence learner.",
"[noitemsep]",
"FF: As baseline, we train a non-sequential feed-forward model based on a fully connected multilayer perceptron network with 3 hidden layers of size 150 with relu activation, feeding into a 3-class softmax classifier. We train the model using backpropagation with stochastic gradient descent and a learning rate of 0.005.",
"LSTM: We use a configuration of a single feed-forward layer of size 150 with two additional layers of single-direction LSTM with 20 cells and a 3-class softmax classifier. We train the model using backpropagation-through-time (BPTT) with stochastic gradient descent and a learning rate of 0.005.",
"BLSTM: Our final configuration consists of a single feed-forward layer of size 150 with one bidirectional LSTM layer with 20 cells and an additional single-direction LSTM with 20 cells into a 3-class softmax classifier. The BLSTM model is trained the same way as the single-direction LSTM."
],
[
"We evaluate nine configurations of our model on five gold standard evaluation data sets. We show that the combination of letter-trigram word hashing with bidirectional LSTM yields the best results and outperforms sequence learners based on dictionaries or word2vec. To highlight the generalization of our model to idiosyncratic domains, we run tests on common-typed data sets as well as on specialized medical documents. We compare our system on these data sets with specialized state-of-the-art systems."
],
[
"We train two models with identical parameterization, each with 2000 randomly chosen labeled sentences from a standard data set. To show the effectiveness of the components, we evaluate different configurations of this setting with 2000 random sentences from the remaining set. The model was trained using Deeplearning4j with nd4j-x86 backend. Training the TRI+BLSTM configuration on a commodity Intel i7 notebook with 4 cores at 2.8GHz takes approximately 50 minutes.",
"Table TABREF33 gives an overview of the standard data sets we use for training. The GENIA Corpus BIBREF3 contains biomedical abstracts from the PubMed database. We use GENIA technical term annotations 3.02, which cover linguistic expressions to entities of interest in molecular biology, e.g. proteins, genes and cells. CoNLL2003 BIBREF14 is a standard NER dataset based on the Reuters RCV-1 news corpus. It covers named entities of type person, location, organization and misc.",
"For testing the overall annotation performance, we utilize CoNLL2003-testA and a 50 document split from GENIA. Additionally, we test on the complete KORE50 BIBREF21 , ACE2004 BIBREF22 and MSNBC data sets using the GERBIL evaluation framework BIBREF23 ."
],
[
"We measure precision, recall and F1 score of our DATEXIS-NER system and state-of-the-art annotators introduced in Section SECREF2 . For the comparison with black box systems, we evaluate annotation results using weak annotation match. For a more detailed in-system error analysis, we measure BIO2 labeling performance based on each token.",
"We measure the overall performance of mention annotation using the evaluation measures defined by cornolti2013framework, which are also used by ling2015design. Let INLINEFORM0 be a set of documents with gold standard mention annotations INLINEFORM1 with a total of INLINEFORM2 examples. Each mention INLINEFORM3 is defined by start position INLINEFORM4 and end position INLINEFORM5 in the source document INLINEFORM6 . To quantify the performance of the system, we compare INLINEFORM7 to the set of predicted annotations INLINEFORM8 with mentions INLINEFORM9 : DISPLAYFORM0 ",
" We compare using a weak annotation match: DISPLAYFORM0 ",
" We measure micro-averaged precision ( INLINEFORM0 ), recall ( INLINEFORM1 ) and NER-style ( INLINEFORM2 ) score: DISPLAYFORM0 ",
"Tuning the model configuration with annotation match measurement is not always feasible. We therefore measure INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 separately for each label class INLINEFORM4 in our classification model and calculate binary classification precision INLINEFORM5 , recall INLINEFORM6 and INLINEFORM7 scores. To avoid skewed results from the expectedly large INLINEFORM8 class, we use macro-averaging over the three classes: DISPLAYFORM0 "
],
[
"We now discuss the evaluation of our DATEXIS-NER system on common and idiosyncratic data.",
"Table TABREF35 shows the comparison of DATEXIS-NER with eight state-of-the-art annotators on four common news data sets. Both common and medical models are configured identically and trained on only 2000 labeled sentences, without any external prior knowledge. We observe that DATEXIS-NER achieves the highest recall scores of all tested annotators, with 95%–98% on all measured data sets. Moreover, DATEXIS-NER precision scores are equal or better than median. Overall, we achieve high micro-F1 scores of 84%–94% on news entity recognition, which is slightly better than the ontology-based Entityclassifier.eu NER and reveals a better generalization than the 3-type Stanford NER with distributional semantics. We notice that systems specialized on word-sense disambiguation (Babelfy, DBpedia Spotlight) don't perform well on “raw” untyped entity recognition tasks. The highest precision scores are reached by Stanford NER. We also notice a low precision of all annotators on the ACE2004 dataset and high variance in MSNBC performance, which are probably caused by differing annotation standards.",
"Table TABREF45 shows the results of biomedical entity recognition compared to the participants of the JNLPBA 2004 bio-entity recognition task BIBREF14 . We notice that for these well-written Medline abstracts, there is not such a strong skew between precision and recall. Our DATEXIS-NER system outperforms the HMM, MEMM, CRF and CDN based models with a micro-F1 score of 84%. However, the highly specialized GENIA chunker for LingPipe achieves higher scores. This chunker is a very simple generative model predictor that is based on a sliding window of two tokens, word shape and dictionaries. We interpret this score as strong overfitting using a dictionary of the well-defined GENIA terms. Therefore, this model will generalize hardly considering the simple model. We can confirm this presumption in the common data sets, where the MUC-6 trained HMM LingPipe chunker performs on average on unseen data.",
"We evaluate different configurations of the components that we describe in Section SECREF22 . Table TABREF47 shows the results of experiments on both CoNLL2003 and GENIA data sets. We report the highest macro-F1 scores for BIO2 labeling for the configuration of letter-trigram word vectors and bidirectional LSTM. We notice that dictionary-based word encodings (DICT) work well for idiosyncratic medical domains, whereas they suffer from high word ambiguity in the news texts. Pretrained word2vec embeddings (EMB) perform well on news data, but cannot adapt to the medical domain without retraining, because of a large number of unseen words. Therefore, word2vec generally achieves a high precision on news texts, but low recall on medical text. The letter-trigram approach (TRI) combines both word vector generalization and robustness towards idiosyncratic language.",
"We observe that the contextual LSTM model achieves scores throughout in the 85%–94% range and significantly outperforms the feed-forward (FF) baseline that shows a maximum of 75%. Bidirectional LSTMs can further improve label classification in both precision and recall."
],
[
"We investigate different aspects of the DATEXIS-NER components by manual inspection of classification errors in the context of the document. For the error classes described in the introduction (false negative detections, false positives and invalid boundaries), we observe following causes:",
"In dictionary based configurations (e.g. 1-hot word vector encoding DICT), we observe false negative predictions caused by dictionary misses for words that do not exist in the training data. The cause can be rare unseen or novel words (e.g. T-prolymphocytic cells) or misspellings (e.g. strengthnend). These words yield a null vector result from the encoder and can therefore not be distinguished by the LSTM. The error increases when using word2vec, because these models are trained with stop words filtered out. This implicates that e.g. mentions surrounded by or containing a determiner (e.g. The Sunday Telegraph quoted Majorie Orr) are highly error prone towards the detection of their boundaries. We resolve this error by the letter-trigram approach. Unseen trigrams (e.g. thh) may still be missing in the word vector, but only affect single dimensions as opposed to the vector as a whole.",
"Surface forms encode important features for NER (e.g. capitalization of “new” in Alan Shearer was named as the new England captain / as New York beat the Angels). However, case-sensitive word vectorization methods yield a large amount of false positive predictions caused by incorrect capitalization in the input data. An uppercase headline (e.g. TENNIS - U.S. TEAM ON THE ROAD FOR 1997 FED CUP) is encoded completely different than a lowercase one (e.g. U.S. team on the road for Fed Cup). Because of that, we achieve best results with lowercase word vectors and additional surface form feature flags, as described in Section SECREF14 .",
"We observe mentions that are composed of co-occurring words with high ambiguity (e.g. degradation of IkB alpha in T cell lines). These groups encode strong syntagmatic word relations BIBREF24 that can be leveraged to resolve word sense and homonyms from sentence context. Therefore, correct boundaries in these groups can effectively be identified only with contextual models such as LSTMs.",
"Orthogonal to the previous problem, different words in a paradigmatic relation BIBREF24 can occur in the same context (e.g. cyclosporin A-treated cells / HU treated cells). These groups are efficiently represented in word2vec. However, letter-trigram vectors cannot encode paradigmatic groups and therefore require a larger training sample to capture these relations.",
"Often, synonyms can only be resolved regarding a larger document context than the local sentence context known by the LSTM. In these cases, word sense is redefined by a topic model local to the paragraph (e.g. sports: Tiger was lost in the woods after divorce.). This problem does not heavily affect NER recall, but is crucial for named entity disambiguation and coreference resolution.",
"The proposed DATEXIS-NER model is restricted to recognize boundaries of generic mentions in text. We evaluate the model on annotations of isolated types (e.g. persons, organizations, locations) for comparison purposes only, but we do not approach NER-style typing. Contrary, we approach to detect mentions without type information. The detection of specific types can be realized by training multiple independent models on a selection of labels per type and nesting the resulting annotations using a longest-span semantic type heuristic BIBREF25 ."
],
[
"ling2015design show that the task of NER is not clearly defined and rather depends on a specific problem context. Contrary, most NER approaches are specifically trained on fixed datasets in a batch mode. Worse, they often suffer from poor recall BIBREF26 . Ideally, one could personalize the task of recognizing named entities, concepts or phrases according to the specific problem. “Personalizing” and adapting such annotators should happen with very limited human labeling effort, in particular for idiosyncratic domains with sparse training data.",
"Our work follows this line. From our results we report F1 scores between 84–94% when using bidirectional multi-layered LSTMs, letter-trigram word hashing and surface form features on only few hundred training examples.",
"This work is only a preliminary step towards the vision of personalizing annotation guidelines for NER BIBREF2 . In our future work, we will focus on additional important idiosyncratic domains, such as health, life science, fashion, engineering or automotive. For these domains, we will consider the process of detecting mentions and linking them to an ontology as a joint task and we will investigate simple and interactive workflows for creating robust personalized named entity linking systems."
]
],
"section_name": [
"Introduction",
"Robust Contextual Word Labeling",
"Robust Word Encoding Methods",
"Deep Contextual Sequence Learning",
"Implementation of NER Components ",
"Evaluation",
"Evaluation Set Up",
"Measurements",
"Evaluation Results",
"Discussion and Error Analysis",
"Summary"
]
} | {
"answers": [
{
"annotation_id": [
"622fe23e950e57fd49d1248b9cf192e1a997e929"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Comparison of annotators trained for common English news texts (micro-averaged scores on match per annotation span). The table shows micro-precision, recall and NER-style F1 for CoNLL2003, KORE50, ACE2004 and MSNBC datasets."
],
"extractive_spans": [],
"free_form_answer": "Babelfy, DBpedia Spotlight, Entityclassifier.eu, FOX, LingPipe MUC-7, NERD-ML, Stanford NER, TagMe 2",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Comparison of annotators trained for common English news texts (micro-averaged scores on match per annotation span). The table shows micro-precision, recall and NER-style F1 for CoNLL2003, KORE50, ACE2004 and MSNBC datasets."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"678059f1629d978b0591080db2427bda76af80ad",
"9c9d4eafa0686fec16fbfbd3a6e1b1e84296ae30",
"dc2bafa6bcd6e5fd56a7ea9a3df8db225a7ba2d4"
],
"answer": [
{
"evidence": [
"Table TABREF33 gives an overview of the standard data sets we use for training. The GENIA Corpus BIBREF3 contains biomedical abstracts from the PubMed database. We use GENIA technical term annotations 3.02, which cover linguistic expressions to entities of interest in molecular biology, e.g. proteins, genes and cells. CoNLL2003 BIBREF14 is a standard NER dataset based on the Reuters RCV-1 news corpus. It covers named entities of type person, location, organization and misc.",
"For testing the overall annotation performance, we utilize CoNLL2003-testA and a 50 document split from GENIA. Additionally, we test on the complete KORE50 BIBREF21 , ACE2004 BIBREF22 and MSNBC data sets using the GERBIL evaluation framework BIBREF23 ."
],
"extractive_spans": [
"The GENIA Corpus ",
"CoNLL2003"
],
"free_form_answer": "",
"highlighted_evidence": [
"The GENIA Corpus BIBREF3 contains biomedical abstracts from the PubMed database. We use GENIA technical term annotations 3.02, which cover linguistic expressions to entities of interest in molecular biology, e.g. proteins, genes and cells. CoNLL2003 BIBREF14 is a standard NER dataset based on the Reuters RCV-1 news corpus. It covers named entities of type person, location, organization and misc.\n\nFor testing the overall annotation performance, we utilize CoNLL2003-testA and a 50 document split from GENIA. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Table TABREF33 gives an overview of the standard data sets we use for training. The GENIA Corpus BIBREF3 contains biomedical abstracts from the PubMed database. We use GENIA technical term annotations 3.02, which cover linguistic expressions to entities of interest in molecular biology, e.g. proteins, genes and cells. CoNLL2003 BIBREF14 is a standard NER dataset based on the Reuters RCV-1 news corpus. It covers named entities of type person, location, organization and misc.",
"For testing the overall annotation performance, we utilize CoNLL2003-testA and a 50 document split from GENIA. Additionally, we test on the complete KORE50 BIBREF21 , ACE2004 BIBREF22 and MSNBC data sets using the GERBIL evaluation framework BIBREF23 ."
],
"extractive_spans": [
"GENIA Corpus BIBREF3",
"CoNLL2003 BIBREF14",
"KORE50 BIBREF21 , ACE2004 BIBREF22 and MSNBC"
],
"free_form_answer": "",
"highlighted_evidence": [
"The GENIA Corpus BIBREF3 contains biomedical abstracts from the PubMed database. We use GENIA technical term annotations 3.02, which cover linguistic expressions to entities of interest in molecular biology, e.g. proteins, genes and cells. CoNLL2003 BIBREF14 is a standard NER dataset based on the Reuters RCV-1 news corpus. It covers named entities of type person, location, organization and misc.\n\nFor testing the overall annotation performance, we utilize CoNLL2003-testA and a 50 document split from GENIA. Additionally, we test on the complete KORE50 BIBREF21 , ACE2004 BIBREF22 and MSNBC data sets using the GERBIL evaluation framework BIBREF23 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For testing the overall annotation performance, we utilize CoNLL2003-testA and a 50 document split from GENIA. Additionally, we test on the complete KORE50 BIBREF21 , ACE2004 BIBREF22 and MSNBC data sets using the GERBIL evaluation framework BIBREF23 ."
],
"extractive_spans": [
"CoNLL2003-testA",
"GENIA"
],
"free_form_answer": "",
"highlighted_evidence": [
"For testing the overall annotation performance, we utilize CoNLL2003-testA and a 50 document split from GENIA. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"",
""
],
"paper_read": [
"",
""
],
"question": [
"what is the state of the art?",
"what standard dataset were used?"
],
"question_id": [
"c2b8ee872b99f698b3d2082d57f9408a91e1b4c1",
"8eefa116e3c3d3db751423cc4095d1c4153d3a5f"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
""
],
"topic_background": [
"",
""
]
} | {
"caption": [
"Figure 1: Architecture of the LSTM network used for named entity recognition. The character stream “Aspirin has an antiplatelet effect.” is",
"Table 1: Overview of CoNLL2003 and GENIA training datasets and sizes of word encodings. We use 2000 sentences of each set for training.",
"Table 2: Comparison of annotators trained for common English news texts (micro-averaged scores on match per annotation span). The table shows micro-precision, recall and NER-style F1 for CoNLL2003, KORE50, ACE2004 and MSNBC datasets.",
"Table 3: Comparison of annotators trained for biomedical text. The table shows NER annotation results for 50 documents from the GENIA dataset.",
"Table 4: Comparison of nine configurations from our implementation (macro-averaged scores on BIO2 classification per token)."
],
"file": [
"3-Figure1-1.png",
"5-Table1-1.png",
"6-Table2-1.png",
"7-Table3-1.png",
"7-Table4-1.png"
]
} | [
"what is the state of the art?"
] | [
[
"1608.06757-6-Table2-1.png"
]
] | [
"Babelfy, DBpedia Spotlight, Entityclassifier.eu, FOX, LingPipe MUC-7, NERD-ML, Stanford NER, TagMe 2"
] | 81 |
2001.03131 | Offensive Language Detection: A Comparative Analysis | Offensive behaviour has become pervasive in the Internet community. Individuals take advantage of anonymity in the cyber world and indulge in offensive communications which they may not consider in real life. Governments, online communities, companies, etc. are investing in the prevention of offensive content in social media. One of the most effective solutions for tackling this problem is the use of computational techniques to identify offensive content and take action. The current work focuses on detecting offensive language in English tweets. The dataset used for the experiments is obtained from SemEval-2019 Task 6 on Identifying and Categorizing Offensive Language in Social Media (OffensEval). The dataset contains 14,460 annotated English tweets. The present paper provides a comparative analysis and a Random kitchen sink (RKS) based approach for offensive language detection. We explore the effectiveness of the Google sentence encoder, Fasttext, Dynamic mode decomposition (DMD) based features and the Random kitchen sink (RKS) method for offensive language detection. From the experiments and evaluation we observed that RKS with Fasttext achieved competitive results. The evaluation measures used are accuracy, precision, recall and F1-score. | {
"paragraphs": [
[
"In this digital era, online discussions and interactions has become a vital part of daily life of which a huge part is covered by social media platforms like twitter, facebook, instagram etc. Similar to real life there exist anti-social elements in the cyberspace, who take advantage of the anonymous nature in cyber world and indulge in vulgar and offensive communications. This includes bullying, trolling, harassment BIBREF0, BIBREF1 and has become a growing concern for governments. Youth experiencing such victimization was recorded to have psychological symptoms of anxiety, depression, loneliness BIBREF1. Thus it is important to identify and remove such behaviours at the earliest. One solution to this is the automatic detection using machine learning algorithms.",
"Detecting offensive language from social media is a challenging research problem due to the different level of ambiguities present in the natural language and noisy nature of the social media language. Moreover, social media subscribers are from linguistically diverse and varying communities. Overseeing the complication of this problem, BIBREF2 organized a task in SemEval2019, Task 6: Identifying and Categorizing Offensive Language in Social Media. The tweets were collected by the organizers using Twitter API and have annotated them in a hierarchical manner as offensive language present in the tweet, type of the offense and target of the offense. There were three sub-tasks according to the hierarchy of annotation: a) To detect if a post is offensive (OFF) or not (NOT), b) To Identify the type of offense in the post as targeted threat (TTH), targeted insult (TIN), untargeted (UNT), c) To identify if offense is targeted to organization or entity (ORG), group of people (GRP), individual (IND), or other (OTH).",
"The dataset had the following challenges:",
"Dataset was comparatively smaller.",
"Dataset was biased/imbalanced BIBREF3.",
"In this paper, we are proposing a comparative analysis for the sub-task A :Offensive language identification of the SemEval2019, Task 6. Sub-task A was the most popular sub-task among the three and had total 104 team participation. In Table TABREF19, the list of first 5 participants along with system and f1-score has been shown.",
"Offensive language detection is one of the challenging and interesting topic for research. Recent past had multiple shared tasks and research on this topic. One of the initial work on offensive language using supervised classification was done by Yin et al. BIBREF4. They have used Ngram, TFIDF and combination of TFIDF with Sentiment and Contextual as Features. Schmidt and Wiegand BIBREF5 gave a survey on automatic hate speech detection using NLP. The authors surveyed on features like Simple Surface Features, Word Generalization, Linguistic Features, Knowledge-Based Features, Multimodal Information etc. In 2013, a study on detecting cyberbullying in YouTube comments was done by Dadvar et al. BIBREF6. They have used a combination of content-based, cyberbullying-specific and user-based features and showed that detection of cyberbullying can be improved by taking user context into account. Shared task GermEval 2018 organised by Wiegand et al.,BIBREF7 was focused on offensive language detection on German tweets. It had a dataset of over 8,500 annotated tweets and was trained for binary classification of offensive and non-offensive tweets. They obtained an overall maco-F1 score of 76.77%. Another shared task on Aggression Identification in Social Media was organised by Kumar et al., BIBREF8. The task provided a dataset with 15,000 annotated Facebook posts and comments in Hindi and English. They obtained a weighted F-score of 64% for both English and Hindi. The rest of the paper is structured as follows. Section 2 explains about the methodology with formulation. Section 3 discusses on the Proposed approach. Section 4 talks on the Experiments and discussions performed. Finally, conclusion is given in Section 5."
],
[
"Data pre-processing is a very crucial step which needs to be done before applying any machine learning tasks, because the real time data could be very noisy and unstructured. For the two models used in this work, pre-processing of tweets is done separately:",
"Pre-processing for Google model:",
"It has become a common culture to use #tags across social media. So we have replaced multiple #tags with a single #tag. Mostly @ symbol id used to mention person or entities in a tweet. So we replace multiple @symbols with a single @-mention. Some tweets may contain the link to a website or some other urls. So we replace all of these with a single keyword URLS.",
"Pre-processing for fasttext model:",
"For applying fasttext model to get word vectors, we followed a different set of pre-processing steps. First, all the numbers, punctuation marks, urls (http:// or www.) and symbols (emoji, #tags, -mention) were removed from the tweet as it do not contain information related to sentiment. After that, tokenization and lowercasing was applied to the tweets. Tokenization was done using tokenizer from NLTK package BIBREF9. Finally, the stop word are removed. The list is obtained from NLTK package."
],
[
"Word embeddings are ubiquitous for any NLP problem, as algorithms cannot process the plain text or strings in its raw form. Word emeddings are vectors that captures the semantic and contextual information of words. The word embedding used for this work are:",
"FastText: The fastText algorithm created by Facebook BIBREF10 assumes every word to be n-grams of character. It helps to give the vector representations for out of vocabuary words. For the current work, fasttext based word embedding is used for generating token vectors of dimension 300 BIBREF11. Each vector corresponding to the tweet is generated by taking the average of token vectors.",
"Universal Sentence Encoder: Developed by Google, Universal sentence encoder BIBREF12, BIBREF13 provides embeddings at sentence level. The dimension of the embedding vector is 512, irrespective of the number of tokens in the input tweet. These vectors can capture good semantic information from the sentences. For each tweet, this model generates a 512 length embedding vector and is used as features for further classification.",
"DMD and HODMD: DMD is a method initially used in fluid dynamics which captures spatio-temporal features BIBREF14. It has been used in background-foreground separation BIBREF15, load forecasting BIBREF16, saliency detection BIBREF17 etc. For natural language processing, DMD has been first applied for sentiment analysis BIBREF18, BIBREF19. This motivated to explore DMD based feature for the present work.",
"Dynamic mode decomposition (DMD) is a much more powerful concept and it assumes the evolution of the function over the rectangular field is effected by the mapping of a constant matrix $A$. $A$ captures the system’s inherent dynamics and the aim of the DMD is to understand using its dominant eignevalues and eigenvectors. Assumption is that this matrix $A$ is of low rank and hence the sequence of vectors $ \\mathop {{x_1}}\\limits _|^| ,\\mathop {{x_2}}\\limits _|^| ,\\mathop {{x_3}}\\limits _|^| ,...\\mathop {,{x_k}}\\limits _|^| ,...,\\mathop {{x_{m + 1}}}\\limits _|^| $ finally become a linearly dependent set. That is, vector $ \\mathop {{x_{m + 1}}}\\limits _|^|$ become linearly dependent on previous vectors. The data matrix X in terms of eigen vectors associated with matrix $A$.",
"where, ${\\Phi ^\\dag }$ is pseudo inverse of ${\\Phi }$. $A$ is of rank m and ${\\Phi }$ have m columns. Hence, pseudo-inverse will do the job than inverse operation. The columns of ${\\Phi }$ are called DMD modes and this forms the features.",
"Time-lagged matrices are prepared as snapshot for this approach. In Eigensent BIBREF20, the authors proposed HODMD to find embedings for sentences. The authors suggested, sentences can be represented as a signal using word embeddings by taking the average of word vectors. This is intuitive because word embeddings almost obeys the laws of linear algebra, by capturing word analogies and relationships. Therefore, by considering every sentence as a multi-dimensional signal, we can capture the important transitional dynamics of sentences. Also, for the signal representation of sentences, each word vector will act as a single point in the signal. For the present work, to generate DMD and HODMD based features, Fastext based embedding is used."
],
[
"RKS approach proposed in BIBREF21, BIBREF22, explicitly maps data vectors to a space where linear separation is possible. It has been explored for natural language processing tasks BIBREF23, BIBREF24. The RKS method provides an approximate kernel function via explicit mapping.",
"Here, $\\phi (.)$ denotes the implicit mapping function (used to compute kernel matrix), $Z(.)$ denotes the explicit mapping function using RKS and ${\\Omega _k}$ denotes random variable .",
"Figure FIGREF15 show the block diagram of the proposed approach."
],
[
"OLID (Offensive Language Identification Dataset) is a collection of English tweets which are annotated using a three-level hierarchical annotation model. It was collected using Twitter API and contains 14,460 annotated tweets. The task divided the data as train, trial and test, of which train and trial was initially released as starting kit, finally test was released as Test A release. All three of these partitions were highly biased, thus making the task more challenging and real time. The train set had 13,240 tweets, out of which 8840 tweets were not offensive (NOT) and 4400 tweets were offensive (OFF). Similarly, test set had 860 tweets, which had 620 not offensive and 280 offensive tweets. TableTABREF17 show the data distribution of the entire dataset. For the current work, train and test data are taken which is 14,100 tweets in number.",
"In Sub-task A: Offensive language identification, the goal was to discriminate offensive and not-offensive twitter posts. The target classes for each instance were a) Offensive (OFF): posts that contain any form of profanity or targeted offence. This includes threats, insults, and any form of untargeted profanity. b) Not Offensive (NOT): posts that doesn't have any profanity or offense in it. The result and discussion of the top 10 teams for the sub-task A is in section for Introduction. In that, team BIBREF3 obtained highest f1-score of 82.9%"
],
[
"This section describes the result as three different cases. Case 1 & 2 provides baseline approach to compare with the proposed RKS approcah described in case 3. Table TABREF19 gives the results of the top 5 teams of sub-task A. Team with rank 1 achieved a maximum f1-score of 82.9%."
],
[
"In this work, we have selected word vectors generated by Google universal encoder model, Fasttext, and DMD based features. The classification using the selected features are performed using machine learning algorithms such as Random Forest (RF), Decision Tree (DT), Naive Bayes (NB), Support vector machine (SVM) linear and RBF kernels, Logistic Regression, and Random kitchen sinks. The evaluation measures used are accuracy (Acc.), precision (Prec), recall, f1-score (F1). Table TABREF21 shows the classification result obtained for classical machine learning algorithms using the Google universal sentence encoder model features. It can be observed that svm linear classifier and Logistic regression has given maximum accuracy of 82.44% and 82.56%.",
"Table TABREF22 shows the classification results obtained using the features generated by fasttext model for classical machine learning algorithms. For the fasttext model also, svm linear and logistic regression model have given maximum accuracies of 81.16% respectively."
],
[
"In order to provide a comparison, we explore DMD based features. The Table TABREF24 shows the result obtained for normal DMD and HODMD based feature. The order for HODMD for the present work is 2 & 3. The classification is performed using SVM-linear kernel with control parameter value chosen as 1000 as the suitable one. We tried for other values such as 0.1, 1, 100, 500, and 1000. Figure FIGREF25 shows the control parameter versus accuracy plot which helped to fix the parameter value."
],
[
"RKS approach has been used in the articles for NLP tasks [29,30,23]. In this work, we use RKS to imporve the evaluation scores as discussed previously. The RKS approach explicitly maps the embedding vectors to a dimension where the data becomes linearly separable. In that space, regularized least-square based classification (RLSC) is performed. The implementation of the RKS is taken from BIBREF29, BIBREF30. The feature vectors from Google universal sentence encoder and fasttext are explicitly mapped using RKS and the results are tabulated in Table TABREF27 and TABREF28.",
"The Table TABREF27 shows the classification report on the proposed RKS method taking word vectors generated by Google universal encoder model as features with dimension 512. For this work, such vector is explicitly mapped to dimensions 100, 200, 500 and 1000 using RKS. The maximum accuracy obtained is 90.58% for higher dimension 1000.",
"Table TABREF28 shows the classification report on the proposed RKS method taking word vectors generated by Fasttext model as features. For this model also, features are mapped to dimensions 100, 200, 500 and 1000. For Fasttext model, the proposed method gave a maximum accuracy of 99.53%, which is a bench marking result when compared to the literature. This result shows the discriminating capability of the features chosen, as when mapped to higher dimensions, they become linearly separable. From Table TABREF27 and TABREF28 it can be observed that as the mapping dimension increases, the evaluation score also improves. This shows the effectiveness of the RKS approach to obtain competing score. The capability of the RKS approach cane be explored on large datasets."
],
[
"Offensive language detection is an important task related to social media data analysis. The nature of the content can vary as its provided by different people. The current work uses the data provided in SemEval 2019 shared task A for Offensive language identification. A comparative study is provided by exploring the effectiveness of Google universal sentence encoder, Fasttext based embedding, Dynamic Mode Decomposition based features and RKS based explicit mapping approach. For the experiments, we used the machine learning methods such as SVM linear, Random Forest, Logistic regression, Navie Bayes and Regularized least-square based classification. The measures used for evaluation are accuracy, precision, recall, and f1-score. We observed that RKS approach improved the results. However, as a future work, the proposed approach cane be explored on large datasets.",
""
]
],
"section_name": [
"Introduction",
"Methodology ::: Data Pre-processing",
"Methodology ::: Embeddings",
"Proposed Approach",
"Experiments and Discussions ::: Data Description",
"Experiments and Discussions ::: Results and Comparisons",
"Experiments and Discussions ::: Results and Comparisons ::: Case 1: Embeddings approach",
"Experiments and Discussions ::: Results and Comparisons ::: Case 2: DMD approach",
"Experiments and Discussions ::: Results and Comparisons ::: Case 3: RKS approach",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"9dedeabb9ffdd6dcac3fdfe225c87381cb59c76d",
"e5950b1b217ff0f88448d653e35c1255eff81c60"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"7f9383e4efca41fc619e66d9cf26878c0a088699",
"b13a2b30560e5849062e6daf4b8877c7f85d07bd"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"624d14d45d39a6998f5b92cdd2e08a67897d2823",
"db176830cb21bdf94686db24a6371e29088f1b43"
],
"answer": [
{
"evidence": [
"RKS approach proposed in BIBREF21, BIBREF22, explicitly maps data vectors to a space where linear separation is possible. It has been explored for natural language processing tasks BIBREF23, BIBREF24. The RKS method provides an approximate kernel function via explicit mapping."
],
"extractive_spans": [],
"free_form_answer": "Random Kitchen Sink method uses a kernel function to map data vectors to a space where linear separation is possible.",
"highlighted_evidence": [
"RKS approach proposed in BIBREF21, BIBREF22, explicitly maps data vectors to a space where linear separation is possible.",
"The RKS method provides an approximate kernel function via explicit mapping."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"RKS approach proposed in BIBREF21, BIBREF22, explicitly maps data vectors to a space where linear separation is possible. It has been explored for natural language processing tasks BIBREF23, BIBREF24. The RKS method provides an approximate kernel function via explicit mapping.",
"Here, $\\phi (.)$ denotes the implicit mapping function (used to compute kernel matrix), $Z(.)$ denotes the explicit mapping function using RKS and ${\\Omega _k}$ denotes random variable ."
],
"extractive_spans": [
"explicitly maps data vectors to a space where linear separation is possible",
"RKS method provides an approximate kernel function via explicit mapping"
],
"free_form_answer": "",
"highlighted_evidence": [
"RKS approach proposed in BIBREF21, BIBREF22, explicitly maps data vectors to a space where linear separation is possible. It has been explored for natural language processing tasks BIBREF23, BIBREF24. The RKS method provides an approximate kernel function via explicit mapping.\n\nHere, $\\phi (.)$ denotes the implicit mapping function (used to compute kernel matrix), $Z(.)$ denotes the explicit mapping function using RKS and ${\\Omega _k}$ denotes random variable ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Do they perform error analysis?",
"How do their results compare to state-of-the-art?",
"What is the Random Kitchen Sink approach?"
],
"question_id": [
"133eb4aa4394758be5f41744c60c99901b2bc01c",
"3fff37b9f68697d080dbd9d9008a63907137644e",
"a778b8204a415b295f73b93623d09599f242f202"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"Offensive language detection",
"Offensive language detection",
"Offensive language detection"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Illustrates the block diagram for the proposed approach",
"Table 1: Illustrates the Data distribution",
"Table 3: Performance evaluation of Universal encoder model features using classical machine learning algorithms",
"Table 4: Performance evaluation of Fasttext model features using classical machine learning algorithms",
"Table 2: Illustrates results of top 5 teams in semeval 2019: Task 6 sub-task A",
"Table 5: Performance evaluation of DMD and HODMD features using SVM linear for control parameter fixed as 1000",
"Table 7: Performance evaluation of proposed method using Fasttext model features",
"Figure 2: Illustrates the Accuracy v/s control parameter for DMD and HODMD",
"Table 6: Performance evaluation of proposed method using Universal encoder model features"
],
"file": [
"3-Figure1-1.png",
"4-Table1-1.png",
"4-Table3-1.png",
"4-Table4-1.png",
"4-Table2-1.png",
"5-Table5-1.png",
"5-Table7-1.png",
"5-Figure2-1.png",
"5-Table6-1.png"
]
} | [
"What is the Random Kitchen Sink approach?"
] | [
[
"2001.03131-Proposed Approach-1",
"2001.03131-Proposed Approach-0"
]
] | [
"Random Kitchen Sink method uses a kernel function to map data vectors to a space where linear separation is possible."
] | 82 |
1803.09123 | Equation Embeddings | We present an unsupervised approach for discovering semantic representations of mathematical equations. Equations are challenging to analyze because each is unique, or nearly unique. Our method, which we call equation embeddings, finds good representations of equations by using the representations of their surrounding words. We used equation embeddings to analyze four collections of scientific articles from the arXiv, covering four computer science domains (NLP, IR, AI, and ML) and $\sim$98.5k equations. Quantitatively, we found that equation embeddings provide better models when compared to existing word embedding approaches. Qualitatively, we found that equation embeddings provide coherent semantic representations of equations and can capture semantic similarity to other equations and to words. | {
"paragraphs": [
[
"Equations are an important part of scientific articles, but many existing machine learning methods do not easily handle them. They are challenging to work with because each is unique or nearly unique; most equations occur only once. An automatic understanding of equations, however, would significantly benefit methods for analyzing scientific literature. Useful representations of equations can help draw connections between articles, improve retrieval of scientific texts, and help create tools for exploring and navigating scientific literature.",
"In this paper we propose equation embeddings (EqEmb), an unsupervised approach for learning distributed representations of equations. The idea is to treat the equation as a \"singleton word,\" one that appears once but that appears in the context of other words. The surrounding text of the equation—and in particular, the distributed representations of that text—provides the data we need to develop a useful representation of the equation.",
"Figure FIGREF1 illustrates our approach. On the left is an article snippet BIBREF0 . Highlighted in orange is an equation; in this example it represents a neural network layer. We note that this particular equation (in this form and with this notation) only occurs once in the collection of articles (from arXiv). The representations of the surrounding text, however, provide a meaningful context for the equation. Those words allow us to learn its embedding, specifically as a \"word\" which appears in the context of its surroundings. The resulting representation, when compared to other equations' representations and word representations, helps find both related equations and related words. These are illustrated on the right.",
"EqEmbs build on exponential family embeddings BIBREF1 to include equations as singleton observations and to model equation elements such as variables, symbols and operators. Exponential family embeddings, like all embedding methods, define a context of each word. In our initial EqEmb, the context for the words is a small window, such as four or eight words, but the context of an equation is a larger window, such as sixteen words. Using these two types of contexts together finds meaningful representations of words and equations. In the next EqEmb, which builds on the first, we consider equations to be sentences consisting of equation units, i.e., variables, symbols, and operators. Equation units help model equations across two types of context—over the surrounding units and over the surrounding words.",
"We studied EqEmbs on four collections of scientific articles from the arXiv, covering four computer science domains: natural language processing (NLP), information retrieval (IR), artificial intelligence (AI) and machine learning (ML). We found that EqEmbs provide more efficient modeling than existing word embedding methods. We further carried out an exploratory analysis of a large set of INLINEFORM0 87k equations. We found that EqEmbs provide better models when compared to existing word embedding approaches. EqEmbs also provide coherent semantic representations of equations and can capture semantic similarity to other equations and to words."
],
[
"Word embeddings were first introduced in BIBREF2 , BIBREF3 and there have been many variants BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 . Common for all of them is the idea that words can be represented by latent feature vectors. These feature vectors are optimized to maximize the conditional probability of the dataset. Recently BIBREF1 extended the idea of word embeddings to other types of data. EqEmb expand the idea of word embeddings to a new type of data points – equations.",
"There have been different proposed approaches for representing mathematical equations. BIBREF8 introduced the symbol layout tree, a representation that encodes the spatial relationship of variables and operators for the purpose of indexing and retrieving mathematical equations. Our work also falls into the framework of mathematical language processing (MLP) BIBREF9 whose first step is converting mathematical solutions into a series of numerical features."
],
[
"EqEmb are based on word embeddings BIBREF5 or specifically Bernoulli embeddings (b-embs) BIBREF1 . Word embeddings models the probability of a word INLINEFORM0 given its context INLINEFORM1 as a conditional distribution INLINEFORM2 where the context is defined as the set of words INLINEFORM3 in a window of size INLINEFORM4 that surrounds it. In word embeddings each word is assigned to two types of latent feature vectors, the embedding ( INLINEFORM5 ) and context ( INLINEFORM6 ) vectors, both of which are INLINEFORM7 dimensional.",
"B-emb is an exponential family embedding model where the conditional distribution is a Bernoulli: DISPLAYFORM0 ",
"The parameter INLINEFORM0 is defined using the word embedding INLINEFORM1 and the word context INLINEFORM2 vectors: DISPLAYFORM0 ",
"where INLINEFORM0 is the logistic function."
],
[
"Given a dataset of words and equations the goal of the EqEmb models is to derive a semantic representation of each equation. EqEmb model equations in the context of words. EqEmb is based on the idea that a good semantic representation of equations could be discovered by expanding the original word context to include any equations that appear in a possibly larger window around it.",
"We assign embeddings to words ( INLINEFORM0 , INLINEFORM1 ) and equations ( INLINEFORM2 , INLINEFORM3 ). The objective function contains conditionals over the observed words and equations: DISPLAYFORM0 ",
"This is a sum of two sets of conditional distributions, the first over observed words ( INLINEFORM0 ) and the second over observed equations ( INLINEFORM1 ). In word embedding models, INLINEFORM2 and INLINEFORM3 are referred to as embedding and context vectors. Here we use a different terminology: the interaction INLINEFORM4 and feature vector INLINEFORM5 .",
"In word embeddings, the context of the word INLINEFORM0 is defined to index the surrounding words in a small window around it. Here the context of the word INLINEFORM1 will be the original context ( INLINEFORM2 ) and any equations ( INLINEFORM3 ) that are in a possibly larger window around it. This is referred to as the word-equation context window.",
"Both conditionals are Bernoulli distributions. The first conditional is defined over the words in the collection. It has the following parameter: DISPLAYFORM0 ",
"The word context function is: DISPLAYFORM0 ",
"This function encompasses the words in the original word context ( INLINEFORM0 ) and any equations ( INLINEFORM1 ) that appear in a possibly larger window ( INLINEFORM2 ) around it.",
"The second term in the objective corresponds to the sum of the log conditional probabilities of each equation. Its parameter is: DISPLAYFORM0 ",
"Similar to word embeddings, equation context INLINEFORM0 contains words that are in a context window around the equation: DISPLAYFORM0 ",
"The equation context can have a larger window than the word context. Equation feature vectors ( INLINEFORM0 ) are only associated with the first term of the objective function. This function contains the words where the equation appears in their larger context INLINEFORM1 .",
"The left side of Figure FIGREF1 shows an example equation in a scientific article. With a word context of size INLINEFORM0 we model the words in the article while ignoring equations. For example when modeling the word \"embedding\" (highlighted in green) with context window size of 4 (i.e. INLINEFORM1 ), the context contains the words that appear two words before (\"current\" and \"word\") and after (\"recurrent\" and \"version\") this word. With a word-equation context window of size INLINEFORM2 =16, the term for the word \"embedding\" would have the feature vector of the equation as one of its components."
],
[
"Building on our previous method, we define a new model which we call equation unit embeddings (EqEmb-U). EqEmb-U model equations by treating them as sentences where the words are the equation variables, symbols and operators which we refer to as units. The first step in representing equations using equation units is to tokenize them. We use the approach outlined in BIBREF8 which represents equations into a syntax layout tree (SLT), a sequence of SLT tuples each of which contains the spatial relationship information between two equation symbols found within a particular window of equation symbols. Figure FIGREF11 shows example SLT representations of three equations.",
"Each equation INLINEFORM0 is a sequence of equation units INLINEFORM1 , INLINEFORM2 similar to a sentence where the words are the equation units. For each equation unit INLINEFORM3 we assign interaction INLINEFORM4 and feature INLINEFORM5 vectors.",
"We assume that the context of the word INLINEFORM0 will be the original context ( INLINEFORM1 ) and the equation units ( INLINEFORM2 ) of any equations that are in the word-equation context window. In addition for each equation unit we define its unit context INLINEFORM3 to be the set of surrounding equation units in a small window INLINEFORM4 around it: DISPLAYFORM0 ",
"The objective is over two conditionals, one for each context type: DISPLAYFORM0 ",
"The two parameters are: DISPLAYFORM0 ",
"We define equation-level representations by averaging the representations of their constituent units: DISPLAYFORM0 "
],
[
"We use stochastic gradient descent with Adagrad BIBREF10 to fit the embedding and context vectors. Following BIBREF1 , we reduce the computational complexity by splitting the gradient into two terms. The first term contains the non-zero entries ( INLINEFORM0 ); the second term contains the zero entries ( INLINEFORM1 ). We compute the exact gradient for the non-zero points; We subsample for the zero data points. This is similar to negative sampling BIBREF5 , which also down-weights the contributions of the zero points. Unlike BIBREF1 which uses INLINEFORM2 regularization to protect against overfitting when fitting the embedding vectors we use early stopping based on validation accuracy, for the same effect."
],
[
"We studied the performance of EqEmb on articles from the arXiv. EqEmb models provide better fits than existing embedding approaches, and infer meaningful semantic relationships between equations and words in the collection.",
"We present a comparison of the proposed models to existing word embeddings approaches. These are: the Bernoulli embeddings (b-emb) BIBREF1 , continuous bag-of-words (CBOW) BIBREF5 , Distributed Memory version of Paragraph Vector (PV-DM) BIBREF11 and the Global Vectors (GloVe) BIBREF6 model."
],
[
"Our datasets are scientific articles that were published on arXiv. The sets contain articles (in LaTeX format) from four computer science domains: NLP, IR, AI, and ML. They were created by filtering arXiv articles based on their primary and secondary categories. We used the following categories for the four collections: cs.cl for NLP; cs.ir for IR; cs.ai for AI and stat.ml, stat.co, stat.me or cs.lg for ML.",
"Table TABREF22 shows the number of documents along with the number of unique words, equations and equation units for each collection. The equations are display equations that were enumerated in the LaTeX version of the articles. Unlike inline equations, which in many instances represent variables with general meaning (e.g. INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , etc.) and even numerical values, display equations typically represent mathematical concepts with more specific semantics. For the empirical study we used a random subset of 2k singletons from the total collection, along with all equations that occur more than once. For the qualitative analysis, we used all equations.",
"We extracted words by tokenizing articles using the NLTK package BIBREF12 and restricted the vocabulary to noun phrases and adjectives. The vocabulary was selected by:",
"removing common stopwords",
"treating the top 25 most frequent words as stop words and removing them",
"including words whose term frequency is greater than or equal to 10 and whose character length is greater than or equal to 4",
"including the top 50 most frequent abbreviations whose character length is 3 (an exception to our previous rule)",
"When tokenizing equations, we first create an effective vocabulary of equation units. We convert equations into SLT format and collect collection wide frequency statistics over the equation units. The vocabulary contains all equation units whose frequency count is greater than INLINEFORM0 ."
],
[
"We analyzed EqEmb models performance using a held out set of words that we generate for each equation in our collections. Held out sets are constructed using the following procedure: We traverse over the collections and for every discovered equation we randomly sample words from its context set. The held out set contains the sampled words and their context window which also includes the equation. For each held out word we also generate a set of negative samples for the given word context. We perform the same procedure to form a validation set. For each of the INLINEFORM0 equations in a collection, two held out words INLINEFORM1 are sampled. For a context window of size 4 the sampled word context is defined as INLINEFORM2 .",
"During training we compute the predictive log-likelihood on the validation set of words using the fitted model after each iteration over the collection. A fitted model is a collection of interaction and feature vectors for each equation and word. Given a fitted model, the log probability of a held out word is computed using the following formula: DISPLAYFORM0 ",
"which is the softmax function computed over a set of negative samples INLINEFORM0 and the held out word. In particular, we ran the model 20 times across the collection. After each collection iteration INLINEFORM1 we observe whether the predictive log-likelihood continues to improve compared to the previous ( INLINEFORM2 ) iteration. We stop at the INLINEFORM3 -th iteration when that is no longer the case.",
"When modeling equations using EqEmb we perform two passes over the collection. In the first pass we only model words while ignoring equations. In the second pass we only model equations while holding fixed the interaction and feature vectors of all words. In context of EqEmb we treat equations as singleton words and the broader question that we are trying to answer is whether we can learn something about the meaning of the singleton words given the fixed word interaction and feature vectors.",
"In our analysis we evaluated the performance of the EqEmb models across different sizes for the word context (W), word-equation context (E) and embedding vector size (K). Model performance was compared with 4 existing embedding models: b-emb, CBOW, GloVe and PV-DM. We used the gensim BIBREF13 implementation of the CBOW and PV-DM models. When modeling equations using the first 3 embedding models we treat equations as regular words in the collection. In case of the PV-DM model we parse the article so that equations and their surrounding context of length equivalent to the word-equation context window are labeled as a separate paragraph. We also assign paragraph labels to the article text occurring between equation paragraphs."
],
[
"Table TABREF23 shows the performance comparison results across the different embeddings models. For each model, performance results are shown on 4 latent dimension values (K=25, 50, 75 and 100). For each dimension we ran experiments by varying the context window size for words (Word Context=4, 8 and 16). In addition for the EqEmb, EqEmb-U and PV-DM models we also varied the word-equation window size (E=8 and 16). Comparisons across models are performed using the pseudo log-likelihood measure BIBREF14 . For a given held-out word INLINEFORM0 and a set of negative samples INLINEFORM1 the pseudo log-likelihood is defined as: DISPLAYFORM0 ",
"We treat this a downstream task. For each model type and latent dimension configuration, we use the validation set to select the best model configuration (i.e. combination of context window sizes). We report values on both datasets.",
"Across all collections EqEmb outperform previous embedding models and EqEmb-U further improves performance."
],
[
"EqEmb help obtain word descriptions of the equations. Table TABREF25 shows example equation and the 5 most similar words obtained using 4 different embedding approaches which include CBOW, PV-DM, GloVe and EqEmb. For the query equation we obtain most similar words by computing Cosine distance between the embedding vector ( INLINEFORM0 ) representation of the query equation and the context vector representation of the words ( INLINEFORM1 ).",
"With the embedding representation of words and equations we could also perform equation search using words as queries. For a set of query words we generate its embedding representation by taking the average of the embedding representation of each word and compute Cosine distance across all the equations embeddings. Table TABREF25 shows an example query, which consists of three words, and its 5 nearest equations discovered using EqEmb. For a given word query, EqEmb are able to retrieve query relevant equations."
],
[
"In addition to words, EqEmb models can capture the semantic similarity between equations in the collection. We performed qualitative analysis of the model performance using all discovered equations across the 4 collection. Table TABREF24 shows the query equation used in the previous analysis and its 5 most similar equations discovered using EqEmb-U. For qualitative comparisons across the other embedding models, in Appendix A we provide results over the same query using CBOW, PV-DM, GloVe and EqEmb. In Appendix A reader should notice the difference in performance between EqEmb-U and EqEmb compared to existing embedding models which fail to discover semantically similar equations. tab:irexample1,tab:nlpexample2 show two additional example equation and its 5 most similar equations and words discovered using the EqEmb model. Similar words were ranked by computing Cosine distance between the embedding vector ( INLINEFORM0 ) representation of the query equation and the context vector representation of the words ( INLINEFORM1 ). Similar equations were discovered using Euclidean distance computed between the context vector representations of the equations ( INLINEFORM2 ). We give additional example results in Appendix B."
],
[
"We presented unsupervised approaches for semantic representations of mathematical equations using their surrounding words. Across 4 different collections we showed that out methods offer more effective modeling compared to existing embedding models. We also demonstrate that they can capture the semantic similarity between equations and the words in the collection. In the future we plan to explore how EqEmb could be expend to represent other objects such as images, captions and inline figures."
]
],
"section_name": [
"Introduction",
"Related Work",
"Equation Embeddings Models",
"Equation Embeddings",
"Equation Unit Embeddings",
"Computation",
"Empirical Study",
"Datasets",
"Experimental Setup",
"Results",
"Word Representation of Equations",
"Discovering Semantically Similar Equations",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"938dbfd995022a81bf59f32d7cfa1888356a0b21",
"f985f304762d1df633ba08ef8963d2a0c44a93a4"
],
"answer": [
{
"evidence": [
"We present a comparison of the proposed models to existing word embeddings approaches. These are: the Bernoulli embeddings (b-emb) BIBREF1 , continuous bag-of-words (CBOW) BIBREF5 , Distributed Memory version of Paragraph Vector (PV-DM) BIBREF11 and the Global Vectors (GloVe) BIBREF6 model."
],
"extractive_spans": [
"Bernoulli embeddings (b-emb) BIBREF1 , continuous bag-of-words (CBOW) BIBREF5 , Distributed Memory version of Paragraph Vector (PV-DM) BIBREF11 and the Global Vectors (GloVe) BIBREF6 model"
],
"free_form_answer": "",
"highlighted_evidence": [
"We present a comparison of the proposed models to existing word embeddings approaches. These are: the Bernoulli embeddings (b-emb) BIBREF1 , continuous bag-of-words (CBOW) BIBREF5 , Distributed Memory version of Paragraph Vector (PV-DM) BIBREF11 and the Global Vectors (GloVe) BIBREF6 model."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We present a comparison of the proposed models to existing word embeddings approaches. These are: the Bernoulli embeddings (b-emb) BIBREF1 , continuous bag-of-words (CBOW) BIBREF5 , Distributed Memory version of Paragraph Vector (PV-DM) BIBREF11 and the Global Vectors (GloVe) BIBREF6 model.",
"In this paper we propose equation embeddings (EqEmb), an unsupervised approach for learning distributed representations of equations. The idea is to treat the equation as a \"singleton word,\" one that appears once but that appears in the context of other words. The surrounding text of the equation—and in particular, the distributed representations of that text—provides the data we need to develop a useful representation of the equation.",
"Building on our previous method, we define a new model which we call equation unit embeddings (EqEmb-U). EqEmb-U model equations by treating them as sentences where the words are the equation variables, symbols and operators which we refer to as units. The first step in representing equations using equation units is to tokenize them. We use the approach outlined in BIBREF8 which represents equations into a syntax layout tree (SLT), a sequence of SLT tuples each of which contains the spatial relationship information between two equation symbols found within a particular window of equation symbols. Figure FIGREF11 shows example SLT representations of three equations."
],
"extractive_spans": [
"Bernoulli embeddings",
"continuous bag-of-words",
"Distributed Memory version of Paragraph Vector",
"Global Vectors",
"equation embeddings",
"equation unit embeddings"
],
"free_form_answer": "",
"highlighted_evidence": [
"We present a comparison of the proposed models to existing word embeddings approaches. These are: the Bernoulli embeddings (b-emb) BIBREF1 , continuous bag-of-words (CBOW) BIBREF5 , Distributed Memory version of Paragraph Vector (PV-DM) BIBREF11 and the Global Vectors (GloVe) BIBREF6 model.",
"In this paper we propose equation embeddings (EqEmb), an unsupervised approach for learning distributed representations of equations. ",
"Building on our previous method, we define a new model which we call equation unit embeddings (EqEmb-U)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"64c47caeff804ef251a08df2f8672ea726528e06",
"8f1299b5537074e373dc79cd43da82a33fc24818"
],
"answer": [
{
"evidence": [
"In addition to words, EqEmb models can capture the semantic similarity between equations in the collection. We performed qualitative analysis of the model performance using all discovered equations across the 4 collection. Table TABREF24 shows the query equation used in the previous analysis and its 5 most similar equations discovered using EqEmb-U. For qualitative comparisons across the other embedding models, in Appendix A we provide results over the same query using CBOW, PV-DM, GloVe and EqEmb. In Appendix A reader should notice the difference in performance between EqEmb-U and EqEmb compared to existing embedding models which fail to discover semantically similar equations. tab:irexample1,tab:nlpexample2 show two additional example equation and its 5 most similar equations and words discovered using the EqEmb model. Similar words were ranked by computing Cosine distance between the embedding vector ( INLINEFORM0 ) representation of the query equation and the context vector representation of the words ( INLINEFORM1 ). Similar equations were discovered using Euclidean distance computed between the context vector representations of the equations ( INLINEFORM2 ). We give additional example results in Appendix B."
],
"extractive_spans": [],
"free_form_answer": "By using Euclidean distance computed between the context vector representations of the equations",
"highlighted_evidence": [
"Similar equations were discovered using Euclidean distance computed between the context vector representations of the equations ( INLINEFORM2 ). We give additional example results in Appendix B."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In addition to words, EqEmb models can capture the semantic similarity between equations in the collection. We performed qualitative analysis of the model performance using all discovered equations across the 4 collection. Table TABREF24 shows the query equation used in the previous analysis and its 5 most similar equations discovered using EqEmb-U. For qualitative comparisons across the other embedding models, in Appendix A we provide results over the same query using CBOW, PV-DM, GloVe and EqEmb. In Appendix A reader should notice the difference in performance between EqEmb-U and EqEmb compared to existing embedding models which fail to discover semantically similar equations. tab:irexample1,tab:nlpexample2 show two additional example equation and its 5 most similar equations and words discovered using the EqEmb model. Similar words were ranked by computing Cosine distance between the embedding vector ( INLINEFORM0 ) representation of the query equation and the context vector representation of the words ( INLINEFORM1 ). Similar equations were discovered using Euclidean distance computed between the context vector representations of the equations ( INLINEFORM2 ). We give additional example results in Appendix B."
],
"extractive_spans": [
"Similar words were ranked by computing Cosine distance between the embedding vector ( INLINEFORM0 ) representation of the query equation and the context vector representation of the words ( INLINEFORM1 ). Similar equations were discovered using Euclidean distance computed between the context vector representations of the equations ( INLINEFORM2 ). We give additional example results in Appendix B."
],
"free_form_answer": "",
"highlighted_evidence": [
"In addition to words, EqEmb models can capture the semantic similarity between equations in the collection. We performed qualitative analysis of the model performance using all discovered equations across the 4 collection. Table TABREF24 shows the query equation used in the previous analysis and its 5 most similar equations discovered using EqEmb-U. For qualitative comparisons across the other embedding models, in Appendix A we provide results over the same query using CBOW, PV-DM, GloVe and EqEmb. In Appendix A reader should notice the difference in performance between EqEmb-U and EqEmb compared to existing embedding models which fail to discover semantically similar equations. tab:irexample1,tab:nlpexample2 show two additional example equation and its 5 most similar equations and words discovered using the EqEmb model. Similar words were ranked by computing Cosine distance between the embedding vector ( INLINEFORM0 ) representation of the query equation and the context vector representation of the words ( INLINEFORM1 ). Similar equations were discovered using Euclidean distance computed between the context vector representations of the equations ( INLINEFORM2 ). We give additional example results in Appendix B."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
}
],
"nlp_background": [
"five",
"five"
],
"paper_read": [
"no",
"no"
],
"question": [
"What word embeddings do they test?",
"How do they define similar equations?"
],
"question_id": [
"493e971ee3f57a821ef1f67ef3cd47ade154e7c4",
"8dd8e5599fc56562f2acbc16dd8544689cddd938"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1. Top left: arXiv article snippet (Li et al., 2015) that contains an equation of a neural network layer along with its surrounding context. Highlighted in green are words whose word-equation context window contains the equation. Top right: Extracted equation (top) and its 5 nearest equations (blue rectangle) and words (green rectangle) discovered using our approach. Discovered equations relate to neural network layers while nearest words bear semantic relatedness with the equation definition. Bottom: Word-equation context window example. Highlighted in green is an example word whose word-equation context window contain the original context words (red) from the effective vocabulary that appear in a context window of size 4 along with the equation.",
"Figure 2. Examples of Syntax Layout Tree (SLT) representation of equations using a symbol window of size one. Each tuple represents the special relationship between two symbols (n-to the right; a-above; u-under; o-over; w-within).",
"Table 1. Collections statistics across arXiv articles that we use in our analysis.",
"Table 2. EqEmb outperform previous embedding models; EqEmb-U further improves performance. Performance comparisons between CBOW, GloVe, PV-DM, b-emb, EqEmb and EqEmb-U using log-likelihood computed on test and validation datasets. Comparisons were done over 4 different collections of scientific articles (NLP, IR, AI and ML) and across different latent dimensions (K=25, 50, 75 and 100).",
"Table 3. Example query equation (top row) and its 5 nearest equations discovered using EqEmb-U.",
"Table 4. Example query equation (top row) and its 5 nearest equations (left) and words (right) discovered using EqEmb. All similar equations relate to the LDA model.",
"Table 5. Example query equation (top row) and its 5 nearest equations (left column) and words (right column) discovered using EqEmb. All similar equations relate to classification performance measures such as F-measure.",
"Table 6. Example query equation (top row) and its five most similar words obtained using CBOW, PV-DM, GloVe and EqEmb.",
"Table 7. Example query which consists of 3 words (”similarity”, ”distance” and ”cosine”) and its 5 nearest equations discovered using EqEmb. For a given word query, EqEmb are able to retrieve query relevant equations.",
"Table 8. CBOW: Example query equation (top row) and its top 5 nearest equations discovered with this model.",
"Table 9. PV-DM: Example query equation (top row) and its top 5 nearest equations discovered model.",
"Table 10. GloVe: Example query equation (top row) and its 5 nearest equations discovered with this model.",
"Table 11. EqEmb: Example query equation (top row) and its 5 nearest equations discovered with this model.",
"Table 12. Example query equation (top row) and its 5 nearest equations (left) and words (right) discovered using EqEmb. All similar equations relate to neural network layers such as the query equation."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"5-Table1-1.png",
"6-Table2-1.png",
"7-Table3-1.png",
"7-Table4-1.png",
"7-Table5-1.png",
"8-Table6-1.png",
"8-Table7-1.png",
"10-Table8-1.png",
"10-Table9-1.png",
"10-Table10-1.png",
"11-Table11-1.png",
"12-Table12-1.png"
]
} | [
"How do they define similar equations?"
] | [
[
"1803.09123-Discovering Semantically Similar Equations-0"
]
] | [
"By using Euclidean distance computed between the context vector representations of the equations"
] | 84 |
1701.08229 | Feature Studies to Inform the Classification of Depressive Symptoms from Twitter Data for Population Health | The utility of Twitter data as a medium to support population-level mental health monitoring is not well understood. In an effort to better understand the predictive power of supervised machine learning classifiers and the influence of feature sets for efficiently classifying depression-related tweets on a large-scale, we conducted two feature study experiments. In the first experiment, we assessed the contribution of feature groups such as lexical information (e.g., unigrams) and emotions (e.g., strongly negative) using a feature ablation study. In the second experiment, we determined the percentile of top ranked features that produced the optimal classification performance by applying a three-step feature elimination approach. In the first experiment, we observed that lexical features are critical for identifying depressive symptoms, specifically for depressed mood (-35 points) and for disturbed sleep (-43 points). In the second experiment, we observed that the optimal F1-score performance of top ranked features in percentiles variably ranged across classes e.g., fatigue or loss of energy (5th percentile, 288 features) to depressed mood (55th percentile, 3,168 features) suggesting there is no consistent count of features for predicting depressive-related tweets. We conclude that simple lexical features and reduced feature sets can produce comparable results to larger feature sets. | {
"paragraphs": [
[
"In recent years, there has been a movement to leverage social medial data to detect, estimate, and track the change in prevalence of disease. For example, eating disorders in Spanish language Twitter tweets BIBREF0 and influenza surveillance BIBREF1 . More recently, social media has been leveraged to monitor social risks such as prescription drug and smoking behaviors BIBREF2 , BIBREF3 , BIBREF4 as well as a variety of mental health disorders including suicidal ideation BIBREF5 , attention deficient hyperactivity disorder BIBREF6 and major depressive disorder BIBREF7 . In the case of major depressive disorder, recent efforts range from characterizing linguistic phenomena associated with depression BIBREF8 and its subtypes e.g., postpartum depression BIBREF5 , to identifying specific depressive symptoms BIBREF9 , BIBREF10 e.g., depressed mood. However, more research is needed to better understand the predictive power of supervised machine learning classifiers and the influence of feature groups and feature sets for efficiently classifying depression-related tweets to support mental health monitoring at the population-level BIBREF11 .",
"This paper builds upon related works toward classifying Twitter tweets representing symptoms of major depressive disorder by assessing the contribution of lexical features (e.g., unigrams) and emotion (e.g., strongly negative) to classification performance, and by applying methods to eliminate low-value features."
],
[
"Specifically, we conducted a feature ablation study to assess the informativeness of each feature group and a feature elimination study to determine the optimal feature sets for classifying Twitter tweets. We leveraged an existing, annotated Twitter dataset that was constructed based on a hierarchical model of depression-related symptoms BIBREF12 , BIBREF13 . The dataset contains 9,473 annotations for 9,300 tweets. Each tweet is annotated as no evidence of depression (e.g., “Citizens fear an economic depression\") or evidence of depression (e.g., “depressed over disappointment\"). If a tweet is annotated evidence of depression, then it is further annotated with one or more depressive symptoms, for example, depressed mood (e.g., “feeling down in the dumps\"), disturbed sleep (e.g., “another restless night\"), or fatigue or loss of energy (e.g., “the fatigue is unbearable\") BIBREF10 . For each class, every annotation (9,473 tweets) is binarized as the positive class e.g., depressed mood=1 or negative class e.g., not depressed mood=0."
],
[
"Furthermore, this dataset was encoded with 7 feature groups with associated feature values binarized (i.e., present=1 or absent=0) to represent potentially informative features for classifying depression-related classes. We describe the feature groups by type, subtype, and provide one or more examples of words representing the feature subtype from a tweet:",
"lexical features, unigrams, e.g., “depressed”;",
"syntactic features, parts of speech, e.g., “cried” encoded as V for verb;",
"emotion features, emoticons, e.g., :( encoded as SAD;",
"demographic features, age and gender e.g., “this semester” encoded as an indicator of 19-22 years of age and “my girlfriend” encoded as an indicator of male gender, respectively;",
"sentiment features, polarity and subjectivity terms with strengths, e.g., “terrible” encoded as strongly negative and strongly subjective;",
"personality traits, neuroticism e.g., “pissed off” implies neuroticism;",
"LIWC Features, indicators of an individual's thoughts, feelings, personality, and motivations, e.g., “feeling” suggestions perception, feeling, insight, and cognitive mechanisms experienced by the Twitter user.",
"A more detailed description of leveraged features and their values, including LIWC categories, can be found in BIBREF10 .",
"Based on our prior initial experiments using these feature groups BIBREF10 , we learned that support vector machines perform with the highest F1-score compared to other supervised approaches. For this study, we aim to build upon this work by conducting two experiments: 1) to assess the contribution of each feature group and 2) to determine the optimal percentile of top ranked features for classifying Twitter tweets in the depression schema hierarchy."
],
[
"Feature ablation studies are conducted to assess the informativeness of a feature group by quantifying the change in predictive power when comparing the performance of a classifier trained with the all feature groups versus the performance without a particular feature group. We conducted a feature ablation study by holding out (sans) each feature group and training and testing the support vector model using a linear kernel and 5-fold, stratified cross-validation. We report the average F1-score from our baseline approach (all feature groups) and report the point difference (+ or -) in F1-score performance observed by ablating each feature set.",
"By ablating each feature group from the full dataset, we observed the following count of features - sans lexical: 185, sans syntactic: 16,935, sans emotion: 16,954, sans demographics: 16,946, sans sentiment: 16,950, sans personality: 16,946, and sans LIWC: 16,832. In Figure 1, compared to the baseline performance, significant drops in F1-scores resulted from sans lexical for depressed mood (-35 points), disturbed sleep (-43 points), and depressive symptoms (-45 points). Less extensive drops also occurred for evidence of depression (-14 points) and fatigue or loss of energy (-3 points). In contrast, a 3 point gain in F1-score was observed for no evidence of depression. We also observed notable drops in F1-scores for disturbed sleep by ablating demographics (-7 points), emotion (-5 points), and sentiment (-5 points) features. These F1-score drops were accompanied by drops in both recall and precision. We found equal or higher F1-scores by removing non-lexical feature groups for no evidence of depression (0-1 points), evidence of depression (0-1 points), and depressive symptoms (2 points).",
"Unsurprisingly, lexical features (unigrams) were the largest contributor to feature counts in the dataset. We observed that lexical features are also critical for identifying depressive symptoms, specifically for depressed mood and for disturbed sleep. For the classes higher in the hierarchy - no evidence of depression, evidence of depression, and depressive symptoms - the classifier produced consistent F1-scores, even slightly above the baseline for depressive symptoms and minor fluctuations of change in recall and precision when removing other feature groups suggesting that the contribution of non-lexical features to classification performance was limited. However, notable changes in F1-score were observed for the classes lower in the hierarchy including disturbed sleep and fatigue or loss of energy. For instance, changes in F1-scores driven by both recall and precision were observed for disturbed sleep by ablating demographics, emotion, and sentiment features, suggesting that age or gender (“mid-semester exams have me restless”), polarity and subjective terms (“lack of sleep is killing me”), and emoticons (“wide awake :(”) could be important for both identifying and correctly classifying a subset of these tweets."
],
[
"Feature elimination strategies are often taken 1) to remove irrelevant or noisy features, 2) to improve classifier performance, and 3) to reduce training and run times. We conducted an experiment to determine whether we could maintain or improve classifier performances by applying the following three-tiered feature elimination approach:",
"Reduction We reduced the dataset encoded for each class by eliminating features that occur less than twice in the full dataset.",
"Selection We iteratively applied Chi-Square feature selection on the reduced dataset, selecting the top percentile of highest ranked features in increments of 5 percent to train and test the support vector model using a linear kernel and 5-fold, stratified cross-validation.",
"Rank We cumulatively plotted the average F1-score performances of each incrementally added percentile of top ranked features. We report the percentile and count of features resulting in the first occurrence of the highest average F1-score for each class.",
"All experiments were programmed using scikit-learn 0.18.",
"The initial matrices of almost 17,000 features were reduced by eliminating features that only occurred once in the full dataset, resulting in 5,761 features. We applied Chi-Square feature selection and plotted the top-ranked subset of features for each percentile (at 5 percent intervals cumulatively added) and evaluated their predictive contribution using the support vector machine with linear kernel and stratified, 5-fold cross validation.",
"In Figure 2, we observed optimal F1-score performance using the following top feature counts: no evidence of depression: F1: 87 (15th percentile, 864 features), evidence of depression: F1: 59 (30th percentile, 1,728 features), depressive symptoms: F1: 55 (15th percentile, 864 features), depressed mood: F1: 39 (55th percentile, 3,168 features), disturbed sleep: F1: 46 (10th percentile, 576 features), and fatigue or loss of energy: F1: 72 (5th percentile, 288 features) (Figure 1). We note F1-score improvements for depressed mood from F1: 13 at the 1st percentile to F1: 33 at the 20th percentile.",
"We observed peak F1-score performances at low percentiles for fatigue or loss of energy (5th percentile), disturbed sleep (10th percentile) as well as depressive symptoms and no evidence of depression (both 15th percentile) suggesting fewer features are needed to reach optimal performance. In contrast, peak F1-score performances occurred at moderate percentiles for evidence of depression (30th percentile) and depressed mood (55th percentile) suggesting that more features are needed to reach optimal performance. However, one notable difference between these two classes is the dramatic F1-score improvements for depressed mood i.e., 20 point increase from the 1st percentile to the 20th percentile compared to the more gradual F1-score improvements for evidence of depression i.e., 11 point increase from the 1st percentile to the 20th percentile. This finding suggests that for identifying depressed mood a variety of features are needed before incremental gains are observed."
],
[
"From our annotated dataset of Twitter tweets (n=9,300 tweets), we conducted two feature studies to better understand the predictive power of several feature groups for classifying whether or not a tweet contains no evidence of depression (n=6,829 tweets) or evidence of depression (n=2,644 tweets). If there was evidence of depression, we determined whether the tweet contained one or more depressive symptoms (n=1,656 tweets) and further classified the symptom subtype of depressed mood (n=1,010 tweets), disturbed sleep (n=98 tweets), or fatigue or loss of energy (n=427 tweets) using support vector machines. From our prior work BIBREF10 and in Figure 1, we report the performance for prediction models built by training a support vector machine using 5-fold, stratified cross-validation with all feature groups as a baseline for each class. We observed high performance for no evidence of depression and fatigue or loss of energy and moderate performance for all remaining classes."
],
[
"We conducted two feature study experiments: 1) a feature ablation study to assess the contribution of feature groups and 2) a feature elimination study to determine the optimal percentile of top ranked features for classifying Twitter tweets in the depression schema hierarchy."
],
[
"Our next step is to address the classification of rarer depressive symptoms suggestive of major depressive disorder from our dataset and hierarchy including inappropriate guilt, difficulty concentrating, psychomotor agitation or retardation, weight loss or gain, and anhedonia BIBREF15 , BIBREF16 . We are developing a population-level monitoring framework designed to estimate the prevalence of depression (and depression-related symptoms and psycho-social stressors) over millions of United States-geocoded tweets. Identifying the most discriminating feature sets and natural language processing classifiers for each depression symptom is vital for this goal."
],
[
"In summary, we conducted two feature study experiments to assess the contribution of feature groups and to determine the optimal percentile of top ranked features for classifying Twitter tweets in the depression schema hierarchy. From these experiments, we conclude that simple lexical features and reduced feature sets can produce comparable results to the much larger feature dataset."
],
[
"Research reported in this publication was supported by the National Library of Medicine of the [United States] National Institutes of Health under award numbers K99LM011393 and R00LM011393. This study was granted an exemption from review by the University of Utah Institutional Review Board (IRB 00076188). Note that in order to protect tweeter anonymity, we have not reproduced tweets verbatim. Example tweets shown were generated by the researchers as exemplars only. Finally, we would like to thank the anonymous reviewers of this paper for their valuable comments."
]
],
"section_name": [
"Introduction",
"METHODS",
"Features",
"Feature Contribution",
"Feature Elimination",
"RESULTS",
"Discussion",
"Future Work",
"Conclusions",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"82eee9c6d27df15b1da69a0c9895408d58d868eb",
"d96b703566c952266783d7c8e9b9d949f7537ac3"
],
"answer": [
{
"evidence": [
"Specifically, we conducted a feature ablation study to assess the informativeness of each feature group and a feature elimination study to determine the optimal feature sets for classifying Twitter tweets. We leveraged an existing, annotated Twitter dataset that was constructed based on a hierarchical model of depression-related symptoms BIBREF12 , BIBREF13 . The dataset contains 9,473 annotations for 9,300 tweets. Each tweet is annotated as no evidence of depression (e.g., “Citizens fear an economic depression\") or evidence of depression (e.g., “depressed over disappointment\"). If a tweet is annotated evidence of depression, then it is further annotated with one or more depressive symptoms, for example, depressed mood (e.g., “feeling down in the dumps\"), disturbed sleep (e.g., “another restless night\"), or fatigue or loss of energy (e.g., “the fatigue is unbearable\") BIBREF10 . For each class, every annotation (9,473 tweets) is binarized as the positive class e.g., depressed mood=1 or negative class e.g., not depressed mood=0."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We leveraged an existing, annotated Twitter dataset that was constructed based on a hierarchical model of depression-related symptoms BIBREF12 , BIBREF13 . The dataset contains 9,473 annotations for 9,300 tweets. Each tweet is annotated as no evidence of depression (e.g., “Citizens fear an economic depression\") or evidence of depression (e.g., “depressed over disappointment\"). "
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"Specifically, we conducted a feature ablation study to assess the informativeness of each feature group and a feature elimination study to determine the optimal feature sets for classifying Twitter tweets. We leveraged an existing, annotated Twitter dataset that was constructed based on a hierarchical model of depression-related symptoms BIBREF12 , BIBREF13 . The dataset contains 9,473 annotations for 9,300 tweets. Each tweet is annotated as no evidence of depression (e.g., “Citizens fear an economic depression\") or evidence of depression (e.g., “depressed over disappointment\"). If a tweet is annotated evidence of depression, then it is further annotated with one or more depressive symptoms, for example, depressed mood (e.g., “feeling down in the dumps\"), disturbed sleep (e.g., “another restless night\"), or fatigue or loss of energy (e.g., “the fatigue is unbearable\") BIBREF10 . For each class, every annotation (9,473 tweets) is binarized as the positive class e.g., depressed mood=1 or negative class e.g., not depressed mood=0."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The dataset contains 9,473 annotations for 9,300 tweets. Each tweet is annotated as no evidence of depression (e.g., “Citizens fear an economic depression\") or evidence of depression (e.g., “depressed over disappointment\"). If a tweet is annotated evidence of depression, then it is further annotated with one or more depressive symptoms, for example, depressed mood (e.g., “feeling down in the dumps\"), disturbed sleep (e.g., “another restless night\"), or fatigue or loss of energy (e.g., “the fatigue is unbearable\") BIBREF10 . For each class, every annotation (9,473 tweets) is binarized as the positive class e.g., depressed mood=1 or negative class e.g., not depressed mood=0."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"82e059daa5257165995afb3457d6f4c843bff243",
"a8c4504fdfcc8633a05da8103cb92f85c753b445"
],
"answer": [
{
"evidence": [
"Feature elimination strategies are often taken 1) to remove irrelevant or noisy features, 2) to improve classifier performance, and 3) to reduce training and run times. We conducted an experiment to determine whether we could maintain or improve classifier performances by applying the following three-tiered feature elimination approach:",
"Reduction We reduced the dataset encoded for each class by eliminating features that occur less than twice in the full dataset.",
"Selection We iteratively applied Chi-Square feature selection on the reduced dataset, selecting the top percentile of highest ranked features in increments of 5 percent to train and test the support vector model using a linear kernel and 5-fold, stratified cross-validation.",
"Rank We cumulatively plotted the average F1-score performances of each incrementally added percentile of top ranked features. We report the percentile and count of features resulting in the first occurrence of the highest average F1-score for each class."
],
"extractive_spans": [
"Reduction",
"Selection",
"Rank"
],
"free_form_answer": "",
"highlighted_evidence": [
"We conducted an experiment to determine whether we could maintain or improve classifier performances by applying the following three-tiered feature elimination approach:\n\nReduction We reduced the dataset encoded for each class by eliminating features that occur less than twice in the full dataset.\n\nSelection We iteratively applied Chi-Square feature selection on the reduced dataset, selecting the top percentile of highest ranked features in increments of 5 percent to train and test the support vector model using a linear kernel and 5-fold, stratified cross-validation.\n\nRank We cumulatively plotted the average F1-score performances of each incrementally added percentile of top ranked features. We report the percentile and count of features resulting in the first occurrence of the highest average F1-score for each class."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Feature elimination strategies are often taken 1) to remove irrelevant or noisy features, 2) to improve classifier performance, and 3) to reduce training and run times. We conducted an experiment to determine whether we could maintain or improve classifier performances by applying the following three-tiered feature elimination approach:",
"Reduction We reduced the dataset encoded for each class by eliminating features that occur less than twice in the full dataset.",
"Selection We iteratively applied Chi-Square feature selection on the reduced dataset, selecting the top percentile of highest ranked features in increments of 5 percent to train and test the support vector model using a linear kernel and 5-fold, stratified cross-validation.",
"Rank We cumulatively plotted the average F1-score performances of each incrementally added percentile of top ranked features. We report the percentile and count of features resulting in the first occurrence of the highest average F1-score for each class."
],
"extractive_spans": [],
"free_form_answer": "reduced the dataset by eliminating features, apply feature selection to select highest ranked features to train and test the model and rank the performance of incrementally adding features.",
"highlighted_evidence": [
"Feature elimination strategies are often taken 1) to remove irrelevant or noisy features, 2) to improve classifier performance, and 3) to reduce training and run times. We conducted an experiment to determine whether we could maintain or improve classifier performances by applying the following three-tiered feature elimination approach:\n\nReduction We reduced the dataset encoded for each class by eliminating features that occur less than twice in the full dataset.\n\nSelection We iteratively applied Chi-Square feature selection on the reduced dataset, selecting the top percentile of highest ranked features in increments of 5 percent to train and test the support vector model using a linear kernel and 5-fold, stratified cross-validation.\n\nRank We cumulatively plotted the average F1-score performances of each incrementally added percentile of top ranked features. We report the percentile and count of features resulting in the first occurrence of the highest average F1-score for each class."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"65e62a14ca30f2f9fa1f0eecc3791dd34fc00633",
"d7607731de0ccd3a1f9ea8b3bf0143044140ba8b"
],
"answer": [
{
"evidence": [
"Specifically, we conducted a feature ablation study to assess the informativeness of each feature group and a feature elimination study to determine the optimal feature sets for classifying Twitter tweets. We leveraged an existing, annotated Twitter dataset that was constructed based on a hierarchical model of depression-related symptoms BIBREF12 , BIBREF13 . The dataset contains 9,473 annotations for 9,300 tweets. Each tweet is annotated as no evidence of depression (e.g., “Citizens fear an economic depression\") or evidence of depression (e.g., “depressed over disappointment\"). If a tweet is annotated evidence of depression, then it is further annotated with one or more depressive symptoms, for example, depressed mood (e.g., “feeling down in the dumps\"), disturbed sleep (e.g., “another restless night\"), or fatigue or loss of energy (e.g., “the fatigue is unbearable\") BIBREF10 . For each class, every annotation (9,473 tweets) is binarized as the positive class e.g., depressed mood=1 or negative class e.g., not depressed mood=0."
],
"extractive_spans": [
"no evidence of depression",
"depressed mood",
"disturbed sleep",
"fatigue or loss of energy"
],
"free_form_answer": "",
"highlighted_evidence": [
"Each tweet is annotated as no evidence of depression (e.g., “Citizens fear an economic depression\") or evidence of depression (e.g., “depressed over disappointment\"). If a tweet is annotated evidence of depression, then it is further annotated with one or more depressive symptoms, for example, depressed mood (e.g., “feeling down in the dumps\"), disturbed sleep (e.g., “another restless night\"), or fatigue or loss of energy (e.g., “the fatigue is unbearable\") BIBREF10 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Specifically, we conducted a feature ablation study to assess the informativeness of each feature group and a feature elimination study to determine the optimal feature sets for classifying Twitter tweets. We leveraged an existing, annotated Twitter dataset that was constructed based on a hierarchical model of depression-related symptoms BIBREF12 , BIBREF13 . The dataset contains 9,473 annotations for 9,300 tweets. Each tweet is annotated as no evidence of depression (e.g., “Citizens fear an economic depression\") or evidence of depression (e.g., “depressed over disappointment\"). If a tweet is annotated evidence of depression, then it is further annotated with one or more depressive symptoms, for example, depressed mood (e.g., “feeling down in the dumps\"), disturbed sleep (e.g., “another restless night\"), or fatigue or loss of energy (e.g., “the fatigue is unbearable\") BIBREF10 . For each class, every annotation (9,473 tweets) is binarized as the positive class e.g., depressed mood=1 or negative class e.g., not depressed mood=0."
],
"extractive_spans": [],
"free_form_answer": "The annotations are based on evidence of depression and further annotated by the depressive symptom if there is evidence of depression",
"highlighted_evidence": [
"We leveraged an existing, annotated Twitter dataset that was constructed based on a hierarchical model of depression-related symptoms BIBREF12 , BIBREF13 . The dataset contains 9,473 annotations for 9,300 tweets. Each tweet is annotated as no evidence of depression (e.g., “Citizens fear an economic depression\") or evidence of depression (e.g., “depressed over disappointment\"). If a tweet is annotated evidence of depression, then it is further annotated with one or more depressive symptoms, for example, depressed mood (e.g., “feeling down in the dumps\"), disturbed sleep (e.g., “another restless night\"), or fatigue or loss of energy (e.g., “the fatigue is unbearable\") BIBREF10 . "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"7225302d7ea31d900251f8967bdb78e937787d86",
"dfb2985d789ba85e30c094683f0bb566982c5123"
],
"answer": [
{
"evidence": [
"Specifically, we conducted a feature ablation study to assess the informativeness of each feature group and a feature elimination study to determine the optimal feature sets for classifying Twitter tweets. We leveraged an existing, annotated Twitter dataset that was constructed based on a hierarchical model of depression-related symptoms BIBREF12 , BIBREF13 . The dataset contains 9,473 annotations for 9,300 tweets. Each tweet is annotated as no evidence of depression (e.g., “Citizens fear an economic depression\") or evidence of depression (e.g., “depressed over disappointment\"). If a tweet is annotated evidence of depression, then it is further annotated with one or more depressive symptoms, for example, depressed mood (e.g., “feeling down in the dumps\"), disturbed sleep (e.g., “another restless night\"), or fatigue or loss of energy (e.g., “the fatigue is unbearable\") BIBREF10 . For each class, every annotation (9,473 tweets) is binarized as the positive class e.g., depressed mood=1 or negative class e.g., not depressed mood=0."
],
"extractive_spans": [
"BIBREF12 , BIBREF13"
],
"free_form_answer": "",
"highlighted_evidence": [
"We leveraged an existing, annotated Twitter dataset that was constructed based on a hierarchical model of depression-related symptoms BIBREF12 , BIBREF13 . The dataset contains 9,473 annotations for 9,300 tweets."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Specifically, we conducted a feature ablation study to assess the informativeness of each feature group and a feature elimination study to determine the optimal feature sets for classifying Twitter tweets. We leveraged an existing, annotated Twitter dataset that was constructed based on a hierarchical model of depression-related symptoms BIBREF12 , BIBREF13 . The dataset contains 9,473 annotations for 9,300 tweets. Each tweet is annotated as no evidence of depression (e.g., “Citizens fear an economic depression\") or evidence of depression (e.g., “depressed over disappointment\"). If a tweet is annotated evidence of depression, then it is further annotated with one or more depressive symptoms, for example, depressed mood (e.g., “feeling down in the dumps\"), disturbed sleep (e.g., “another restless night\"), or fatigue or loss of energy (e.g., “the fatigue is unbearable\") BIBREF10 . For each class, every annotation (9,473 tweets) is binarized as the positive class e.g., depressed mood=1 or negative class e.g., not depressed mood=0."
],
"extractive_spans": [
"an existing, annotated Twitter dataset that was constructed based on a hierarchical model of depression-related symptoms BIBREF12 , BIBREF13"
],
"free_form_answer": "",
"highlighted_evidence": [
"We leveraged an existing, annotated Twitter dataset that was constructed based on a hierarchical model of depression-related symptoms BIBREF12 , BIBREF13 . The dataset contains 9,473 annotations for 9,300 tweets. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"Do they evaluate only on English datasets?",
"What are the three steps to feature elimination?",
"How is the dataset annotated?",
"What dataset is used for this study?"
],
"question_id": [
"00c57e45ac6afbdfa67350a57e81b4fad0ed2885",
"22714f6cad2d5c54c28823e7285dc85e8d6bc109",
"82642d3111287abf736b781043d49536fe48c350",
"5a81732d52f64e81f1f83e8fd3514251227efbc7"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"twitter",
"twitter",
"twitter",
"twitter"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Feature ablation study: for each class, we plotted the change of average F1-scores from the baseline reported in the titles by ablating each feature set. Black = point gains in F1; Purple = point losses in F1.",
"Figure 2: Feature elimination study: for each class, we plotted the change of average F1-scores for top features of percentiles by adding top-ranked features at 5% increments to the prediction model."
],
"file": [
"2-Figure1-1.png",
"4-Figure2-1.png"
]
} | [
"What are the three steps to feature elimination?",
"How is the dataset annotated?"
] | [
[
"1701.08229-Feature Elimination-3",
"1701.08229-Feature Elimination-1",
"1701.08229-Feature Elimination-2",
"1701.08229-Feature Elimination-0"
],
[
"1701.08229-METHODS-0"
]
] | [
"reduced the dataset by eliminating features, apply feature selection to select highest ranked features to train and test the model and rank the performance of incrementally adding features.",
"The annotations are based on evidence of depression and further annotated by the depressive symptom if there is evidence of depression"
] | 86 |
1709.07814 | Attention-based Wav2Text with Feature Transfer Learning | Conventional automatic speech recognition (ASR) typically performs multi-level pattern recognition tasks that map the acoustic speech waveform into a hierarchy of speech units. But, it is widely known that information loss in the earlier stage can propagate through the later stages. After the resurgence of deep learning, interest has emerged in the possibility of developing a purely end-to-end ASR system from the raw waveform to the transcription without any predefined alignments and hand-engineered models. However, the successful attempts in end-to-end architecture still used spectral-based features, while the successful attempts in using raw waveform were still based on the hybrid deep neural network - Hidden Markov model (DNN-HMM) framework. In this paper, we construct the first end-to-end attention-based encoder-decoder model to process directly from raw speech waveform to the text transcription. We call the model "Attention-based Wav2Text". To assist the training process of the end-to-end model, we propose to utilize feature transfer learning. Experimental results also reveal that the proposed Attention-based Wav2Text model, working directly on the raw waveform, can achieve a better result than the attentional encoder-decoder model trained on standard front-end filterbank features. | {
"paragraphs": [
[
"Conventional large-vocabulary continuous speech recognition (LVCSR) systems typically perform multi-level pattern recognition tasks that map the acoustic speech waveform into a hierarchy of speech units such as sub-words (phonemes), words, and strings of words (sentences). Such systems basically consist of several sub-components (feature extractor, acoustic model, pronunciation lexicon, language model) that are trained and tuned separately BIBREF0 . First, the speech signal is processed into a set of observation features based on a carefully hand-crafted feature extractor, such as Mel frequency cepstral coefficients (MFCC) or Mel-scale spectrogram. Then the acoustic model classifies the observation features into sub-unit or phoneme classes. Finally, the search algorithm finds the most probable word sequence based on the evidence of the acoustic model, the lexicon, and the language model. But, it is widely known that information loss in the earlier stage can propagate through the later stages.",
"Deep learning algorithms have produced many state-of-the-art performances in various tasks that have revitalized the use of neural networks for ASR. One of the important factors behind the popularity of deep learning is the possibility of simplifying many complicated hand-engineered models by letting DNNs find their way to map from input to output spaces. Interest has emerged recently in the possibility of learning DNN-based acoustic models directly from the raw speech waveform without any predefined alignments and hand-engineered models. In this way, the feature extractor and acoustic model can be integrated into a single architecture. Palaz et al. BIBREF1 , BIBREF2 proposed a convolutional neural network (CNN) to directly train an acoustic model from the raw speech waveform. Sainath et al. BIBREF3 used time-convolutional layers over raw speech and trained them jointly with the long short-term memory deep neural network (CLDNN) acoustic model. The results showed that raw waveform CLDNNs matched the performance of log-mel CLDNNs on a voice search task. Ghahremani et al. BIBREF4 recently proposed a CNN time-delay neural network (CNN-TDNN) with network-in-network (NIN) architecture, and also showed that their model outperformed MFCC-based TDNN on the Wall Street Journal (WSJ) BIBREF5 task. But despite significant progress that has been made, the successful models were mostly demonstrated only within the hybrid DNN-HMM speech recognition frameworks.",
"On the other hand, some existing works constructed end-to-end neural network models for ASR and replaced the acoustic model, the lexicon model, and the language model with a single integrated model, thus simplifying the pipeline. Graves et al. BIBREF6 , BIBREF7 successfully built an end-to-end ASR based on the connectionist temporal classification (CTC) framework. Amodei et al. BIBREF8 also constructed an end-to-end CTC-based ASR that directly produced character strings instead of phoneme sequences. But the CTC-based architecture still predicts the target outputs for every frame without any implicit knowledge about the language model. Another approach uses a sequence-to-sequence attention-based encoder-decoder that explicitly uses the history of previous outputs. Chorowski et al. BIBREF9 and Chan et al. BIBREF10 has successfully demonstrated encoder-decoder based ASR frameworks. Unfortunately, most of these works still used the standard spectral features (i.e., Mel-scale spectrogram, MFCC) as the input. The only attempt on end-to-end speech recognition for a raw waveform was recently proposed by BIBREF11 . Their system used a deep CNN and was trained with the automatic segmentation criterion (ASG) as an alternative to CTC. However, similar with CTC, the model did not explicitly use the history of the previous outputs assuming they were conditionally independent of each other. Furthermore, its performance was only reported using a very large data set (about 1000h of audio files).",
"To the best of our knowledge, few studies have explored a single end-to-end ASR architecture trained on raw speech waveforms to directly output text transcription, and none of those models were built based on an encoder-decoder architecture. In this paper, we take a step forward to construct an end-to-end ASR using an attentional-based encoder-decoder model for processing raw speech waveform, naming it as “Attention-based Wav2Text\". We investigate the performance of our proposed models on standard ASR datasets. In practice, optimizing an encoder-decoder framework is more difficult than a standard neural network architecture BIBREF10 . Therefore, we propose a feature transfer learning method to assist the training process for our end-to-end attention-based ASR model."
],
[
"The encoder-decoder model is a neural network that directly models conditional probability INLINEFORM0 where INLINEFORM1 is the source sequence with length INLINEFORM2 and INLINEFORM3 is the target sequence with length INLINEFORM4 . It consists of encoder, decoder and attention modules. The encoder task processes an input sequence INLINEFORM5 and outputs representative information INLINEFORM6 for the decoder. The attention module is an extension scheme that assists the decoder to find relevant information on the encoder side based on the current decoder hidden states BIBREF12 , BIBREF13 . Usually, the attention module produces context information INLINEFORM7 at time INLINEFORM8 based on the encoder and decoder hidden states: DISPLAYFORM0 ",
" There are several variations for score function : DISPLAYFORM0 ",
" where INLINEFORM0 , INLINEFORM1 is the number of hidden units for the encoder and INLINEFORM2 is the number of hidden units for the decoder. Finally, the decoder task, which predicts the target sequence probability at time INLINEFORM3 based on previous output and context information INLINEFORM4 , can be formulated as: DISPLAYFORM0 ",
"The most common input INLINEFORM0 for speech recognition tasks is a sequence of feature vectors such as log Mel-spectral spectrogram and/or MFCC. Therefore, INLINEFORM1 where D is the number of the features and S is the total length of the utterance in frames. The output INLINEFORM2 can be either phoneme or grapheme (character) sequence.",
"In this work, we use the raw waveform as the input representation instead of spectral-based features and a grapheme (character) sequence as the output representation. In contrast to most encoder-decoder architectures, which are purely based on recurrent neural network (RNNs) framework, we construct an encoder with several convolutional layers BIBREF14 followed by NIN layers BIBREF15 as the lower part in the encoder and integrate them with deep bidirectional long short-term memory (Bi-LSTM) BIBREF16 at the higher part. We use convolutional layers because they are suitable for extracting local information from raw speech. We use a striding mechanism to reduce the dimension from the input frames BIBREF17 , while the NIN layer represents more complex structures on the top of the convolutional layers. On the decoder side, we use a standard deep unidirectional LSTM with global attention BIBREF13 that is calculated by a multi-layer perceptron (MLP) as described in Eq. EQREF2 . For more details, we illustrate our architecture in Figure FIGREF4 ."
],
[
"Deep learning is well known for its ability to learn directly from low-level feature representation such as raw speech BIBREF1 , BIBREF3 . However, in most cases such models are already conditioned on a fixed input size and a single target output (i.e., predicting one phoneme class for each input frame). In the attention-based encoder-decoder model, the training process is not as easy as in a standard neural network model BIBREF10 because the attention-based model needs to jointly optimize three different modules simultaneously: (1) an encoder module for producing representative information from a source sequence; (2) an attention module for calculating the correct alignment; and (3) a decoder module for generating correct transcriptions. If one of these modules has difficulty fulfilling its own tasks, then the model will fail to produce good results.",
"To ease the burden on training the whole encoder-decoder architecture directly to predict the text transcription given the raw speech waveform, we utilize a transfer learning method on the encoder part. Specifically, we only train the encoder's lower layers consisting of the convolutional and NIN layers to predict the spectral features given the corresponding raw waveform. In this work, we utilize two widely used spectral features: MFCC and log Mel-scale spectrogram as the transfer learning target. Figure FIGREF5 shows our feature transfer learning architecture. First, given segmented raw speech waveform INLINEFORM0 , we extract corresponding INLINEFORM1 -dimensional spectral features INLINEFORM2 . Then we process raw speech INLINEFORM3 with several convolutions, followed by NIN layers in the encoder part. In the last NIN-layer, we set a fixed number of channels as INLINEFORM4 channels and apply mean-pooling across time. Finally, we get predictions for corresponding spectral features INLINEFORM5 and optimize all of the parameters by minimizing the mean squared error between predicted spectral features INLINEFORM6 and target spectral features INLINEFORM7 : DISPLAYFORM0 ",
"In this paper, we also explore multi target feature transfer using a similar structure as in Figure FIGREF5 but with two parallel NIN layers, followed by mean-polling at the end. One of the output layers is used to predicts log Mel-scale spectrogram and another predicts MFCC features. We modify the single target loss function from Eq. EQREF6 into the following: DISPLAYFORM0 ",
" where INLINEFORM0 are the predicted Mel-scale spectrogram and the MFCC values, and INLINEFORM1 are the real Mel-scale spectrogram and MFCC features for frame INLINEFORM2 . After optimizing all the convolutional and NIN layer parameters, we transfer the trained layers and parameters and integrate them with the Bi-LSTM encoder. Finally, we jointly optimize the whole structure together."
],
[
"In this study, we investigate the performance of our proposed models on WSJ BIBREF5 . We used the same definitions of the training, development and test set as the Kaldi s5 recipe BIBREF18 . The raw speech waveforms were segmented into multiple frames with a 25ms window size and a 10ms step size. We normalized the raw speech waveform into the range -1 to 1. For spectral based features such as MFCC and log Mel-spectrogram, we normalized the features for each dimension into zero mean and unit variance. For WSJ, we separated into two experiments by using WSJ-SI84 only and WSJ-SI284 data. We used dev_93 for our validation set and eval_92 for our test set. We used the character sequence as our decoder target and followed the preprocessing step proposed by BIBREF19 . The text from all the utterances was mapped into a 32-character set: 26 (a-z) alphabet, apostrophe, period, dash, space, noise, and “eos\"."
],
[
"Our attention-based Wav2Text architecture uses four convolutional layers, followed by two NIN layers at the lower part of the encoder module. For all the convolutional layers, we used a leaky rectifier unit (LReLU) BIBREF20 activation function with leakiness INLINEFORM0 . Inside the first NIN layers, we stacked three consecutive filters with LReLU activation function. For the second NIN layers, we stacked two consecutive filters with tanh and identity activation function. For the feature transfer learning training phase, we used Momentum SGD with a learning rate of 0.01 and momentum of 0.9. Table TABREF11 summarizes the details of the layer settings for the convolutional and NIN layers.",
"On the top layers of the encoder after the transferred convolutional and NIN layers, we put three bidirectional LSTMs (Bi-LSTM) with 256 hidden units (total 512 units for both directions). To reduce the computational time, we used hierarchical subsampling BIBREF21 , BIBREF22 , BIBREF10 . We applied subsampling on all the Bi-LSTM layers and reduced the length by a factor of 8.",
"On the decoder side, the previous input phonemes / characters were converted into real vectors by a 128-dimensional embedding matrix. We used one unidirectional LSTM with 512 hidden units and followed by a softmax layer to output the character probability. For the end-to-end training phase, we froze the parameter values from the transferred layers from epoch 0 to epoch 10, and after epoch 10 we jointly optimized all the parameters together until the end of training (a total 40 epochs). We used an Adam BIBREF23 optimizer with a learning rate of 0.0005.",
"In the decoding phase, we used a beam search strategy with beam size INLINEFORM0 and we adjusted the score by dividing with the transcription length to prevent the decoder from favoring shorter transcriptions. We did not use any language model or lexicon dictionary for decoding. All of our models were implemented on the PyTorch framework .",
"For comparison, we also evaluated the standard attention-based encoder decoder with Mel-scale spectrogram input as the baseline. Here, we used similar settings as the proposed model, except we replaced the convolutional and NIN layers with a feedforward layer (512 hidden units)."
],
[
"An example of our transfer learning results is shown in Figure FIGREF8 , and Table TABREF14 shows the speech recognition performance in CER for both the WSJ-SI84 and WSJ-SI284 datasets. We compared our method with several published models like CTC, Attention Encoder-Decoder and Joint CTC-Attention model that utilize CTC for training the encoder part. Besides, we also train our own baseline Attention Encoder-Decoder with Mel-scale spectrogram. The difference between our Attention Encoder-Decoder (“Att Enc-Dec (ours)\", “Att Enc-Dec Wav2Text\") with Attention Encoder-Decoder from BIBREF24 (“Att Enc-Dec Content\", “Att Enc-Dec Location\") is we used the current hidden states to generate the attention vector instead of the previous hidden states. Another addition is we utilized “input feedback\" method BIBREF13 by concatenating the previous context vector into the current input along with the character embedding vector. By using those modifications, we are able to improve the baseline performance.",
"Our proposed Wav2Text models without any transfer learning failed to converge. In contrast, with transfer learning, they significantly surpassed the performance of the CTC and encoder-decoder from Mel-scale spectrogram features. This suggests that by using transfer learning for initializing the lower part of the encoder parameters, our model also performed better then their original features."
],
[
"Transfer learning is the ability of a learning algorithm to convey knowledge across different tasks. The initial idea is to reuse previously obtained knowledge to enhance the learning for new things. The standard procedure are : first, train the model on a base dataset and task, then the learned features and/or parameters are reused for learning a second target dataset and task. Bengio et al. BIBREF25 provided deep reviews about multi-task and transfer learning on deep learning models. Jason et al. BIBREF26 showed that a model with transferred parameter consistently outperformed a randomly initialized one.",
"In speech recognition research, transfer learning has been studied for many years, including successful cases of speaker adaptation and cross-lingual acoustic modeling BIBREF27 . One popular scheme for utilizing DNNs for transfer learning within ASR frameworks is a tandem approach BIBREF28 . This idea first trains a DNN with a narrow hidden bottleneck layer to perform phoneme classification at the frame level and then reuses the activations from the narrow hidden bottleneck layer as discriminative features in conventional GMM-HMM or hybrid DNN-HMM models BIBREF29 . Another study introduced a convolutional bottleneck network as an alternative tandem bottleneck feature architecture BIBREF30 . However, although such a feature transfer learning framework provides many advantages in ASR, the usage in an end-to-end attention-based ASR framework has not been explored.",
"This study performs feature transfer learning on the encoder part of the end-to-end attention-based ASR architecture. We train the convolutional encoder to predict the spectral features given the corresponding raw speech waveform. After that, we transfer the trained layers and parameters, integrate them with the LSTM encoder-decoder, and eventually optimize the whole structure to predict the correct output text transcription given the raw speech waveform."
],
[
"This paper described the first attempt to build an end-to-end attention-based encoder-decoder speech recognition that directly predicts the text transcription given raw speech input. We also proposed feature transfer learning to assist the encoder-decoder model training process and presented a novel architecture that combined convolutional, NIN and Bi-LSTM layers into a single encoder part for raw speech recognition. Our results suggest that transfer learning is a very helpful method for constructing an end-to-end system from such low-level features as raw speech signals. With transferred parameters, our proposed attention-based Wav2Text models converged and matched the performance with the attention-based encoder-decoder model trained on standard spectral-based features. The best performance was achieved by Wav2Text models with transfer learning from multi target scheme."
],
[
"Part of this work was supported by JSPS KAKENHI Grant Numbers JP17H06101 and JP 17K00237. "
]
],
"section_name": [
"Introduction",
"Attention-based Encoder Decoder for Raw Speech Recognition",
"Feature Transfer Learning",
"Speech Data",
"Model Architectures",
"Result",
"Related Work",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"877bb992dca3930c942dde49a5fa3093b1f8c2b8",
"8e9701a99dcdcd6c13811e281bf23cc3d1409bb3"
],
"answer": [
{
"evidence": [
"In this work, we use the raw waveform as the input representation instead of spectral-based features and a grapheme (character) sequence as the output representation. In contrast to most encoder-decoder architectures, which are purely based on recurrent neural network (RNNs) framework, we construct an encoder with several convolutional layers BIBREF14 followed by NIN layers BIBREF15 as the lower part in the encoder and integrate them with deep bidirectional long short-term memory (Bi-LSTM) BIBREF16 at the higher part. We use convolutional layers because they are suitable for extracting local information from raw speech. We use a striding mechanism to reduce the dimension from the input frames BIBREF17 , while the NIN layer represents more complex structures on the top of the convolutional layers. On the decoder side, we use a standard deep unidirectional LSTM with global attention BIBREF13 that is calculated by a multi-layer perceptron (MLP) as described in Eq. EQREF2 . For more details, we illustrate our architecture in Figure FIGREF4 ."
],
"extractive_spans": [
"we construct an encoder with several convolutional layers BIBREF14 followed by NIN layers BIBREF15 as the lower part in the encoder and integrate them with deep bidirectional long short-term memory (Bi-LSTM) BIBREF16 at the higher part",
"On the decoder side, we use a standard deep unidirectional LSTM with global attention BIBREF13 that is calculated by a multi-layer perceptron (MLP)"
],
"free_form_answer": "",
"highlighted_evidence": [
" In contrast to most encoder-decoder architectures, which are purely based on recurrent neural network (RNNs) framework, we construct an encoder with several convolutional layers BIBREF14 followed by NIN layers BIBREF15 as the lower part in the encoder and integrate them with deep bidirectional long short-term memory (Bi-LSTM) BIBREF16 at the higher part.",
"On the decoder side, we use a standard deep unidirectional LSTM with global attention BIBREF13 that is calculated by a multi-layer perceptron (MLP) as described in Eq. EQREF2 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"On the top layers of the encoder after the transferred convolutional and NIN layers, we put three bidirectional LSTMs (Bi-LSTM) with 256 hidden units (total 512 units for both directions). To reduce the computational time, we used hierarchical subsampling BIBREF21 , BIBREF22 , BIBREF10 . We applied subsampling on all the Bi-LSTM layers and reduced the length by a factor of 8.",
"On the decoder side, the previous input phonemes / characters were converted into real vectors by a 128-dimensional embedding matrix. We used one unidirectional LSTM with 512 hidden units and followed by a softmax layer to output the character probability. For the end-to-end training phase, we froze the parameter values from the transferred layers from epoch 0 to epoch 10, and after epoch 10 we jointly optimized all the parameters together until the end of training (a total 40 epochs). We used an Adam BIBREF23 optimizer with a learning rate of 0.0005."
],
"extractive_spans": [],
"free_form_answer": "In encoder they use convolutional, NIN and bidirectional LSTM layers and in decoder they use unidirectional LSTM ",
"highlighted_evidence": [
"On the top layers of the encoder after the transferred convolutional and NIN layers, we put three bidirectional LSTMs (Bi-LSTM) with 256 hidden units (total 512 units for both directions).",
"On the decoder side, the previous input phonemes / characters were converted into real vectors by a 128-dimensional embedding matrix. We used one unidirectional LSTM with 512 hidden units and followed by a softmax layer to output the character probability. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"93bd1cd5ca13bd0ca1be065adb2054f442901731",
"d561c851065ba5e7a415d9f86155c9cbf50237b5"
],
"answer": [
{
"evidence": [
"where INLINEFORM0 , INLINEFORM1 is the number of hidden units for the encoder and INLINEFORM2 is the number of hidden units for the decoder. Finally, the decoder task, which predicts the target sequence probability at time INLINEFORM3 based on previous output and context information INLINEFORM4 , can be formulated as: DISPLAYFORM0"
],
"extractive_spans": [
"decoder task, which predicts the target sequence probability at time INLINEFORM3 based on previous output and context information"
],
"free_form_answer": "",
"highlighted_evidence": [
"Finally, the decoder task, which predicts the target sequence probability at time INLINEFORM3 based on previous output and context information INLINEFORM4 , can be formulated as: DISPLAYFORM0"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"where INLINEFORM0 , INLINEFORM1 is the number of hidden units for the encoder and INLINEFORM2 is the number of hidden units for the decoder. Finally, the decoder task, which predicts the target sequence probability at time INLINEFORM3 based on previous output and context information INLINEFORM4 , can be formulated as: DISPLAYFORM0",
"The most common input INLINEFORM0 for speech recognition tasks is a sequence of feature vectors such as log Mel-spectral spectrogram and/or MFCC. Therefore, INLINEFORM1 where D is the number of the features and S is the total length of the utterance in frames. The output INLINEFORM2 can be either phoneme or grapheme (character) sequence.",
"In the decoding phase, we used a beam search strategy with beam size INLINEFORM0 and we adjusted the score by dividing with the transcription length to prevent the decoder from favoring shorter transcriptions. We did not use any language model or lexicon dictionary for decoding. All of our models were implemented on the PyTorch framework ."
],
"extractive_spans": [],
"free_form_answer": "Decoder predicts the sequence of phoneme or grapheme at each time based on the previous output and context information with a beam search strategy",
"highlighted_evidence": [
"Finally, the decoder task, which predicts the target sequence probability at time INLINEFORM3 based on previous output and context information INLINEFORM4 , can be formulated as: DISPLAYFORM0",
"The output INLINEFORM2 can be either phoneme or grapheme (character) sequence.",
"In the decoding phase, we used a beam search strategy with beam size INLINEFORM0 and we adjusted the score by dividing with the transcription length to prevent the decoder from favoring shorter transcriptions."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"68466f3b3a82ea3fd18ea7d58c75ae75421a3920",
"823020dac00a05e9b5134f5d01ea4c3cdd1e1cf5"
],
"answer": [
{
"evidence": [
"In this study, we investigate the performance of our proposed models on WSJ BIBREF5 . We used the same definitions of the training, development and test set as the Kaldi s5 recipe BIBREF18 . The raw speech waveforms were segmented into multiple frames with a 25ms window size and a 10ms step size. We normalized the raw speech waveform into the range -1 to 1. For spectral based features such as MFCC and log Mel-spectrogram, we normalized the features for each dimension into zero mean and unit variance. For WSJ, we separated into two experiments by using WSJ-SI84 only and WSJ-SI284 data. We used dev_93 for our validation set and eval_92 for our test set. We used the character sequence as our decoder target and followed the preprocessing step proposed by BIBREF19 . The text from all the utterances was mapped into a 32-character set: 26 (a-z) alphabet, apostrophe, period, dash, space, noise, and “eos\"."
],
"extractive_spans": [
"WSJ"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this study, we investigate the performance of our proposed models on WSJ BIBREF5 . "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"An example of our transfer learning results is shown in Figure FIGREF8 , and Table TABREF14 shows the speech recognition performance in CER for both the WSJ-SI84 and WSJ-SI284 datasets. We compared our method with several published models like CTC, Attention Encoder-Decoder and Joint CTC-Attention model that utilize CTC for training the encoder part. Besides, we also train our own baseline Attention Encoder-Decoder with Mel-scale spectrogram. The difference between our Attention Encoder-Decoder (“Att Enc-Dec (ours)\", “Att Enc-Dec Wav2Text\") with Attention Encoder-Decoder from BIBREF24 (“Att Enc-Dec Content\", “Att Enc-Dec Location\") is we used the current hidden states to generate the attention vector instead of the previous hidden states. Another addition is we utilized “input feedback\" method BIBREF13 by concatenating the previous context vector into the current input along with the character embedding vector. By using those modifications, we are able to improve the baseline performance."
],
"extractive_spans": [
"WSJ-SI84",
"WSJ-SI284"
],
"free_form_answer": "",
"highlighted_evidence": [
"An example of our transfer learning results is shown in Figure FIGREF8 , and Table TABREF14 shows the speech recognition performance in CER for both the WSJ-SI84 and WSJ-SI284 datasets."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Which architecture do they use for the encoder and decoder?",
"How does their decoder generate text?",
"Which dataset do they use?"
],
"question_id": [
"1b23c4535a6c10eb70bbc95313c465e4a547db5e",
"0a75a52450ed866df3a304077769e1725a995bb7",
"fd0a3e9c210163a55d3ed791e95ae3875184b8f8"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Fig. 1. Attention-based Wav2Text architecture.",
"Fig. 2. Feature transfer learning: train lower layers of the encoder (convolutional and NIN layers) to predict spectral features given corresponding raw waveform; then transfer the trained layers and parameters (marked by orange square) into attention-based encoder decoder model (see Figure 1).",
"Fig. 3. Example of our transfer learning model output: top is the original Mel-spectrogram, and bottom is the predicted Melspectrogram.",
"Table 1. Layer setting details for convolutional and NIN layers. Sorted from the input layer to the output layer.",
"Table 2. Character error rate (CER) result from baseline and proposed models on WSJ0 and WSJ1 dataset. All of these results are produced without using language model or lexicon dictionary. Word error rate (WER) for Att Wav2Text + transfer multi-target is 17.04%, compared to Joint CTC+Att (MTL)[25] 18.2% and standard Enc-Dec Att [23] 18.6%."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"4-Figure3-1.png",
"4-Table1-1.png",
"5-Table2-1.png"
]
} | [
"Which architecture do they use for the encoder and decoder?",
"How does their decoder generate text?"
] | [
[
"1709.07814-Model Architectures-1",
"1709.07814-Model Architectures-2",
"1709.07814-Attention-based Encoder Decoder for Raw Speech Recognition-4"
],
[
"1709.07814-Model Architectures-3",
"1709.07814-Attention-based Encoder Decoder for Raw Speech Recognition-3"
]
] | [
"In encoder they use convolutional, NIN and bidirectional LSTM layers and in decoder they use unidirectional LSTM ",
"Decoder predicts the sequence of phoneme or grapheme at each time based on the previous output and context information with a beam search strategy"
] | 88 |
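The feature transfer learning stage described in this record (pre-training the convolutional and NIN layers to predict spectral features from the raw waveform, with a multi-target variant that sums the Mel-scale spectrogram and MFCC mean-squared errors) could look roughly like the following PyTorch sketch. The layer counts, kernel sizes, strides, leakiness value, and the `n_mel`/`n_mfcc` dimensions are placeholder assumptions and do not reproduce the paper's Table 1 configuration.

```python
import torch.nn as nn

class ConvNINFrontEnd(nn.Module):
    """Stand-in for the lower encoder: convolutional layers plus two parallel
    NIN-style (1x1 convolution) heads used as multi-target regression outputs."""
    def __init__(self, n_mel=40, n_mfcc=13):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 64, kernel_size=11, stride=2, padding=5), nn.LeakyReLU(0.01),
            nn.Conv1d(64, 64, kernel_size=11, stride=2, padding=5), nn.LeakyReLU(0.01),
            nn.Conv1d(64, 128, kernel_size=11, stride=2, padding=5), nn.LeakyReLU(0.01),
        )
        self.mel_head = nn.Conv1d(128, n_mel, kernel_size=1)    # predicts log Mel-spectrogram
        self.mfcc_head = nn.Conv1d(128, n_mfcc, kernel_size=1)  # predicts MFCC

    def forward(self, wav_frames):
        # wav_frames: (batch, 1, samples_per_frame), one segmented raw-speech frame each.
        h = self.conv(wav_frames)
        # Mean-pool across time so every frame yields a single spectral-feature vector.
        return self.mel_head(h).mean(dim=-1), self.mfcc_head(h).mean(dim=-1)

def multi_target_transfer_loss(model, wav_frames, mel_target, mfcc_target):
    """Sum of the two mean-squared errors, mirroring the multi-target objective."""
    mel_pred, mfcc_pred = model(wav_frames)
    mse = nn.MSELoss()
    return mse(mel_pred, mel_target) + mse(mfcc_pred, mfcc_target)
```

After this pre-training converges, the trained convolutional and NIN parameters would be copied into the attention-based encoder-decoder and, as the record describes, kept frozen for the first epochs before joint optimization of the whole network.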
1610.09225 | Sentiment Analysis of Twitter Data for Predicting Stock Market Movements | Predicting stock market movements is a well-known problem of interest. Nowadays, social media is perfectly representing the public sentiment and opinion about current events. Especially, twitter has attracted a lot of attention from researchers for studying the public sentiments. Stock market prediction on the basis of public sentiments expressed on twitter has been an intriguing field of research. Previous studies have concluded that the aggregate public mood collected from twitter may well be correlated with Dow Jones Industrial Average Index (DJIA). The thesis of this work is to observe how well the changes in stock prices of a company, the rises and falls, are correlated with the public opinions being expressed in tweets about that company. Understanding the author's opinion from a piece of text is the objective of sentiment analysis. The present paper has employed two different textual representations, Word2vec and N-gram, for analyzing the public sentiments in tweets. In this paper, we have applied sentiment analysis and supervised machine learning principles to the tweets extracted from twitter and analyzed the correlation between stock market movements of a company and sentiments in tweets. In an elaborate way, positive news and tweets in social media about a company would definitely encourage people to invest in the stocks of that company and as a result the stock price of that company would increase. At the end of the paper, it is shown that a strong correlation exists between the rises and falls in stock prices and the public sentiments in tweets. | {
"paragraphs": [
[
"Earlier studies on stock market prediction are based on the historical stock prices. Later studies have debunked the approach of predicting stock market movements using historical prices. Stock market prices are largely fluctuating. The efficient market hypothesis (EMH) states that financial market movements depend on news, current events and product releases and all these factors will have a significant impact on a company's stock value BIBREF0 . Because of the lying unpredictability in news and current events, stock market prices follow a random walk pattern and cannot be predicted with more than 50% accuracy BIBREF1 .",
"With the advent of social media, the information about public feelings has become abundant. Social media is transforming like a perfect platform to share public emotions about any topic and has a significant impact on overall public opinion. Twitter, a social media platform, has received a lot of attention from researchers in the recent times. Twitter is a micro-blogging application that allows users to follow and comment other users thoughts or share their opinions in real time BIBREF2 . More than million users post over 140 million tweets every day. This situation makes Twitter like a corpus with valuable data for researchers BIBREF3 .Each tweet is of 140 characters long and speaks public opinion on a topic concisely. The information exploited from tweets are very useful for making predictions BIBREF4 .",
"In this paper, we contribute to the field of sentiment analysis of twitter data. Sentiment classification is the task of judging opinion in a piece of text as positive, negative or neutral.",
"There are many studies involving twitter as a major source for public-opinion analysis. Asur and Huberman BIBREF5 have predicted box office collections for a movie prior to its release based on public sentiment related to movies, as expressed on Twitter. Google flu trends are being widely studied along with twitter for early prediction of disease outbreaks. Eiji et al. BIBREF6 have studied the twitter data for catching the flu outbreaks. Ruiz et al. BIBREF7 have used time-constrained graphs to study the problem of correlating the Twitter micro-blogging activity with changes in stock prices and trading volumes. Bordino et al. BIBREF8 have shown that trading volumes of stocks traded in NASDAQ-100 are correlated with their query volumes (i.e., the number of users requests submitted to search engines on the Internet). Gilbert and Karahalios BIBREF9 have found out that increases in expressions of anxiety, worry and fear in weblogs predict downward pressure on the S&P 500 index. Bollen BIBREF10 showed that public mood analyzed through twitter feeds is well correlated with Dow Jones Industrial Average (DJIA). All these studies showcased twitter as a valuable source and a powerful tool for conducting studies and making predictions.",
"Rest of the paper is organized as follows. Section 2 describes the related works and Section 3 discusses the data portion demonstrating the data collection and pre-processing part. In Section 4 we discuss the sentiment analysis part in our work followed by Section 5 which examines the correlation part of extracted sentiment with stocks. In Section 6 we present the results, accuracy and precision of our sentiment analyzer followed by the accuracy of correlation analyzer. In Section 7 we present our conclusions and Section 8 deals with our future work plan.",
"",
""
],
[
"The most well-known publication in this area is by Bollen BIBREF10 . They investigated whether the collective mood states of public (Happy, calm, Anxiety) derived from twitter feeds are correlated to the value of the Dow Jones Industrial Index. They used a Fuzzy neural network for their prediction. Their results show that public mood states in twitter are strongly correlated with Dow Jones Industrial Index. Chen and Lazer BIBREF11 derived investment strategies by observing and classifying the twitter feeds. Bing et al. BIBREF12 studied the tweets and concluded the predictability of stock prices based on the type of industry like Finance, IT etc. Zhang BIBREF13 found out a high negative correlation between mood states like hope, fear and worry in tweets with the Dow Jones Average Index. Recently, Brian et al. BIBREF14 investigated the correlation of sentiments of public with stock increase and decreases using Pearson correlation coefficient for stocks. In this paper, we took a novel approach of predicting rise and fall in stock prices based on the sentiments extracted from twitter to find the correlation. The core contribution of our work is the development of a sentiment analyzer which works better than the one in Brian's work and a novel approach to find the correlation. Sentiment analyzer is used to classify the sentiments in tweets extracted.The human annotated dataset in our work is also exhaustive. We have shown that a strong correlation exists between twitter sentiments and the next day stock prices in the results section. We did so by considering the tweets and stock opening and closing prices of Microsoft over a year."
],
[
"A total of 2,50,000 tweets over a period of August 31st, 2015 to August 25th,2016 on Microsoft are extracted from twitter API BIBREF15 . Twitter4J is a java application which helps us to extract tweets from twitter. The tweets were collected using Twitter API and filtered using keywords like $ MSFT, # Microsoft, #Windows etc. Not only the opinion of public about the company's stock but also the opinions about products and services offered by the company would have a significant impact and are worth studying. Based on this principle, the keywords used for filtering are devised with extensive care and tweets are extracted in such a way that they represent the exact emotions of public about Microsoft over a period of time. The news on twitter about Microsoft and tweets regarding the product releases were also included. Stock opening and closing prices of Microsoft from August 31st, 2015 to August 25th, 2016 are obtained from Yahoo! Finance BIBREF16 ."
],
[
"Stock prices data collected is not complete understandably because of weekends and public holidays when the stock market does not function. The missing data is approximated using a simple technique by Goel BIBREF17 . Stock data usually follows a concave function. So, if the stock value on a day is x and the next value present is y with some missing in between. The first missing value is approximated to be (y+x)/2 and the same method is followed to fill all the gaps.",
"Tweets consists of many acronyms, emoticons and unnecessary data like pictures and URL's. So tweets are preprocessed to represent correct emotions of public. For preprocessing of tweets we employed three stages of filtering: Tokenization, Stopwords removal and regex matching for removing special characters.",
"Tweets are split into individual words based on the space and irrelevant symbols like emoticons are removed. We form a list of individual words for each tweet.",
"Words that do not express any emotion are called Stopwords. After splitting a tweet, words like a,is, the, with etc. are removed from the list of words.",
"Regex matching in Python is performed to match URL’s and are replaced by the term URL. Often tweets consists of hashtags(#) and @ addressing other users. They are also replaced suitably. For example, #Microsoft is replaced with Microsoft and @Billgates is replaced with USER. Prolonged word showing intense emotions like coooooooool! is replaced with cool! After these stages the tweets are ready for sentiment classification."
],
[
"Sentiment analysis task is very much field specific. There is lot of research on sentiment analysis of movie reviews and news articles and many sentiment analyzers are available as an open source. The main problem with these analyzers is that they are trained with a different corpus. For instance, Movie corpus and stock corpus are not equivalent. So, we developed our own sentiment analyzer.",
"Tweets are classified as positive, negative and neutral based on the sentiment present BIBREF18 . 3,216 tweets out of the total tweets are examined by humans and annotated as 1 for Positive, 0 for Neutral and 2 for Negative emotions. For classification of nonhuman annotated tweets a machine learning model is trained whose features are extracted from the human annotated tweets."
],
[
"Textual representations are done using two methods:n-grams and Word2vec",
"N-gram representation is known for its specificity to match the corpus of text being studied. In these techniques a full corpus of related text is parsed which are tweets in the present work, and every appearing word sequence of length n is extracted from the tweets to form a dictionary of words and phrases. For example the text “Microsoft is launching a new product\" has the following 3-gram word features:“Microsoft is launching\", “is launching a\", “launching a new\" and “a new product\". In our case, N-grams for all the tweets form the corpus. In this representation, tweet is split into N-grams and the features to the model are a string of 1’s and 0’s where 1 represents the presence of that N-gram of the tweet in the corpus and a 0 indicates the absence.",
"Word2vec representation is far better, advanced and a recent technique which functions by mapping words to a 300 dimensional vector representations. Once every word of the language has been mapped to a unique vector, vectors of words can be summed up yielding a resultant vector for any given collection of words BIBREF19 . Relationship between the words is exactly retained in this form of representation. Word vectors difference between Rome and Italy is very close to the difference between vectors of France and Paris This sustained relationship between word concepts makes word2vec model very attractive for textual analysis. In this representation, resultant vector which is sum of 300 dimensional vectors of all words in a tweet acts as features to the model."
],
[
"The features extracted using the above methods for the human annotated tweets are fed to the classifier and trained using random forest algorithm. Both the textual representations performed well and the results are comparable. Out of the two, model trained with word2vec representation is picked because of its sustainability of meaning and promising performance over large datasets. The results of sentiment classification are discussed in the following sections. The devised classifier is used to predict the emotions of non-human annotated tweets. Table-1 shows a sample of annotated tweets by the sentiment analyzer."
],
[
"The stock price data of Microsoft are labeled suitably for training using a simple program. If the previous day stock price is more than the current day stock price, the current day is marked with a numeric value of 0, else marked with a numeric value of 1. Now, this correlation analysis turns out to be a classification problem. The total positive, negative and neutral emotions in tweets in a 3 day period are calculated successively which are used as features for the classifier model and the output is the labeled next day value of stock 0 or 1.The window size is experimented and best results are achieved when the sentiment values precede 3 days to the stock price. A total of 355 instances, each with 3 attributes are fed to the classifier with a split proportions of 80% train dataset and the remaining dataset for testing. The accuracy of the classifier is discussed in the results section."
],
[
"This section gives an overview of accuracy rates of the trained classifiers. All the calculations are done in Weka tool which runs on java virtual machine BIBREF20 "
],
[
"The above sections discussed the method followed to train the classifier used for sentiment analysis of tweets. The classifier with features as Word2vec representations of human annotated tweets trained on Random Forest algorithm with a split percentage of 90 for training the model and remaining for testing the model showed an accuracy of 70.2%. With N-gram representations, the classifier model with same algorithm and with same dataset showed an accuracy of 70.5%. Though the results are very close, model trained with word2vec representations is picked to classify the nonhuman annotated tweets because of its promising accuracy for large datasets and the sustainability in word meaning. Numerous studies have been conducted on people and they concluded that the rate of human concordance, that is the degree of agreement among humans on the sentiment of a text, is between 70% and 79% BIBREF21 . They have also synthesized that sentiment analyzers above 70% are very accurate in most of the cases. Provided this information, the results we obtained from the sentiment classification can be observed as very good figures while predicting the sentiments in short texts, tweets, less than 140 characters in length. Table-2 depicts the results of sentiment classification including accuracy, precision, F-measure and recall when trained with different machine learning algorithms. ROC curves are plotted for detailed analysis."
],
[
"A classifier is presented in the previous sections that is trained with aggregate sentiment values for 3-day period as features and the increase/decrease in stock price represented by 1/0 as the output. Total data is split into two parts, 80 percent to train the model and remaining for testing operations. The classifier results show an accuracy value of 69.01% when trained using Logistic regression algorithm and the accuracy rate varied with the training set. When the model with LibSVM is trained with 90 percent of data, it gave a result of 71.82%. These results give a significant edge to the investors and they show good correlation between stock market movements and the sentiments of public expressed in twitter. This trend shows that with increasing dataset the models are performing well. We would like to incorporate more data in our future work."
],
[
"In this paper, we have shown that a strong correlation exists between rise/fall in stock prices of a company to the public opinions or emotions about that company expressed on twitter through tweets. The main contribution of our work is the development of a sentiment analyzer that can judge the type of sentiment present in the tweet. The tweets are classified into three categories: positive, negative and neutral. At the beginning, we claimed that positive emotions or sentiment of public in twitter about a company would reflect in its stock price. Our speculation is well supported by the results achieved and seems to have a promising future in research."
],
[
"In this work, we have considered only twitter data for analyzing people's sentiment which may be biased because not all the people who trade in stocks share their opinions on twitter. Stocktwits BIBREF22 is a financial communication platform designed solely for sharing ideas and insights of investors, entrepreneurs and traders. The current study can be extended by incorporating stocktwits data. In addition to this, data from news can also be included for an exhaustive public opinion collection.",
"While training the sentiment analyzer, 3,216 tweets are used which is comparatively a less number to train a sentiment analyzer. In future, we look forward to human annotate more than 10,000 tweets and train the classifiers. With increasing size of training datasets, the models tend to perform better."
],
[
"The authors would like to thank the students of IIT Bhubaneswar who contributed to the human annotation of tweets."
]
],
"section_name": [
"Introduction",
"Related Work",
"Data Collection",
"Data Pre-Processing",
"Sentiment Analysis",
"Feature Extraction",
"Model Training",
"Correlation Analysis of Price and Sentiment ",
"Results and Discussion",
"Sentiment Analyzer Results",
"Stock Price and Sentiment Correlation Results",
"Conclusion",
"Future Work",
"Acknowledgment"
]
} | {
"answers": [
{
"annotation_id": [
"7d44fc30613500f317a3a35deab74903ad85f5a5",
"ff29318c9c5545f99c4306c04595c3f01e27c52c"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"Data Pre-Processing",
"Stock prices data collected is not complete understandably because of weekends and public holidays when the stock market does not function. The missing data is approximated using a simple technique by Goel BIBREF17 . Stock data usually follows a concave function. So, if the stock value on a day is x and the next value present is y with some missing in between. The first missing value is approximated to be (y+x)/2 and the same method is followed to fill all the gaps."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Data Pre-Processing\nStock prices data collected is not complete understandably because of weekends and public holidays when the stock market does not function. The missing data is approximated using a simple technique by Goel BIBREF17 . Stock data usually follows a concave function. So, if the stock value on a day is x and the next value present is y with some missing in between. The first missing value is approximated to be (y+x)/2 and the same method is followed to fill all the gaps."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"6f5545ce74c54ca7399ece882f170e5710e3c100",
"de4ed5a5b68ddc8e22c35ea2612f7be4f781551e"
],
"answer": [
{
"evidence": [
"Word2vec representation is far better, advanced and a recent technique which functions by mapping words to a 300 dimensional vector representations. Once every word of the language has been mapped to a unique vector, vectors of words can be summed up yielding a resultant vector for any given collection of words BIBREF19 . Relationship between the words is exactly retained in this form of representation. Word vectors difference between Rome and Italy is very close to the difference between vectors of France and Paris This sustained relationship between word concepts makes word2vec model very attractive for textual analysis. In this representation, resultant vector which is sum of 300 dimensional vectors of all words in a tweet acts as features to the model."
],
"extractive_spans": [
"300"
],
"free_form_answer": "",
"highlighted_evidence": [
"Word2vec representation is far better, advanced and a recent technique which functions by mapping words to a 300 dimensional vector representations."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Word2vec representation is far better, advanced and a recent technique which functions by mapping words to a 300 dimensional vector representations. Once every word of the language has been mapped to a unique vector, vectors of words can be summed up yielding a resultant vector for any given collection of words BIBREF19 . Relationship between the words is exactly retained in this form of representation. Word vectors difference between Rome and Italy is very close to the difference between vectors of France and Paris This sustained relationship between word concepts makes word2vec model very attractive for textual analysis. In this representation, resultant vector which is sum of 300 dimensional vectors of all words in a tweet acts as features to the model."
],
"extractive_spans": [
"300"
],
"free_form_answer": "",
"highlighted_evidence": [
"Word2vec representation is far better, advanced and a recent technique which functions by mapping words to a 300 dimensional vector representations."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"d03bc79638cb60b59ac4b7c9714e11b6ed56e136",
"e7e7b0b7a2ecb7db738187755f4f71de067d1dd8"
],
"answer": [
{
"evidence": [
"A total of 2,50,000 tweets over a period of August 31st, 2015 to August 25th,2016 on Microsoft are extracted from twitter API BIBREF15 . Twitter4J is a java application which helps us to extract tweets from twitter. The tweets were collected using Twitter API and filtered using keywords like $ MSFT, # Microsoft, #Windows etc. Not only the opinion of public about the company's stock but also the opinions about products and services offered by the company would have a significant impact and are worth studying. Based on this principle, the keywords used for filtering are devised with extensive care and tweets are extracted in such a way that they represent the exact emotions of public about Microsoft over a period of time. The news on twitter about Microsoft and tweets regarding the product releases were also included. Stock opening and closing prices of Microsoft from August 31st, 2015 to August 25th, 2016 are obtained from Yahoo! Finance BIBREF16 ."
],
"extractive_spans": [
"2,50,000 tweets",
"Stock opening and closing prices of Microsoft from August 31st, 2015 to August 25th, 2016"
],
"free_form_answer": "",
"highlighted_evidence": [
"A total of 2,50,000 tweets over a period of August 31st, 2015 to August 25th,2016 on Microsoft are extracted from twitter API BIBREF15 .",
"The news on twitter about Microsoft and tweets regarding the product releases were also included. Stock opening and closing prices of Microsoft from August 31st, 2015 to August 25th, 2016 are obtained from Yahoo! Finance BIBREF16 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"A total of 2,50,000 tweets over a period of August 31st, 2015 to August 25th,2016 on Microsoft are extracted from twitter API BIBREF15 . Twitter4J is a java application which helps us to extract tweets from twitter. The tweets were collected using Twitter API and filtered using keywords like $ MSFT, # Microsoft, #Windows etc. Not only the opinion of public about the company's stock but also the opinions about products and services offered by the company would have a significant impact and are worth studying. Based on this principle, the keywords used for filtering are devised with extensive care and tweets are extracted in such a way that they represent the exact emotions of public about Microsoft over a period of time. The news on twitter about Microsoft and tweets regarding the product releases were also included. Stock opening and closing prices of Microsoft from August 31st, 2015 to August 25th, 2016 are obtained from Yahoo! Finance BIBREF16 ."
],
"extractive_spans": [],
"free_form_answer": "Collected tweets and opening and closing stock prices of Microsoft.",
"highlighted_evidence": [
"A total of 2,50,000 tweets over a period of August 31st, 2015 to August 25th,2016 on Microsoft are extracted from twitter API BIBREF15 .",
"Stock opening and closing prices of Microsoft from August 31st, 2015 to August 25th, 2016 are obtained from Yahoo! Finance BIBREF16 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Do they remove seasonality from the time series?",
"What is the dimension of the embeddings?",
"What dataset is used to train the model?"
],
"question_id": [
"3611a72f754de1e256fbd25b012197e1c24e8470",
"4c07c33dfaf4f3e6db55e377da6fa69825d0ba15",
"b1ce129678e37070e69f01332f1a8587e18e06b0"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Fig. 1: Flow Chart of the proposed analysis",
"TABLE I: Sample tweets sentiment labeling by the model",
"TABLE II: Sentiment Analysis Results"
],
"file": [
"3-Figure1-1.png",
"3-TableI-1.png",
"5-TableII-1.png"
]
} | [
"What dataset is used to train the model?"
] | [
[
"1610.09225-Data Collection-0"
]
] | [
"Collected tweets and opening and closing stock prices of Microsoft."
] | 92 |
1805.04508 | Examining Gender and Race Bias in Two Hundred Sentiment Analysis Systems | Automatic machine learning systems can inadvertently accentuate and perpetuate inappropriate human biases. Past work on examining inappropriate biases has largely focused on just individual systems. Further, there is no benchmark dataset for examining inappropriate biases in systems. Here for the first time, we present the Equity Evaluation Corpus (EEC), which consists of 8,640 English sentences carefully chosen to tease out biases towards certain races and genders. We use the dataset to examine 219 automatic sentiment analysis systems that took part in a recent shared task, SemEval-2018 Task 1 'Affect in Tweets'. We find that several of the systems show statistically significant bias; that is, they consistently provide slightly higher sentiment intensity predictions for one race or one gender. We make the EEC freely available. | {
"paragraphs": [
[
"[0]leftmargin=* [0]leftmargin=*",
"Automatic systems have had a significant and beneficial impact on all walks of human life. So much so that it is easy to overlook their potential to benefit society by promoting equity, diversity, and fairness. For example, machines do not take bribes to do their jobs, they can determine eligibility for a loan without being influenced by the color of the applicant's skin, and they can provide access to information and services without discrimination based on gender or sexual orientation. Nonetheless, as machine learning systems become more human-like in their predictions, they can also perpetuate human biases. Some learned biases may be beneficial for the downstream application (e.g., learning that humans often use some insect names, such as spider or cockroach, to refer to unpleasant situations). Other biases can be inappropriate and result in negative experiences for some groups of people. Examples include, loan eligibility and crime recidivism prediction systems that negatively assess people belonging to a certain pin/zip code (which may disproportionately impact people of a certain race) BIBREF0 and resumé sorting systems that believe that men are more qualified to be programmers than women BIBREF1 . Similarly, sentiment and emotion analysis systems can also perpetuate and accentuate inappropriate human biases, e.g., systems that consider utterances from one race or gender to be less positive simply because of their race or gender, or customer support systems that prioritize a call from an angry male over a call from the equally angry female.",
"Predictions of machine learning systems have also been shown to be of higher quality when dealing with information from some groups of people as opposed to other groups of people. For example, in the area of computer vision, gender classification systems perform particularly poorly for darker skinned females BIBREF2 . Natural language processing (NLP) systems have been shown to be poor in understanding text produced by people belonging to certain races BIBREF3 , BIBREF4 . For NLP systems, the sources of the bias often include the training data, other corpora, lexicons, and word embeddings that the machine learning algorithm may leverage to build its prediction model.",
"Even though there is some recent work highlighting such inappropriate biases (such as the work mentioned above), each such past work has largely focused on just one or two systems and resources. Further, there is no benchmark dataset for examining inappropriate biases in natural language systems. In this paper, we describe how we compiled a dataset of 8,640 English sentences carefully chosen to tease out biases towards certain races and genders. We will refer to it as the Equity Evaluation Corpus (EEC). We used the EEC as a supplementary test set in a recent shared task on predicting sentiment and emotion intensity in tweets, SemEval-2018 Task 1: Affect in Tweets BIBREF5 . In particular, we wanted to test a hypothesis that a system should equally rate the intensity of the emotion expressed by two sentences that differ only in the gender/race of a person mentioned. Note that here the term system refers to the combination of a machine learning architecture trained on a labeled dataset, and possibly using additional language resources. The bias can originate from any or several of these parts. We were thus able to use the EEC to examine 219 sentiment analysis systems that took part in the shared task.",
"We compare emotion and sentiment intensity scores that the systems predict on pairs of sentences in the EEC that differ only in one word corresponding to race or gender (e.g., `This man made me feel angry' vs. `This woman made me feel angry'). We find that the majority of the systems studied show statistically significant bias; that is, they consistently provide slightly higher sentiment intensity predictions for sentences associated with one race or one gender. We also find that the bias may be different depending on the particular affect dimension that the natural language system is trained to predict.",
"Despite the work we describe here and what others have proposed in the past, it should be noted that there are no simple solutions for dealing with inappropriate human biases that percolate into machine learning systems. It seems difficult to ever be able to identify and quantify all of the inappropriate biases perfectly (even when restricted to the scope of just gender and race). Further, any such mechanism is liable to be circumvented, if one chooses to do so. Nonetheless, as developers of sentiment analysis systems, and NLP systems more broadly, we cannot absolve ourselves of the ethical implications of the systems we build. Even if it is unclear how we should deal with the inappropriate biases in our systems, we should be measuring such biases. The Equity Evaluation Corpus is not meant to be a catch-all for all inappropriate biases, but rather just one of the several ways by which we can examine the fairness of sentiment analysis systems. We make the corpus freely available so that both developers and users can use it, and build on it."
],
[
"Recent studies have demonstrated that the systems trained on the human-written texts learn human-like biases BIBREF1 , BIBREF6 . In general, any predictive model built on historical data may inadvertently inherit human biases based on gender, ethnicity, race, or religion BIBREF7 , BIBREF8 . Discrimination-aware data mining focuses on measuring discrimination in data as well as on evaluating performance of discrimination-aware predictive models BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 .",
"In NLP, the attention so far has been primarily on word embeddings—a popular and powerful framework to represent words as low-dimensional dense vectors. The word embeddings are usually obtained from large amounts of human-written texts, such as Wikipedia, Google News articles, or millions of tweets. Bias in sentiment analysis systems has only been explored in simple systems that make use of pre-computed word embeddings BIBREF13 . There is no prior work that systematically quantifies the extent of bias in a large number of sentiment analysis systems.",
"This paper does not examine the differences in accuracies of systems on text produced by different races or genders, as was done by hovy2015demographic,blodgett2016demographic,jurgens2017incorporating,buolamwini2018gender. Approaches on how to mitigate inappropriate biases BIBREF14 , BIBREF1 , BIBREF15 , BIBREF16 , BIBREF13 , BIBREF17 , BIBREF18 are also beyond the scope of this paper. See also the position paper by hovy2016social, which identifies socio-ethical implications of the NLP systems in general."
],
[
"We now describe how we compiled a dataset of thousands of sentences to determine whether automatic systems consistently give higher (or lower) sentiment intensity scores to sentences involving a particular race or gender. There are several ways in which such a dataset may be compiled. We present below the choices that we made.",
"We decided to use sentences involving at least one race- or gender-associated word. The sentences were intended to be short and grammatically simple. We also wanted some sentences to include expressions of sentiment and emotion, since the goal is to test sentiment and emotion systems. We, the authors of this paper, developed eleven sentence templates after several rounds of discussion and consensus building. They are shown in Table TABREF3 . The templates are divided into two groups. The first type (templates 1–7) includes emotion words. The purpose of this set is to have sentences expressing emotions. The second type (templates 8–11) does not include any emotion words. The purpose of this set is to have non-emotional (neutral) sentences.",
"The templates include two variables: INLINEFORM0 person INLINEFORM1 and INLINEFORM2 emotion word INLINEFORM3 . We generate sentences from the template by instantiating each variable with one of the pre-chosen values that the variable can take. Each of the eleven templates includes the variable INLINEFORM4 person INLINEFORM5 . INLINEFORM6 person INLINEFORM7 can be instantiated by any of the following noun phrases:",
"",
"For our study, we chose ten names of each kind from the study by Caliskan:2017 (see Table TABREF7 ). The full lists of noun phrases representing females and males, used in our study, are shown in Table TABREF8 .",
"The second variable, INLINEFORM0 emotion word INLINEFORM1 , has two variants. Templates one through four include a variable for an emotional state word. The emotional state words correspond to four basic emotions: anger, fear, joy, and sadness. Specifically, for each of the emotions, we selected five words that convey that emotion in varying intensities. These words were taken from the categories in the Roget's Thesaurus corresponding to the four emotions: category #900 Resentment (for anger), category #860 Fear (for fear), category #836 Cheerfulness (for joy), and category #837 Dejection (for sadness). Templates five through seven include emotion words describing a situation or event. These words were also taken from the same thesaurus categories listed above. The full lists of emotion words (emotional state words and emotional situation/event words) are shown in Table TABREF10 .",
"We generated sentences from the templates by replacing INLINEFORM0 person INLINEFORM1 and INLINEFORM2 emotion word INLINEFORM3 variables with the values they can take. In total, 8,640 sentences were generated with the various combinations of INLINEFORM4 person INLINEFORM5 and INLINEFORM6 emotion word INLINEFORM7 values across the eleven templates. We manually examined the sentences to make sure they were grammatically well-formed. Notably, one can derive pairs of sentences from the EEC such that they differ only in one word corresponding to gender or race (e.g., `My daughter feels devastated' and `My son feels devastated'). We refer to the full set of 8,640 sentences as Equity Evaluation Corpus."
],
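To make the template-instantiation procedure above concrete, here is a minimal, hypothetical sketch of how gender-contrasting sentence pairs could be generated; the template strings, noun phrases, and emotion words below are illustrative stand-ins rather than the exact lists from Tables 1–4.

```python
from itertools import product

# Illustrative subsets only; the study uses 11 templates, 20 emotion words,
# and longer noun-phrase and name lists (Tables 1-4).
templates = ["{person} feels {emotion}.", "I saw {person} in the market."]
female_np = ["my daughter", "my sister"]
male_np = ["my son", "my brother"]
emotions = ["angry", "devastated", "ecstatic"]

def instantiate(template, person, emotion=None):
    # Some templates (8-11 in the paper) contain no emotion-word slot.
    if "{emotion}" in template:
        return template.format(person=person, emotion=emotion)
    return template.format(person=person)

# Build (female, male) sentence pairs that differ in exactly one phrase.
pairs = []
for template in templates:
    emo_values = emotions if "{emotion}" in template else [None]
    for (f, m), emo in product(zip(female_np, male_np), emo_values):
        pairs.append((instantiate(template, f, emo),
                      instantiate(template, m, emo)))

for f_sent, m_sent in pairs:
    print(f_sent, "|", m_sent)
```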
[
"The race and gender bias evaluation was carried out on the output of the 219 automatic systems that participated in SemEval-2018 Task 1: Affect in Tweets BIBREF5 . The shared task included five subtasks on inferring the affectual state of a person from their tweet: 1. emotion intensity regression, 2. emotion intensity ordinal classification, 3. valence (sentiment) regression, 4. valence ordinal classification, and 5. emotion classification. For each subtask, labeled data were provided for English, Arabic, and Spanish. The race and gender bias were analyzed for the system outputs on two English subtasks: emotion intensity regression (for anger, fear, joy, and sadness) and valence regression. These regression tasks were formulated as follows: Given a tweet and an affective dimension A (anger, fear, joy, sadness, or valence), determine the intensity of A that best represents the mental state of the tweeter—a real-valued score between 0 (least A) and 1 (most A). Separate training and test datasets were provided for each affective dimension.",
"Training sets included tweets along with gold intensity scores. Two test sets were provided for each task: 1. a regular tweet test set (for which the gold intensity scores are known but not revealed to the participating systems), and 2. the Equity Evaluation Corpus (for which no gold intensity labels exist). Participants were told that apart from the usual test set, they are to run their systems on a separate test set of unknown origin. The participants were instructed to train their system on the tweets training sets provided, and that they could use any other resources they may find or create. They were to run the same final system on the two test sets. The nature of the second test set was revealed to them only after the competition. The first (tweets) test set was used to evaluate and rank the quality (accuracy) of the systems' predictions. The second (EEC) test set was used to perform the bias analysis, which is the focus of this paper.",
"Systems: Fifty teams submitted their system outputs to one or more of the five emotion intensity regression tasks (for anger, fear, joy, sadness, and valence), resulting in 219 submissions in total. Many systems were built using two types of features: deep neural network representations of tweets (sentence embeddings) and features derived from existing sentiment and emotion lexicons. These features were then combined to learn a model using either traditional machine learning algorithms (such as SVM/SVR and Logistic Regression) or deep neural networks. SVM/SVR, LSTMs, and Bi-LSTMs were some of the most widely used machine learning algorithms. The sentence embeddings were obtained by training a neural network on the provided training data, a distant supervision corpus (e.g., AIT2018 Distant Supervision Corpus that has tweets with emotion-related query terms), sentiment-labeled tweet corpora (e.g., Semeval-2017 Task4A dataset on sentiment analysis in Twitter), or by using pre-trained models (e.g., DeepMoji BIBREF20 , Skip thoughts BIBREF21 ). The lexicon features were often derived from the NRC emotion and sentiment lexicons BIBREF22 , BIBREF23 , BIBREF24 , AFINN BIBREF25 , and Bing Liu Lexicon BIBREF26 .",
"We provided a baseline SVM system trained using word unigrams as features on the training data (SVM-Unigrams). This system is also included in the current analysis.",
"Measuring bias: To examine gender bias, we compared each system's predicted scores on the EEC sentence pairs as follows:",
"",
"Thus, eleven pairs of scores (ten pairs of scores from ten noun phrase pairs and one pair of scores from the averages on name subsets) were examined for each template–emotion word instantiation. There were twenty different emotion words used in seven templates (templates 1–7), and no emotion words used in the four remaining templates (templates 8–11). In total, INLINEFORM0 pairs of scores were compared.",
"Similarly, to examine race bias, we compared pairs of system predicted scores as follows:",
"",
"Thus, one pair of scores was examined for each template–emotion word instantiation. In total, INLINEFORM0 pairs of scores were compared.",
"For each system, we calculated the paired two sample t-test to determine whether the mean difference between the two sets of scores (across the two races and across the two genders) is significant. We set the significance level to 0.05. However, since we performed 438 assessments (219 submissions evaluated for biases in both gender and race), we applied Bonferroni correction. The null hypothesis that the true mean difference between the paired samples was zero was rejected if the calculated p-value fell below INLINEFORM0 ."
],
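As a concrete illustration of the significance test described above, the following sketch applies a paired two-sample t-test with the Bonferroni-corrected threshold of 0.05/438; the score arrays are synthetic stand-ins for one submission's predictions, not actual shared-task outputs.

```python
import numpy as np
from scipy import stats

def gender_bias_test(female_scores, male_scores, n_assessments=438, alpha=0.05):
    """Paired t-test over EEC sentence-pair scores with Bonferroni correction.

    female_scores[i] and male_scores[i] are one system's intensity predictions
    on the i-th sentence pair differing only in the gender-associated word.
    """
    female_scores = np.asarray(female_scores, dtype=float)
    male_scores = np.asarray(male_scores, dtype=float)
    t_stat, p_value = stats.ttest_rel(female_scores, male_scores)
    threshold = alpha / n_assessments              # Bonferroni-corrected level
    return {
        "mean_diff_F_minus_M": float(np.mean(female_scores - male_scores)),
        "t": float(t_stat),
        "p": float(p_value),
        "significant": bool(p_value < threshold),
    }

# Synthetic example standing in for one submission's 1,584 gender pairs.
rng = np.random.default_rng(0)
f = rng.uniform(0.0, 1.0, size=1584)
m = np.clip(f - 0.01 + rng.normal(0.0, 0.02, size=1584), 0.0, 1.0)
print(gender_bias_test(f, m))
```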
[
"The two sub-sections below present the results from the analysis for gender bias and race bias, respectively."
],
[
"Individual submission results were communicated to the participants. Here, we present the summary results across all the teams. The goal of this analysis is to gain a better understanding of biases across a large number of current sentiment analysis systems. Thus, we partition the submissions into three groups according to the bias they show:",
"",
"F=M not significant: submissions that showed no statistically significant difference in intensity scores predicted for corresponding female and male noun phrase sentences,",
"",
"F INLINEFORM0 –M INLINEFORM1 significant: submissions that consistently gave higher scores for sentences with female noun phrases than for corresponding sentences with male noun phrases,",
"",
"F INLINEFORM0 –M INLINEFORM1 significant: submissions that consistently gave lower scores for sentences with female noun phrases than for corresponding sentences with male noun phrases.",
"",
"For each system and each sentence pair, we calculate the score difference INLINEFORM0 as the score for the female noun phrase sentence minus the score for the corresponding male noun phrase sentence. Table TABREF24 presents the summary results for each of the bias groups. It has the following columns:",
"",
"#Subm.: number of submissions in each group.",
"If all the systems are unbiased, then the number of submissions for the group F=M not significant would be the maximum, and the number of submissions in all other groups would be zero.",
"",
"Avg. score difference F INLINEFORM0 –M INLINEFORM1 : the average INLINEFORM2 for only those pairs where the score for the female noun phrase sentence is higher. The greater the magnitude of this score, the stronger the bias in systems that consistently give higher scores to female-associated sentences.",
"",
"Avg. score difference F INLINEFORM0 –M INLINEFORM1 : the average INLINEFORM2 for only those pairs where the score for the female noun phrase sentence is lower. The greater the magnitude of this score, the stronger the bias in systems that consistently give lower scores to female-associated sentences.",
"",
"Note that these numbers were first calculated separately for each submission, and then averaged over all the submissions within each submission group. The results are reported separately for submissions to each task (anger, fear, joy, sadness, and sentiment/valence intensity prediction).",
"Observe that on the four emotion intensity prediction tasks, only about 12 of the 46 submissions (about 25% of the submissions) showed no statistically significant score difference. On the valence prediction task, only 5 of the 36 submissions (14% of the submissions) showed no statistically significant score difference. Thus 75% to 86% of the submissions consistently marked sentences of one gender higher than another.",
"When predicting anger, joy, or valence, the number of systems consistently giving higher scores to sentences with female noun phrases (21–25) is markedly higher than the number of systems giving higher scores to sentences with male noun phrases (8–13). (Recall that higher valence means more positive sentiment.) In contrast, on the fear task, most submissions tended to assign higher scores to sentences with male noun phrases (23) as compared to the number of systems giving higher scores to sentences with female noun phrases (12). When predicting sadness, the number of submissions that mostly assigned higher scores to sentences with female noun phrases (18) is close to the number of submissions that mostly assigned higher scores to sentences with male noun phrases (16). These results are in line with some common stereotypes, such as females are more emotional, and situations involving male agents are more fearful BIBREF27 .",
"Figure FIGREF25 shows the score differences ( INLINEFORM0 ) for individual systems on the valence regression task. Plots for the four emotion intensity prediction tasks are shown in Figure FIGREF31 in the Appendix. Each point ( ▲, ▼, ●) on the plot corresponds to the difference in scores predicted by the system on one sentence pair. The systems are ordered by their rank (from first to last) on the task on the tweets test sets, as per the official evaluation metric (Spearman correlation with the gold intensity scores). We will refer to the difference between the maximal value of INLINEFORM1 and the minimal value of INLINEFORM2 for a particular system as the INLINEFORM3 –spread. Observe that the INLINEFORM4 –spreads for many systems are rather large, up to 0.57. Depending on the task, the top 10 or top 15 systems as well as some of the worst performing systems tend to have smaller INLINEFORM5 –spreads while the systems with medium to low performance show greater sensitivity to the gender-associated words. Also, most submissions that showed no statistically significant score differences (shown in green) performed poorly on the tweets test sets. Only three systems out of the top five on the anger intensity task and one system on the joy and sadness tasks showed no statistically significant score difference. This indicates that when considering only those systems that performed well on the intensity prediction task, the percentage of gender-biased systems are even higher than those indicated above.",
"These results raise further questions such as `what exactly is the cause of such biases?' and `why is the bias impacted by the emotion task under consideration?'. Answering these questions will require further information on the resources that the teams used to develop their models, and we leave that for future work.",
"Average score differences: For submissions that showed statistically significant score differences, the average score difference F INLINEFORM0 –M INLINEFORM1 and the average score difference F INLINEFORM2 –M INLINEFORM3 were INLINEFORM4 . Since the intensity scores range from 0 to 1, 0.03 is 3% of the full range. The maximal score difference ( INLINEFORM5 ) across all the submissions was as high as 0.34. Note, however, that these INLINEFORM6 s are the result of changing just one word in a sentence. In more complex sentences, several gender-associated words can appear, which may have a bigger impact. Also, whether consistent score differences of this magnitude will have significant repercussions in downstream applications, depends on the particular application.",
"Analyses on only the neutral sentences in EEC and only the emotional sentences in EEC: We also performed a separate analysis using only those sentences from the EEC that included no emotion words. Recall that there are four templates that contain no emotion words. Tables TABREF26 shows these results. We observe similar trends as in the analysis on the full set. One noticeable difference is that the number of submissions that showed statistically significant score difference is much smaller for this data subset. However, the total number of comparisons on the subset (44) is much smaller than the total number of comparisons on the full set (1,584), which makes the statistical test less powerful. Note also that the average score differences on the subset (columns 3 and 4 in Table TABREF26 ) tend to be higher than the differences on the full set (columns 3 and 4 in Table TABREF24 ). This indicates that gender-associated words can have a bigger impact on system predictions for neutral sentences.",
"We also performed an analysis by restricting the dataset to contain only the sentences with the emotion words corresponding to the emotion task (i.e., submissions to the anger intensity prediction task were evaluated only on sentences with anger words). The results (not shown here) were similar to the results on the full set."
],
[
"We did a similar analysis for race as we did for gender. For each submission on each task, we calculated the difference between the average predicted score on the set of sentences with African American (AA) names and the average predicted score on the set of sentences with European American (EA) names. Then, we aggregated the results over all such sentence pairs in the EEC.",
"Table TABREF29 shows the results. The table has the same form and structure as the gender result tables. Observe that the number of submissions with no statistically significant score difference for sentences pertaining to the two races is about 5–11 (about 11% to 24%) for the four emotions and 3 (about 8%) for valence. These numbers are even lower than what was found for gender.",
"The majority of the systems assigned higher scores to sentences with African American names on the tasks of anger, fear, and sadness intensity prediction. On the joy and valence tasks, most submissions tended to assign higher scores to sentences with European American names. These tendencies reflect some common stereotypes that associate African Americans with more negative emotions BIBREF28 .",
"Figure FIGREF28 shows the score differences for individual systems on race sentence pairs on the valence regression task. Plots for the four emotion intensity prediction tasks are shown in Figure FIGREF32 in the Appendix. Here, the INLINEFORM0 –spreads are smaller than on the gender sentence pairs—from 0 to 0.15. As in the gender analysis, on the valence task the top 13 systems as well as some of the worst performing systems have smaller INLINEFORM1 –spread while the systems with medium to low performance show greater sensitivity to the race-associated names. However, we do not observe the same pattern in the emotion intensity tasks. Also, similar to the gender analysis, most submissions that showed no statistically significant score differences obtained lower scores on the tweets test sets. Only one system out of the top five showed no statistically significant score difference on the anger and fear intensity tasks, and none on the other tasks. Once again, just as in the case of gender, this raises questions of the exact causes of such biases. We hope to explore this in future work."
],
[
"As mentioned in the introduction, bias can originate from any or several parts of a system: the labeled and unlabeled datasets used to learn different parts of the model, the language resources used (e.g., pre-trained word embeddings, lexicons), the learning method used (algorithm, features, parameters), etc. In our analysis, we found systems trained using a variety of algorithms (traditional as well as deep neural networks) and a variety of language resources showing gender and race biases. Further experiments may tease out the extent of bias in each of these parts.",
"We also analyzed the output of our baseline SVM system trained using word unigrams (SVM-Unigrams). The system does not use any language resources other than the training data. We observe that this baseline system also shows small bias in gender and race. The INLINEFORM0 -spreads for this system were quite small: 0.09 to 0.2 on the gender sentence pairs and less than 0.002 on the race sentence pairs. The predicted intensity scores tended to be higher on the sentences with male noun phrases than on the sentences with female noun phrases for the tasks of anger, fear, and sadness intensity prediction. This tendency was reversed on the task of valence prediction. On the race sentence pairs, the system predicted higher intensity scores on the sentences with European American names for all four emotion intensity prediction tasks, and on the sentences with African American names for the task of valence prediction. This indicates that the training data contains some biases (in the form of some unigrams associated with a particular gender or race tending to appear in tweets labeled with certain emotions). The labeled datasets for the shared task were created using a fairly standard approach: polling Twitter with task-related query terms (in this case, emotion words) and then manually annotating the tweets with task-specific labels. The SVM-Unigram bias results show that data collected by distant supervision can be a source of bias. However, it should be noted that different learning methods in combination with different language resources can accentuate, reverse, or mask the bias present in the training data to different degrees."
],
[
"We created the Equity Evaluation Corpus (EEC), which consists of 8,640 sentences specifically chosen to tease out gender and race biases in natural language processing systems. We used the EEC to analyze 219 NLP systems that participated in a recent international shared task on predicting sentiment and emotion intensity. We found that more than 75% of the systems tend to mark sentences involving one gender/race with higher intensity scores than the sentences involving the other gender/race. We found such biases to be more widely prevalent for race than for gender. We also found that the bias can be different depending on the particular affect dimension involved.",
"We found the score differences across genders and across races to be somewhat small on average ( INLINEFORM0 , which is INLINEFORM1 of the 0 to 1 score range). However, for some systems the score differences reached as high as 0.34 (34%). What impact a consistent bias, even with an average magnitude INLINEFORM2 , might have in downstream applications merits further investigation.",
"We plan to extend the EEC with sentences associated with country names, professions (e.g., doctors, police officers, janitors, teachers, etc.), fields of study (e.g., arts vs. sciences), as well as races (e.g., Asian, mixed, etc.) and genders (e.g., agender, androgyne, trans, queer, etc.) not included in the current study. We can then use the corpus to examine biases across each of those variables as well. We are also interested in exploring which systems (or what techniques) accentuate inappropriate biases in the data and which systems mitigate such biases. Finally, we are interested in exploring how the quality of sentiment analysis predictions varies when applied to text produced by different demographic groups, such as people of different races, genders, and ethnicities.",
"The Equity Evaluation Corpus and the proposed methodology to examine bias are not meant to be comprehensive. However, using several approaches and datasets such as the one proposed here can bring about a more thorough examination of inappropriate biases in modern machine learning systems."
],
[
"Figures FIGREF31 and FIGREF32 show box plots of the score differences for each system on the four emotion intensity regression tasks on the gender and race sentence pairs, respectively. Each point on a plot corresponds to the difference in scores predicted by the system on one sentence pair. The systems are ordered by their performance rank (from first to last) on the task as per the official evaluation metric on the tweets test sets."
]
],
"section_name": [
"Introduction",
"Related Work",
"The Equity Evaluation Corpus",
"Measuring Race and Gender Bias in Automatic Sentiment Analysis Systems",
"Results",
"Gender Bias Results",
"Race Bias Results",
"Discussion",
"Conclusions and Future Work",
"Appendix"
]
} | {
"answers": [
{
"annotation_id": [
"899bbcbc1152dead3df2608f46221c22ef8251c6",
"9a91609cbbd3fe10645513060870df3db024e74d"
],
"answer": [
{
"evidence": [
"When predicting anger, joy, or valence, the number of systems consistently giving higher scores to sentences with female noun phrases (21–25) is markedly higher than the number of systems giving higher scores to sentences with male noun phrases (8–13). (Recall that higher valence means more positive sentiment.) In contrast, on the fear task, most submissions tended to assign higher scores to sentences with male noun phrases (23) as compared to the number of systems giving higher scores to sentences with female noun phrases (12). When predicting sadness, the number of submissions that mostly assigned higher scores to sentences with female noun phrases (18) is close to the number of submissions that mostly assigned higher scores to sentences with male noun phrases (16). These results are in line with some common stereotypes, such as females are more emotional, and situations involving male agents are more fearful BIBREF27 .",
"The majority of the systems assigned higher scores to sentences with African American names on the tasks of anger, fear, and sadness intensity prediction. On the joy and valence tasks, most submissions tended to assign higher scores to sentences with European American names. These tendencies reflect some common stereotypes that associate African Americans with more negative emotions BIBREF28 ."
],
"extractive_spans": [],
"free_form_answer": "Females are given higher sentiment intensity when predicting anger, joy or valence, but males are given higher sentiment intensity when predicting fear.\nAfrican American names are given higher score on the tasks of anger, fear, and sadness intensity prediction, but European American names are given higher scores on joy and valence task.",
"highlighted_evidence": [
"When predicting anger, joy, or valence, the number of systems consistently giving higher scores to sentences with female noun phrases (21–25) is markedly higher than the number of systems giving higher scores to sentences with male noun phrases (8–13). (Recall that higher valence means more positive sentiment.) In contrast, on the fear task, most submissions tended to assign higher scores to sentences with male noun phrases (23) as compared to the number of systems giving higher scores to sentences with female noun phrases (12). When predicting sadness, the number of submissions that mostly assigned higher scores to sentences with female noun phrases (18) is close to the number of submissions that mostly assigned higher scores to sentences with male noun phrases (16). These results are in line with some common stereotypes, such as females are more emotional, and situations involving male agents are more fearful BIBREF27 ",
"The majority of the systems assigned higher scores to sentences with African American names on the tasks of anger, fear, and sadness intensity prediction. On the joy and valence tasks, most submissions tended to assign higher scores to sentences with European American names. These tendencies reflect some common stereotypes that associate African Americans with more negative emotions BIBREF28 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"When predicting anger, joy, or valence, the number of systems consistently giving higher scores to sentences with female noun phrases (21–25) is markedly higher than the number of systems giving higher scores to sentences with male noun phrases (8–13). (Recall that higher valence means more positive sentiment.) In contrast, on the fear task, most submissions tended to assign higher scores to sentences with male noun phrases (23) as compared to the number of systems giving higher scores to sentences with female noun phrases (12). When predicting sadness, the number of submissions that mostly assigned higher scores to sentences with female noun phrases (18) is close to the number of submissions that mostly assigned higher scores to sentences with male noun phrases (16). These results are in line with some common stereotypes, such as females are more emotional, and situations involving male agents are more fearful BIBREF27 .",
"The majority of the systems assigned higher scores to sentences with African American names on the tasks of anger, fear, and sadness intensity prediction. On the joy and valence tasks, most submissions tended to assign higher scores to sentences with European American names. These tendencies reflect some common stereotypes that associate African Americans with more negative emotions BIBREF28 ."
],
"extractive_spans": [
" the number of systems consistently giving higher scores to sentences with female noun phrases",
"higher scores to sentences with African American names on the tasks of anger, fear, and sadness",
" joy and valence tasks, most submissions tended to assign higher scores to sentences with European American names"
],
"free_form_answer": "",
"highlighted_evidence": [
"When predicting anger, joy, or valence, the number of systems consistently giving higher scores to sentences with female noun phrases (21–25) is markedly higher than the number of systems giving higher scores to sentences with male noun phrases (8–13).",
"The majority of the systems assigned higher scores to sentences with African American names on the tasks of anger, fear, and sadness intensity prediction. On the joy and valence tasks, most submissions tended to assign higher scores to sentences with European American names."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"7709ba027124b84ae2c981958319684af87e20ee",
"bf40e3662481c948787db4db5ece1798621b9238"
],
"answer": [
{
"evidence": [
"We decided to use sentences involving at least one race- or gender-associated word. The sentences were intended to be short and grammatically simple. We also wanted some sentences to include expressions of sentiment and emotion, since the goal is to test sentiment and emotion systems. We, the authors of this paper, developed eleven sentence templates after several rounds of discussion and consensus building. They are shown in Table TABREF3 . The templates are divided into two groups. The first type (templates 1–7) includes emotion words. The purpose of this set is to have sentences expressing emotions. The second type (templates 8–11) does not include any emotion words. The purpose of this set is to have non-emotional (neutral) sentences."
],
"extractive_spans": [],
"free_form_answer": "Sentences involving at least one race- or gender-associated word, sentence have to be short and grammatically simple, sentence have to include expressions of sentiment and emotion.",
"highlighted_evidence": [
"We decided to use sentences involving at least one race- or gender-associated word. The sentences were intended to be short and grammatically simple. We also wanted some sentences to include expressions of sentiment and emotion, since the goal is to test sentiment and emotion systems."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We generated sentences from the templates by replacing INLINEFORM0 person INLINEFORM1 and INLINEFORM2 emotion word INLINEFORM3 variables with the values they can take. In total, 8,640 sentences were generated with the various combinations of INLINEFORM4 person INLINEFORM5 and INLINEFORM6 emotion word INLINEFORM7 values across the eleven templates. We manually examined the sentences to make sure they were grammatically well-formed. Notably, one can derive pairs of sentences from the EEC such that they differ only in one word corresponding to gender or race (e.g., `My daughter feels devastated' and `My son feels devastated'). We refer to the full set of 8,640 sentences as Equity Evaluation Corpus."
],
"extractive_spans": [
"generated with the various combinations of INLINEFORM4 person INLINEFORM5 and INLINEFORM6 emotion word INLINEFORM7 values across the eleven templates",
"differ only in one word corresponding to gender or race"
],
"free_form_answer": "",
"highlighted_evidence": [
"We generated sentences from the templates by replacing INLINEFORM0 person INLINEFORM1 and INLINEFORM2 emotion word INLINEFORM3 variables with the values they can take. In total, 8,640 sentences were generated with the various combinations of INLINEFORM4 person INLINEFORM5 and INLINEFORM6 emotion word INLINEFORM7 values across the eleven templates. We manually examined the sentences to make sure they were grammatically well-formed. Notably, one can derive pairs of sentences from the EEC such that they differ only in one word corresponding to gender or race (e.g., `My daughter feels devastated' and `My son feels devastated')."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five"
],
"paper_read": [
"no",
"no"
],
"question": [
"Which race and gender are given higher sentiment intensity predictions?",
"What criteria are used to select the 8,640 English sentences?"
],
"question_id": [
"cc354c952b5aaed2d4d1e932175e008ff2d801dd",
"0f12dc077fe8e5b95ca9163cea1dd17195c96929"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"bias",
"bias"
],
"topic_background": [
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: Sentence templates used in this study.",
"Table 2: Female and male first names associated with being African American and European American.",
"Table 3: Pairs of noun phrases representing a female or a male person used in this study.",
"Table 4: Emotion words used in this study.",
"Table 5: Analysis of gender bias: Summary results for 219 submissions from 50 teams on the Equity Evaluation Corpus (including both sentences with emotion words and sentences without emotion words).",
"Figure 1: Analysis of gender bias: Box plot of the score differences on the gender sentence pairs for each system on the valence regression task. Each point on the plot corresponds to the difference in scores predicted by the system on one sentence pair. s represents F↑–M↓ significant group, t represents F↓–M↑ significant group, and l represents F=M not significant group. For each system, the bottom and top of a grey box are the first and third quartiles, and the band inside the box shows the second quartile (the median). The whiskers extend to 1.5 times the interquartile range (IQR = Q3 - Q1) from the edge of the box. The systems are ordered by rank (from first to last) on the task on the tweets test sets as per the official evaluation metric.",
"Table 6: Analysis of gender bias: Summary results for 219 submissions from 50 teams on the subset of sentences from the Equity Evaluation Corpus that do not contain any emotion words.",
"Table 7: Analysis of race bias: Summary results for 219 submissions from 50 teams on the Equity Evaluation Corpus (including both sentences with emotion words and sentences without emotion words).",
"Figure 2: Analysis of race bias: Box plot of the score differences on the race sentence pairs for each system on the valence regression task. Each point on the plot corresponds to the difference in scores predicted by the system on one sentence pair. s represents AA↑–EA↓ significant group, t represents AA↓–EA↑ significant group, and l represents AA=EA not significant group. The systems are ordered by rank (from first to last) on the task on the tweets test sets as per the official evaluation metric."
],
"file": [
"3-Table1-1.png",
"3-Table2-1.png",
"3-Table3-1.png",
"4-Table4-1.png",
"6-Table5-1.png",
"7-Figure1-1.png",
"7-Table6-1.png",
"8-Table7-1.png",
"9-Figure2-1.png"
]
} | [
"Which race and gender are given higher sentiment intensity predictions?",
"What criteria are used to select the 8,640 English sentences?"
] | [
[
"1805.04508-Race Bias Results-2",
"1805.04508-Gender Bias Results-19"
],
[
"1805.04508-The Equity Evaluation Corpus-6",
"1805.04508-The Equity Evaluation Corpus-1"
]
] | [
"Females are given higher sentiment intensity when predicting anger, joy or valence, but males are given higher sentiment intensity when predicting fear.\nAfrican American names are given higher score on the tasks of anger, fear, and sadness intensity prediction, but European American names are given higher scores on joy and valence task.",
"Sentences involving at least one race- or gender-associated word, sentence have to be short and grammatically simple, sentence have to include expressions of sentiment and emotion."
] | 95 |
1904.03288 | Jasper: An End-to-End Convolutional Neural Acoustic Model | In this paper, we report state-of-the-art results on LibriSpeech among end-to-end speech recognition models without any external training data. Our model, Jasper, uses only 1D convolutions, batch normalization, ReLU, dropout, and residual connections. To improve training, we further introduce a new layer-wise optimizer called NovoGrad. Through experiments, we demonstrate that the proposed deep architecture performs as well or better than more complex choices. Our deepest Jasper variant uses 54 convolutional layers. With this architecture, we achieve 2.95% WER using a beam-search decoder with an external neural language model and 3.86% WER with a greedy decoder on LibriSpeech test-clean. We also report competitive results on the Wall Street Journal and the Hub5'00 conversational evaluation datasets. | {
"paragraphs": [
[
"Conventional automatic speech recognition (ASR) systems typically consist of several independently learned components: an acoustic model to predict context-dependent sub-phoneme states (senones) from audio, a graph structure to map senones to phonemes, and a pronunciation model to map phonemes to words. Hybrid systems combine hidden Markov models to model state dependencies with neural networks to predict states BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . Newer approaches such as end-to-end (E2E) systems reduce the overall complexity of the final system.",
"Our research builds on prior work that has explored using time-delay neural networks (TDNN), other forms of convolutional neural networks, and Connectionist Temporal Classification (CTC) loss BIBREF4 , BIBREF5 , BIBREF6 . We took inspiration from wav2letter BIBREF6 , which uses 1D-convolution layers. Liptchinsky et al. BIBREF7 improved wav2letter by increasing the model depth to 19 convolutional layers and adding Gated Linear Units (GLU) BIBREF8 , weight normalization BIBREF9 and dropout.",
"By building a deeper and larger capacity network, we aim to demonstrate that we can match or outperform non end-to-end models on the LibriSpeech and 2000hr Fisher+Switchboard tasks. Like wav2letter, our architecture, Jasper, uses a stack of 1D-convolution layers, but with ReLU and batch normalization BIBREF10 . We find that ReLU and batch normalization outperform other activation and normalization schemes that we tested for convolutional ASR. As a result, Jasper's architecture contains only 1D convolution, batch normalization, ReLU, and dropout layers – operators highly optimized for training and inference on GPUs.",
"It is possible to increase the capacity of the Jasper model by stacking these operations. Our largest version uses 54 convolutional layers (333M parameters), while our small model uses 34 (201M parameters). We use residual connections to enable this level of depth. We investigate a number of residual options and propose a new residual connection topology we call Dense Residual (DR).",
"Integrating our best acoustic model with a Transformer-XL BIBREF11 language model allows us to obtain new state-of-the-art (SOTA) results on LibriSpeech BIBREF12 test-clean of 2.95% WER and SOTA results among end-to-end models on LibriSpeech test-other. We show competitive results on Wall Street Journal (WSJ), and 2000hr Fisher+Switchboard (F+S). Using only greedy decoding without a language model we achieve 3.86% WER on LibriSpeech test-clean.",
"This paper makes the following contributions:"
],
[
"Jasper is a family of end-to-end ASR models that replace acoustic and pronunciation models with a convolutional neural network. Jasper uses mel-filterbank features calculated from 20ms windows with a 10ms overlap, and outputs a probability distribution over characters per frame. Jasper has a block architecture: a Jasper INLINEFORM0 x INLINEFORM1 model has INLINEFORM2 blocks, each with INLINEFORM3 sub-blocks. Each sub-block applies the following operations: a 1D-convolution, batch norm, ReLU, and dropout. All sub-blocks in a block have the same number of output channels.",
"Each block input is connected directly into the last sub-block via a residual connection. The residual connection is first projected through a 1x1 convolution to account for different numbers of input and output channels, then through a batch norm layer. The output of this batch norm layer is added to the output of the batch norm layer in the last sub-block. The result of this sum is passed through the activation function and dropout to produce the output of the sub-block.",
"The sub-block architecture of Jasper was designed to facilitate fast GPU inference. Each sub-block can be fused into a single GPU kernel: dropout is not used at inference-time and is eliminated, batch norm can be fused with the preceding convolution, ReLU clamps the result, and residual summation can be treated as a modified bias term in this fused operation.",
"All Jasper models have four additional convolutional blocks: one pre-processing and three post-processing. See Figure FIGREF7 and Table TABREF8 for details.",
"We also build a variant of Jasper, Jasper Dense Residual (DR). Jasper DR follows DenseNet BIBREF15 and DenseRNet BIBREF16 , but instead of having dense connections within a block, the output of a convolution block is added to the inputs of all the following blocks. While DenseNet and DenseRNet concatenates the outputs of different layers, Jasper DR adds them in the same way that residuals are added in ResNet. As explained below, we find addition to be as effective as concatenation."
],
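The sub-block and block structure described in this section can be sketched in a few lines of PyTorch; this is a simplified illustration only — kernel sizes, channel counts, dropout rates, and the number of repeats are placeholder values rather than the configurations in Table 1.

```python
import torch
from torch import nn

class JasperSubBlock(nn.Module):
    """One sub-block: 1D conv -> batch norm -> (residual add) -> ReLU -> dropout."""
    def __init__(self, in_ch, out_ch, kernel_size=11, dropout=0.2):
        super().__init__()
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size,
                              padding=kernel_size // 2, bias=False)
        self.bn = nn.BatchNorm1d(out_ch)
        self.act = nn.ReLU()
        self.drop = nn.Dropout(dropout)

    def forward(self, x, residual=None):
        out = self.bn(self.conv(x))
        if residual is not None:      # only in the last sub-block of a block
            out = out + residual
        return self.drop(self.act(out))

class JasperBlock(nn.Module):
    """A block of `repeat` sub-blocks with a projected residual into the last one."""
    def __init__(self, in_ch, out_ch, repeat=5):
        super().__init__()
        chans = [in_ch] + [out_ch] * repeat
        self.subs = nn.ModuleList(
            [JasperSubBlock(chans[i], chans[i + 1]) for i in range(repeat)])
        # 1x1 conv + batch norm project the block input for the residual path.
        self.res_proj = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, kernel_size=1, bias=False),
            nn.BatchNorm1d(out_ch))

    def forward(self, x):
        res = self.res_proj(x)
        out = x
        for sub in self.subs[:-1]:
            out = sub(out)
        return self.subs[-1](out, residual=res)

# Example: batch of 4 utterances, 64 mel features, 200 frames.
block = JasperBlock(in_ch=64, out_ch=256, repeat=5)
print(block(torch.randn(4, 64, 200)).shape)   # torch.Size([4, 256, 200])
```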
[
"In our study, we evaluate performance of models with:",
"3 types of normalization: batch norm BIBREF10 , weight norm BIBREF9 , and layer norm BIBREF17 ",
"3 types of rectified linear units: ReLU, clipped ReLU (cReLU), and leaky ReLU (lReLU)",
"2 types of gated units: gated linear units (GLU) BIBREF8 , and gated activation units (GAU) BIBREF18 ",
"All experiment results are shown in Table TABREF15 . We first experimented with a smaller Jasper5x3 model to pick the top 3 settings before training on larger Jasper models. We found that layer norm with GAU performed the best on the smaller model. Layer norm with ReLU and batch norm with ReLU came second and third in our tests. Using these 3, we conducted further experiments on a larger Jasper10x4. For larger models, we noticed that batch norm with ReLU outperformed other choices. Thus, leading us to decide on batch normalization and ReLU for our architecture.",
"During batching, all sequences are padded to match the longest sequence. These padded values caused issues when using layer norm. We applied a sequence mask to exclude padding values from the mean and variance calculation. Further, we computed mean and variance over both the time dimension and channels similar to the sequence-wise normalization proposed by Laurent et al. BIBREF19 . In addition to masking layer norm, we additionally applied masking prior to the convolution operation, and masking the mean and variance calculation in batch norm. These results are shown in Table TABREF16 . Interestingly, we found that while masking before convolution gives a lower WER, using masks for both convolutions and batch norm results in worse performance.",
"As a final note, we found that training with weight norm was very unstable leading to exploding activations."
],
[
"For models deeper than Jasper 5x3, we observe consistently that residual connections are necessary for training to converge. In addition to the simple residual and dense residual model described above, we investigated DenseNet BIBREF15 and DenseRNet BIBREF16 variants of Jasper. Both connect the outputs of each sub-block to the inputs of following sub-blocks within a block. DenseRNet, similar to Dense Residual, connects the output of each output of each block to the input of all following blocks. DenseNet and DenseRNet combine residual connections using concatenation whereas Residual and Dense Residual use addition. We found that Dense Residual and DenseRNet perform similarly with each performing better on specific subsets of LibriSpeech. We decided to use Dense Residual for subsequent experiments. The main reason is that due to concatenation, the growth factor for DenseNet and DenseRNet requires tuning for deeper models whereas Dense Residual simply just repeats a sub-blocks."
],
[
"A language model (LM) is a probability distribution over arbitrary symbol sequences INLINEFORM0 such that more likely sequences are assigned high probabilities. LMs are frequently used to condition beam search. During decoding, candidates are evaluated using both acoustic scores and LM scores. Traditional N-gram LMs have been augmented with neural LMs in recent work BIBREF20 , BIBREF21 , BIBREF22 .",
"We experiment with statistical N-gram language models BIBREF23 and neural Transformer-XL BIBREF11 models. Our best results use acoustic and word-level N-gram language models to generate a candidate list using beam search with a width of 2048. Next, an external Transformer-XL LM rescores the final list. All LMs were trained on datasets independently from acoustic models. We show results with the neural LM in our Results section. We observed a strong correlation between the quality of the neural LM (measured by perplexity) and WER as shown in Figure FIGREF20 ."
],
[
"For training, we use either Stochastic Gradient Descent (SGD) with momentum or our own NovoGrad, an optimizer similar to Adam BIBREF14 , except that its second moments are computed per layer instead of per weight. Compared to Adam, it reduces memory consumption and we find it to be more numerically stable.",
"At each step INLINEFORM0 , NovoGrad computes the stochastic gradient INLINEFORM1 following the regular forward-backward pass. Then the second-order moment INLINEFORM2 is computed for each layer INLINEFORM3 similar to ND-Adam BIBREF27 : DISPLAYFORM0 ",
"The second-order moment INLINEFORM0 is used to re-scale gradients INLINEFORM1 before calculating the first-order moment INLINEFORM2 : DISPLAYFORM0 ",
"If L2-regularization is used, a weight decay INLINEFORM0 is added to the re-scaled gradient (as in AdamW BIBREF28 ): DISPLAYFORM0 ",
"Finally, new weights are computed using the learning rate INLINEFORM0 : DISPLAYFORM0 ",
"Using NovoGrad instead of SGD with momentum, we decreased the WER on dev-clean LibriSpeech from 4.00% to 3.64%, a relative improvement of 9% for Jasper DR 10x5. We will further analyze NovoGrad in forthcoming work."
],
[
"We evaluate Jasper across a number of datasets in various domains. In all experiments, we use dropout and weight decay as regularization. At training time, we use speed perturbation with fixed +/-10% BIBREF29 for LibriSpeech. For WSJ and Hub5'00, we use a random speed perturbation factor between [-10%, 10%] as each utterance is fed into the model. All models have been trained on NVIDIA DGX-1 in mixed precision BIBREF30 using OpenSeq2Seq BIBREF31 . Source code, training configurations, and pretrained models are available. "
],
[
"We evaluated the performance of Jasper on two read speech datasets: LibriSpeech and Wall Street Journal (WSJ). For LibriSpeech, we trained Jasper DR 10x5 using our NovoGrad optimizer for 400 epochs. We achieve SOTA performance on the test-clean subset and SOTA among end-to-end speech recognition models on test-other.",
"We trained a smaller Jasper 10x3 model with SGD with momentum optimizer for 400 epochs on a combined WSJ dataset (80 hours): LDC93S6A (WSJ0) and LDC94S13A (WSJ1). The results are provided in Table TABREF29 ."
],
[
"We also evaluate the Jasper model's performance on a conversational English corpus. The Hub5 Year 2000 (Hub5'00) evaluation (LDC2002S09, LDC2005S13) is widely used in academia. It is divided into two subsets: Switchboard (SWB) and Callhome (CHM). The training data for both the acoustic and language models consisted of the 2000hr Fisher+Switchboard training data (LDC2004S13, LDC2005S13, LDC97S62). Jasper DR 10x5 was trained using SGD with momentum for 50 epochs. We compare to other models trained using the same data and report Hub5'00 results in Table TABREF31 .",
"We obtain good results for SWB. However, there is work to be done to improve WER on harder tasks such as CHM."
],
[
"We have presented a new family of neural architectures for end-to-end speech recognition. Inspired by wav2letter's convolutional approach, we build a deep and scalable model, which requires a well-designed residual topology, effective regularization, and a strong optimizer. As our architecture studies demonstrated, a combination of standard components leads to SOTA results on LibriSpeech and competitive results on other benchmarks. Our Jasper architecture is highly efficient for training and inference, and serves as a good baseline approach on top of which to explore more sophisticated regularization, data augmentation, loss functions, language models, and optimization strategies. We are interested to see if our approach can continue to scale to deeper models and larger datasets."
]
],
"section_name": [
"Introduction",
"Jasper Architecture",
"Normalization and Activation",
"Residual Connections",
"Language Model",
"NovoGrad",
"Results",
"Read Speech",
"Conversational Speech",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"775f5fedc264202e997b595d469c9b94a6119ea8",
"ee1ca1d772bc7bad8283ba5af15c8b8f4e2efbea"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"We also evaluate the Jasper model's performance on a conversational English corpus. The Hub5 Year 2000 (Hub5'00) evaluation (LDC2002S09, LDC2005S13) is widely used in academia. It is divided into two subsets: Switchboard (SWB) and Callhome (CHM). The training data for both the acoustic and language models consisted of the 2000hr Fisher+Switchboard training data (LDC2004S13, LDC2005S13, LDC97S62). Jasper DR 10x5 was trained using SGD with momentum for 50 epochs. We compare to other models trained using the same data and report Hub5'00 results in Table TABREF31 .",
"FLOAT SELECTED: Table 7: Hub5’00, WER (%)"
],
"extractive_spans": [],
"free_form_answer": "LF-MMI Attention\nSeq2Seq \nRNN-T \nChar E2E LF-MMI \nPhone E2E LF-MMI \nCTC + Gram-CTC",
"highlighted_evidence": [
" We compare to other models trained using the same data and report Hub5'00 results in Table TABREF31 .",
"FLOAT SELECTED: Table 7: Hub5’00, WER (%)"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"aff7fa16b6668b6c2b4a2c863be2bef4e080128c",
"f21ee77b87ea6540292a61d1b5565b5983f31fd7"
],
"answer": [
{
"evidence": [
"We trained a smaller Jasper 10x3 model with SGD with momentum optimizer for 400 epochs on a combined WSJ dataset (80 hours): LDC93S6A (WSJ0) and LDC94S13A (WSJ1). The results are provided in Table TABREF29 .",
"FLOAT SELECTED: Table 6: WSJ End-to-End Models, WER (%)",
"FLOAT SELECTED: Table 7: Hub5’00, WER (%)",
"We also evaluate the Jasper model's performance on a conversational English corpus. The Hub5 Year 2000 (Hub5'00) evaluation (LDC2002S09, LDC2005S13) is widely used in academia. It is divided into two subsets: Switchboard (SWB) and Callhome (CHM). The training data for both the acoustic and language models consisted of the 2000hr Fisher+Switchboard training data (LDC2004S13, LDC2005S13, LDC97S62). Jasper DR 10x5 was trained using SGD with momentum for 50 epochs. We compare to other models trained using the same data and report Hub5'00 results in Table TABREF31 ."
],
"extractive_spans": [],
"free_form_answer": "In case of read speech datasets, their best model got the highest nov93 score of 16.1 and the highest nov92 score of 13.3.\nIn case of Conversational Speech, their best model got the highest SWB of 8.3 and the highest CHM of 19.3. ",
"highlighted_evidence": [
"We trained a smaller Jasper 10x3 model with SGD with momentum optimizer for 400 epochs on a combined WSJ dataset (80 hours): LDC93S6A (WSJ0) and LDC94S13A (WSJ1). The results are provided in Table TABREF29 .",
"FLOAT SELECTED: Table 6: WSJ End-to-End Models, WER (%)",
"FLOAT SELECTED: Table 7: Hub5’00, WER (%)",
"We also evaluate the Jasper model's performance on a conversational English corpus. The Hub5 Year 2000 (Hub5'00) evaluation (LDC2002S09, LDC2005S13) is widely used in academia. It is divided into two subsets: Switchboard (SWB) and Callhome (CHM). The training data for both the acoustic and language models consisted of the 2000hr Fisher+Switchboard training data (LDC2004S13, LDC2005S13, LDC97S62). Jasper DR 10x5 was trained using SGD with momentum for 50 epochs. We compare to other models trained using the same data and report Hub5'00 results in Table TABREF31 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 6: WSJ End-to-End Models, WER (%)",
"FLOAT SELECTED: Table 7: Hub5’00, WER (%)",
"We trained a smaller Jasper 10x3 model with SGD with momentum optimizer for 400 epochs on a combined WSJ dataset (80 hours): LDC93S6A (WSJ0) and LDC94S13A (WSJ1). The results are provided in Table TABREF29 .",
"We also evaluate the Jasper model's performance on a conversational English corpus. The Hub5 Year 2000 (Hub5'00) evaluation (LDC2002S09, LDC2005S13) is widely used in academia. It is divided into two subsets: Switchboard (SWB) and Callhome (CHM). The training data for both the acoustic and language models consisted of the 2000hr Fisher+Switchboard training data (LDC2004S13, LDC2005S13, LDC97S62). Jasper DR 10x5 was trained using SGD with momentum for 50 epochs. We compare to other models trained using the same data and report Hub5'00 results in Table TABREF31 ."
],
"extractive_spans": [],
"free_form_answer": "On WSJ datasets author's best approach achieves 9.3 and 6.9 WER compared to best results of 7.5 and 4.1 on nov93 and nov92 subsets.\nOn Hub5'00 datasets author's best approach achieves WER of 7.8 and 16.2 compared to best result of 7.3 and 14.2 on Switchboard (SWB) and Callhome (CHM) subsets.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 6: WSJ End-to-End Models, WER (%)",
"FLOAT SELECTED: Table 7: Hub5’00, WER (%)",
"We trained a smaller Jasper 10x3 model with SGD with momentum optimizer for 400 epochs on a combined WSJ dataset (80 hours): LDC93S6A (WSJ0) and LDC94S13A (WSJ1). The results are provided in Table TABREF29 .",
"We compare to other models trained using the same data and report Hub5'00 results in Table TABREF31 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"",
""
],
"paper_read": [
"",
""
],
"question": [
"what were the baselines?",
"what competitive results did they obtain?"
],
"question_id": [
"2ddb51b03163d309434ee403fef42d6b9aecc458",
"e587559f5ab6e42f7d981372ee34aebdc92b646e"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
""
],
"topic_background": [
"",
""
]
} | {
"caption": [
"Figure 1: JasperBxRmodel: B - number of blocks,R - number of sub-blocks.",
"Figure 2: Jasper Dense Residual",
"Table 1: Jasper 10x5: 10 blocks, each consisting of 5 1Dconvolutional sub-blocks, plus 4 additional blocks.",
"Table 4: Residual Connections: Greedy WER, LibriSpeech for Jasper 10x3 after 400 epochs. All models sized to have roughly the same parameter count.",
"Table 2: Normalization and Activation: Greedy WER, LibriSpeech after 50 epochs",
"Table 3: Sequence Masking: Greedy WER, LibriSpeech for Jasper 10x4 after 50 epochs",
"Figure 3: LM perplexity vs WER. LibriSpeech dev-other. Varying perplexity is achieved by taking earlier or later snapshots during training.",
"Table 5: LibriSpeech, WER (%)",
"Table 6: WSJ End-to-End Models, WER (%)",
"Table 7: Hub5’00, WER (%)"
],
"file": [
"2-Figure1-1.png",
"2-Figure2-1.png",
"2-Table1-1.png",
"3-Table4-1.png",
"3-Table2-1.png",
"3-Table3-1.png",
"3-Figure3-1.png",
"4-Table5-1.png",
"4-Table6-1.png",
"4-Table7-1.png"
]
} | [
"what were the baselines?",
"what competitive results did they obtain?"
] | [
[
"1904.03288-Conversational Speech-0",
"1904.03288-4-Table7-1.png"
],
[
"1904.03288-Read Speech-1",
"1904.03288-4-Table6-1.png",
"1904.03288-4-Table7-1.png",
"1904.03288-Conversational Speech-0"
]
] | [
"LF-MMI Attention\nSeq2Seq \nRNN-T \nChar E2E LF-MMI \nPhone E2E LF-MMI \nCTC + Gram-CTC",
"On WSJ datasets author's best approach achieves 9.3 and 6.9 WER compared to best results of 7.5 and 4.1 on nov93 and nov92 subsets.\nOn Hub5'00 datasets author's best approach achieves WER of 7.8 and 16.2 compared to best result of 7.3 and 14.2 on Switchboard (SWB) and Callhome (CHM) subsets."
] | 96 |
1909.13714 | Towards Multimodal Understanding of Passenger-Vehicle Interactions in Autonomous Vehicles: Intent/Slot Recognition Utilizing Audio-Visual Data | Understanding passenger intents from spoken interactions and car's vision (both inside and outside the vehicle) are important building blocks towards developing contextual dialog systems for natural interactions in autonomous vehicles (AV). In this study, we continued exploring AMIE (Automated-vehicle Multimodal In-cabin Experience), the in-cabin agent responsible for handling certain multimodal passenger-vehicle interactions. When the passengers give instructions to AMIE, the agent should parse such commands properly considering available three modalities (language/text, audio, video) and trigger the appropriate functionality of the AV system. We had collected a multimodal in-cabin dataset with multi-turn dialogues between the passengers and AMIE using a Wizard-of-Oz scheme via realistic scavenger hunt game. In our previous explorations, we experimented with various RNN-based models to detect utterance-level intents (set destination, change route, go faster, go slower, stop, park, pull over, drop off, open door, and others) along with intent keywords and relevant slots (location, position/direction, object, gesture/gaze, time-guidance, person) associated with the action to be performed in our AV scenarios. In this recent work, we propose to discuss the benefits of multimodal understanding of in-cabin utterances by incorporating verbal/language input (text and speech embeddings) together with the non-verbal/acoustic and visual input from inside and outside the vehicle (i.e., passenger gestures and gaze from in-cabin video stream, referred objects outside of the vehicle from the road view camera stream). Our experimental results outperformed text-only baselines and with multimodality, we achieved improved performances for utterance-level intent detection and slot filling. | {
"paragraphs": [
[
"Understanding passenger intents from spoken interactions and car's vision (both inside and outside the vehicle) are important building blocks towards developing contextual dialog systems for natural interactions in autonomous vehicles (AV). In this study, we continued exploring AMIE (Automated-vehicle Multimodal In-cabin Experience), the in-cabin agent responsible for handling certain multimodal passenger-vehicle interactions. When the passengers give instructions to AMIE, the agent should parse such commands properly considering available three modalities (language/text, audio, video) and trigger the appropriate functionality of the AV system. We had collected a multimodal in-cabin dataset with multi-turn dialogues between the passengers and AMIE using a Wizard-of-Oz scheme via realistic scavenger hunt game. In our previous explorations BIBREF0, BIBREF1, we experimented with various RNN-based models to detect utterance-level intents (set destination, change route, go faster, go slower, stop, park, pull over, drop off, open door, and others) along with intent keywords and relevant slots (location, position/direction, object, gesture/gaze, time-guidance, person) associated with the action to be performed in our AV scenarios. In this recent work, we propose to discuss the benefits of multimodal understanding of in-cabin utterances by incorporating verbal/language input (text and speech embeddings) together with the non-verbal/acoustic and visual input from inside and outside the vehicle (i.e., passenger gestures and gaze from in-cabin video stream, referred objects outside of the vehicle from the road view camera stream). Our experimental results outperformed text-only baselines and with multimodality, we achieved improved performances for utterance-level intent detection and slot filling."
],
[
"We explored leveraging multimodality for the NLU module in the SDS pipeline. As our AMIE in-cabin dataset has video and audio recordings, we investigated 3 modalities for the NLU: text, audio, and video. For text (language) modality, our previous work BIBREF1 presents the details of our best-performing Hierarchical & Joint Bi-LSTM models BIBREF3, BIBREF4, BIBREF5, BIBREF6 (H-Joint-2, see SECREF5) and the results for utterance-level intent recognition and word-level slot filling via transcribed and recognized (ASR output) textual data, using word embeddings (GloVe BIBREF7) as features. This study explores the following multimodal features:",
"Speech Embeddings: We incorporated pre-trained speech embeddings (Speech2Vec BIBREF8) as features, trained on a corpus of 500 hours of speech from LibriSpeech. Speech2Vec is considered as a speech version of Word2Vec BIBREF9 which is compared with Word2Vec vectors trained on the transcript of the same speech corpus. We experimented with concatenating word and speech embeddings by using pre-trained GloVe embeddings (6B tokens, 400K vocab, dim=100), Speech2Vec embeddings (37.6K vocab, dim=100), and its Word2Vec counterpart (37.6K vocab, dim=100).",
"Audio Features: Using openSMILE BIBREF10, 1582 audio features are extracted for each utterance using the segmented audio clips from in-cabin AMIE dataset. These are the INTERSPEECH 2010 Paralinguistic Challenge features (IS10) including PCM loudness, MFCC, log Mel Freq. Band, LSP, etc. BIBREF11.",
"Video Features: Using the feature extraction process described in BIBREF12, we extracted intermediate CNN features for each segmented video clip from AMIE dataset. For any given input video clip (segmented for each utterance), one frame per second is sampled and its visual descriptor is extracted from the activations of the intermediate convolution layers of a pre-trained CNN. We used the pre-trained Inception-ResNet-v2 model BIBREF13 and generated 4096-dim features for each sample. We experimented with adding 2 sources of visual information: (i) cabin/passenger view from the BackDriver RGB camera recordings, (ii) road/outside view from the DashCam RGB video streams."
],
[
"For incorporating speech embeddings experiments, performance results of NLU models on in-cabin data with various feature concatenations can be found in Table TABREF3, using our previous hierarchical joint model (H-Joint-2). When used in isolation, Word2Vec and Speech2Vec achieves comparable performances, which cannot reach GloVe performance. This was expected as the pre-trained Speech2Vec vectors have lower vocabulary coverage than GloVe. Yet, we observed that concatenating GloVe + Speech2Vec, and further GloVe + Word2Vec + Speech2Vec yields better NLU results: F1-score increased from 0.89 to 0.91 for intent recognition, from 0.96 to 0.97 for slot filling.",
"For multimodal (audio & video) features exploration, performance results of the compared models with varying modality/feature concatenations can be found in Table TABREF4. Since these audio/video features are extracted per utterance (on segmented audio & video clips), we experimented with the utterance-level intent recognition task only, using hierarchical joint learning (H-Joint-2). We investigated the audio-visual feature additions on top of text-only and text+speech embedding models. Adding openSMILE/IS10 features from audio, as well as incorporating intermediate CNN/Inception-ResNet-v2 features from video brought slight improvements to our intent models, reaching 0.92 F1-score. These initial results using feature concatenations may need further explorations, especially for certain intent-types such as stop (audio intensity) or relevant slots such as passenger gestures/gaze (from cabin video) and outside objects (from road video)."
],
[
"In this study, we present our initial explorations towards multimodal understanding of passenger utterances in autonomous vehicles. We briefly show that our experimental results outperformed certain baselines and with multimodality, we achieved improved overall F1-scores of 0.92 for utterance-level intent detection and 0.97 for word-level slot filling. This ongoing research has a potential impact of exploring real-world challenges with human-vehicle-scene interactions for autonomous driving support with spoken utterances."
],
[
"AMIE In-cabin Dataset: We obtained 1331 utterances having commands to AMIE agent from our in-cabin dataset. Annotation results for utterance-level intent types, slots and intent keywords can be found in Table TABREF7 and Table TABREF8.",
"Hierarchical & Joint Model (H-Joint-2): 2-level hierarchical joint learning model that detects/extracts intent keywords & slots using seq2seq Bi-LSTMs first (Level-1), then only the words that are predicted as intent keywords & valid slots are fed into Joint-2 model (Level-2), which is another seq2seq Bi-LSTM network for utterance-level intent detection (jointly trained with slots & intent keywords) BIBREF1."
]
],
"section_name": [
"Introduction",
"Methodology",
"Experimental Results",
"Conclusion",
"Appendices"
]
} | {
"answers": [
{
"annotation_id": [
"a6874431f79fc80de3fda92272b2ceed0a16a281",
"c9de8ef4c988093e1e187179ff6581865cc0a45b"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Speech Embeddings Experiments: Precision/Recall/F1-scores (%) of NLU Models"
],
"extractive_spans": [],
"free_form_answer": "by 2.3-6.8 points in f1 score for intent recognition and 0.8-3.5 for slot filling",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Speech Embeddings Experiments: Precision/Recall/F1-scores (%) of NLU Models"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For incorporating speech embeddings experiments, performance results of NLU models on in-cabin data with various feature concatenations can be found in Table TABREF3, using our previous hierarchical joint model (H-Joint-2). When used in isolation, Word2Vec and Speech2Vec achieves comparable performances, which cannot reach GloVe performance. This was expected as the pre-trained Speech2Vec vectors have lower vocabulary coverage than GloVe. Yet, we observed that concatenating GloVe + Speech2Vec, and further GloVe + Word2Vec + Speech2Vec yields better NLU results: F1-score increased from 0.89 to 0.91 for intent recognition, from 0.96 to 0.97 for slot filling.",
"For multimodal (audio & video) features exploration, performance results of the compared models with varying modality/feature concatenations can be found in Table TABREF4. Since these audio/video features are extracted per utterance (on segmented audio & video clips), we experimented with the utterance-level intent recognition task only, using hierarchical joint learning (H-Joint-2). We investigated the audio-visual feature additions on top of text-only and text+speech embedding models. Adding openSMILE/IS10 features from audio, as well as incorporating intermediate CNN/Inception-ResNet-v2 features from video brought slight improvements to our intent models, reaching 0.92 F1-score. These initial results using feature concatenations may need further explorations, especially for certain intent-types such as stop (audio intensity) or relevant slots such as passenger gestures/gaze (from cabin video) and outside objects (from road video)."
],
"extractive_spans": [],
"free_form_answer": "F1 score increased from 0.89 to 0.92",
"highlighted_evidence": [
"Yet, we observed that concatenating GloVe + Speech2Vec, and further GloVe + Word2Vec + Speech2Vec yields better NLU results: F1-score increased from 0.89 to 0.91 for intent recognition, from 0.96 to 0.97 for slot filling.",
"We investigated the audio-visual feature additions on top of text-only and text+speech embedding models. Adding openSMILE/IS10 features from audio, as well as incorporating intermediate CNN/Inception-ResNet-v2 features from video brought slight improvements to our intent models, reaching 0.92 F1-score."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"7e5419c6b25e7716602c332e8c58d977157a56b0",
"ca287d4be9159ea451deb9671d160167d623b47a"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"We explored leveraging multimodality for the NLU module in the SDS pipeline. As our AMIE in-cabin dataset has video and audio recordings, we investigated 3 modalities for the NLU: text, audio, and video. For text (language) modality, our previous work BIBREF1 presents the details of our best-performing Hierarchical & Joint Bi-LSTM models BIBREF3, BIBREF4, BIBREF5, BIBREF6 (H-Joint-2, see SECREF5) and the results for utterance-level intent recognition and word-level slot filling via transcribed and recognized (ASR output) textual data, using word embeddings (GloVe BIBREF7) as features. This study explores the following multimodal features:"
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"As our AMIE in-cabin dataset has video and audio recordings, we investigated 3 modalities for the NLU: text, audio, and video. "
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"zero",
"zero"
],
"paper_read": [
"no",
"no"
],
"question": [
"By how much is performance improved with multimodality?",
"Is collected multimodal in cabin dataset public?"
],
"question_id": [
"f68508adef6f4bcdc0cc0a3ce9afc9a2b6333cc5",
"5563a3538d311c979c2fb83c1cc9afc66ff6fffc"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"computer vision",
"computer vision"
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Speech Embeddings Experiments: Precision/Recall/F1-scores (%) of NLU Models",
"Table 2: Multimodal (Audio & Video) Features Exploration: Precision/Recall/F1-scores (%) of Intent Recognition",
"Table 3: AMIE In-cabin Dataset Statistics: Intents",
"Table 4: AMIE In-cabin Dataset Statistics: Slots"
],
"file": [
"2-Table1-1.png",
"2-Table2-1.png",
"3-Table3-1.png",
"3-Table4-1.png"
]
} | [
"By how much is performance improved with multimodality?"
] | [
[
"1909.13714-Experimental Results-1",
"1909.13714-Experimental Results-0",
"1909.13714-2-Table1-1.png"
]
] | [
"F1 score increased from 0.89 to 0.92"
] | 97 |
1909.03405 | Symmetric Regularization based BERT for Pair-wise Semantic Reasoning | The ability of semantic reasoning over the sentence pair is essential for many natural language understanding tasks, e.g., natural language inference and machine reading comprehension. A recent significant improvement in these tasks comes from BERT. As reported, the next sentence prediction (NSP) in BERT, which learns the contextual relationship between two sentences, is of great significance for downstream problems with sentence-pair input. Despite the effectiveness of NSP, we suggest that NSP still lacks the essential signal to distinguish between entailment and shallow correlation. To remedy this, we propose to augment the NSP task to a 3-class categorization task, which includes a category for previous sentence prediction (PSP). The involvement of PSP encourages the model to focus on the informative semantics to determine the sentence order, thereby improves the ability of semantic understanding. This simple modification yields remarkable improvement against vanilla BERT. To further incorporate the document-level information, the scope of NSP and PSP is expanded into a broader range, i.e., NSP and PSP also include close but nonsuccessive sentences, the noise of which is mitigated by the label-smoothing technique. Both qualitative and quantitative experimental results demonstrate the effectiveness of the proposed method. Our method consistently improves the performance on the NLI and MRC benchmarks, including the challenging HANS dataset~\cite{hans}, suggesting that the document-level task is still promising for the pre-training. | {
"paragraphs": [
[
"The ability of semantic reasoning is essential for advanced natural language understanding (NLU) systems. Many NLU tasks that take sentence pairs as input, such as natural language inference (NLI) and machine reading comprehension (MRC), heavily rely on the ability of sophisticated semantic reasoning. For instance, the NLI task aims to determine whether the hypothesis sentence (e.g., a woman is sleeping) can be inferred from the premise sentence (e.g., a woman is talking on the phone). This requires the model to read and understand sentence pairs to make the specific semantic inference.",
"Bidirectional Encoder Representations from Transformer (BERT) BIBREF1 has shown strong ability in semantic reasoning. It was recently proposed and obtained impressive results on many tasks, ranging from text classification, natural language inference, and machine reading comprehension. BERT achieves this by employing two objectives in the pre-training, i.e., the masked language modeling (Masked LM) and the next sentence prediction (NSP). Intuitively, the Masked LM task concerns word-level knowledge, and the NSP task captures the global document-level information. The goal of NSP is to identify whether an input sentence is next to another input sentence. From the ablation study BIBREF1, the NSP task is quite useful for the downstream NLI and MRC tasks (e.g., +3.5% absolute gain on the Question NLI (QNLI) BIBREF2 task).",
"Despite its usefulness, we suggest that BERT has not made full use of the document-level knowledge. The sentences in the negative samples used in NSP are randomly drawn from other documents. Therefore, to discriminate against these sentences, BERT is prone to aggregating the shallow semantic, e.g., topic, neglecting context clues useful for detailed reasoning. In other words, the canonical NSP task would encourage the model to recognize the correlation between sentences, rather than obtaining the ability of semantic entailment. This setting weakens the BERT model from learning specific semantic for inference. Another issue that renders NSP less effective is that BERT is order-sensitive. Performance degradation was observed on typical NLI tasks when the order of two input sentences are reversed during the BERT fine-tuning phase. It is reasonable as the NSP task can be roughly analogy to the NLI task when the input comes as (premise, hypothesis), considering the causal order among sentences. However, this identity between NSP and NLI is compromised when the sentences are swapped.",
"Based on these considerations, we propose a simple yet effective method, i.e., introducing a IsPrev category to the classification task, which is a symmetric label of IsNext of NSP. The input of samples with IsPrev is the reverse of those with IsNext label. The advantages of using this previous sentence prediction (PSP) are three folds. (1) Learning the contrast between NSP and PSP forces the model to extract more detailed semantic, thereby the model is more capable of discriminating the correlation and entailment. (2) NSP and PSP are symmetric. This symmetric regularization alleviates the influence of the order of the input pair. (3) Empirical results indicate that our method is beneficial for all the semantic reasoning tasks that take sentence pair as input.",
"In addition, to further incorporating the document-level knowledge, NSP and PSP are extended with non-successive sentences, where the label smoothing technique is adopted. The proposed method yields a considerable improvement in our experiments. We evaluate the ability of semantic reasoning on standard NLI and MRC benchmarks, including the challenging HANS dataset BIBREF0. Analytical work on the HANS dataset provides a more comprehensible perspective towards the proposed method. Furthermore, the results on the Chinese benchmarks are provided to demonstrate its generality.",
"In summary, this work makes the following contributions:",
"The supervision signal from the original NSP task is weak for semantic inference. Therefore, a novel method is proposed to remedy the asymmetric issue and enhance the reasoning ability.",
"Both empirical and analytical evaluations are provided on the NLI and MRC datasets, which verifies the effectiveness of using more document-level knowledge."
],
[
"Many NLU tasks seek to model the relationship between two sentences. Semantic reasoning is performed on the sentence pair for the task-specific inference. Pair-wise semantic reasoning tasks have drawn a lot of attention from the NLP community as they largely require the comprehension ability of the learning systems. Recently, the significant improvement on these benchmarks comes from the pre-training models, e.g., BERT, StructBERT BIBREF3, ERNIE BIBREF4, BIBREF5, RoBERTa BIBREF6 and XLNet BIBREF7. These models learn from unsupervised/self-supervised objectives and perform excellently in the downstream tasks. Among these models, BERT adopts NSP as one of the objectives in the pre-training and shows that the NSP task has a positive effect on the NLI and MRC tasks. Although the primary study of XLNet and RoBERTa suggests that NSP is ineffective when the model is trained with a large sequence length of 512, the effect of NSP on the NLI problems should still be emphasized. The inefficiency of NSP is likely because the expected context length will be halved for Masked LM when taking a sentence pair as the input. The models derived from BERT, e.g., StructBERT and ERNIE 1.0/2.0, aim to incorporating more knowledge by elaborating pre-training objectives. This work aims to enhance the NSP task and verifies whether document-level information is helpful for the pre-training. To probe whether our method achieves a better regularization ability, our approach is also evaluated on the HANS BIBREF0 dataset, which contains hard data samples constructed by three heuristics. Previous advanced models such as BERT fail on the HANS dataset, and the test accuracy can barely exceed 0% in the subset of test examples."
],
[
"In recent years, many unsupervised pre-training methods have been proposed in the NLP fields to extract knowledge among sentences DBLP:conf/nips/KirosZSZUTF15,DBLP:conf/emnlp/ConneauKSBB17,DBLP:conf/iclr/LogeswaranL18,DBLP:journals/corr/abs-1903-09424. The prediction of surrounding sentences endows the model with the ability to model the sentence-level coherence. Skip-Thought BIBREF8 consists of an encoder and two decoders. When a sentence is given and encoded into a vector by the encoder, the decoders are trained to predict the next sentence and the previous sentence. The goal is to obtain a better sentence representation that is useful for reconstructing the surrounding context. Considering that the estimation of the likelihood of sequences is computationally expensive and time-consuming, the Quick-Thought method BIBREF9 simplifies this in a manner similar to sampled softmax BIBREF10, which classifies the input sentences between surrounding sentences and the other. Note that Quick-Thought does not distinguish between the previous and next sentence as it is functionally rotation invariant. However, BERT is order-dependent, and the discrimination can provide more supervision signal for semantic learning. InferSent BIBREF11 instead pre-trains the model in a manner of supervised learning. It uses a large-scale NLI dataset as the pre-training task to learn the sentence representation. In our work, we focus on designing a more effective document-level objective, extended from the NSP task. The proposed method will be described in the following section and validated by providing extensive experimental results in the experiment part."
],
[
"Our method follows the same input format and the model architecture with original BERT. The proposed method solely concerns the NSP task. The NSP task is a binary classification task, which takes two sentences (A and B) as input and determines whether B is the next sentence of A. Although it has been proven to be very effective for BERT, there are two major deficiencies. (1) Discrimination between IsNext and DiffDoc (the label of the sentences drawn from different documents via negative sampling) is semantically shallow as the signal of sentence order is absent. The correlation between two successive sentences could be obvious, due to, for example, lexical overlap or the conjunction used at the beginning of the second sentence. As reported BIBREF1, the final pre-trained model is able to achieve 97%-98% accuracy on the NSP task. (2) BERT is order-sensitive, i.e., $f_{\\text{BERT}}( \\texttt {A}, \\texttt {B}) \\ne f_{\\text{BERT}}(\\texttt {B}, \\texttt {A})$, while NSP is uni-directional. When the order of the input NLI pair is reversed, the performance will degrade. For instance, the accuracy decreases by about 0.5% on MNLI BIBREF12 and 0.4% on QNLI after swapping the sentences in our experiments .",
"Motivated by these problems, we propose to extend the NSP task with previous sentence prediction (PSP). Despite its simplicity, empirical results show that this is beneficial for downstream tasks, including both NLI and MRC tasks. To further incorporate the document-level information, the scope is also expanded to include more surrounding sentences, not just the adjacent. The method is briefly illustrated in Fig. FIGREF6."
],
[
"Learning to recognize the previous sentence enables the model to capture more compact context information. One would argue that IsPrev (the label of PSP) is redundant as it plays a similar role of IsNext (the label of NSP). In fact, Quick-Thought uses the sampled softmax to approximate the sentence likelihood estimation of Skip-Thought, and it actually does not differentiate between the previous and next sentences. However, we suggest the order discrimination is essential for BERT pre-training. Quick-Thought aims at extracting sentence embedding, and it uses a rotating symmetric function, which makes IsPrev redundant in Quick-Thought. In contrast, BERT is order-sensitive, and learning the symmetric regularization is rather necessary. Another advantage of PSP is to enhance document-level supervision. In order to tell the difference between NSP and PSP, the model has to extract the detailed semantic for inference."
],
[
"Beyond NSP and PSP, which enable the model to learn the short-term dependency between sentences, we also propose to expand the scope of discrimination task to further incorporate the document-level information.",
"Specifically, we also include the in-adjacent sentences in the sentence-pair classification task. The in-adjacent sentences next to the IsPrev and IsNext sentences are sampled, labeled as IsPrevInadj and IsNextInadj (cf. the bottom of Fig. FIGREF6). Note that these in-adjacent sentences will introduce much more training noise to the model. Therefore, the label smoothing technique is adopted to reduce the noise of these additional samples. It achieves this by relaxing our confidence on the labels, e.g., transforming the target probability from (1.0, 0.0) to (0.8, 0.2) in a binary classification problem.",
"In summary, when A is given, the pre-training example for each label is constructed as follows:",
"IsNext: Choosing the adjacent following sentence as B.",
"IsPrev: Choosing the adjacent previous sentence as B.",
"IsNextInadj: Choosing the in-adjacent following sentence as B. There is a sentence between A and B.",
"IsPrevInadj: Choosing the in-adjacent previous sentence as B. There is a sentence between A and B.",
"DiffDoc: Drawing B randomly from a different document."
],
[
"This section gives detailed experiment settings. The method is evaluated on the BERTbase model, which has 12 layers, 12 self-attention heads with a hidden size of 768.",
"To accelerate the training speed, two-phase training BIBREF1 is adopted. The first phase uses a maximal sentence length of 128, and 512 for the second phase. The numbers of training steps of two phases are 50K and 40K for the BERTBase model. We used AdamW BIBREF13 optimizer with a learning rate of 1e-4, a $\\beta _1$ of 0.9, a $\\beta _2$ of 0.999 and a L2 weight decay rate of $0.01$. The first 10% of the total steps are used for learning rate warming up, followed by the linear decay schema. We used a dropout probability of 0.1 on all layers. The data used for pre-training is the same as BERT, i.e., English Wikipedia (2500M words) and BookCorpus (800M words) BIBREF14. For the Masked LM task, we followed the same masking rate and settings as in BERT.",
"We explore three method settings for comparison.",
"BERT-PN: The NSP task in BERT is replaced by a 3-class task with IsNext, IsPrev and DiffDoc. The label distribution is 1:1:1.",
"BERT-PN5cls: The NSP task in BERT is replaced by a 5-class task with two additional labels IsNextInadj, IsPrevInadj. The label distribution is 1:1:1:1:1.",
"BERT-PNsmth: It uses the same data with BERT-PN5cls, except that the IsPrevInadj (IsNextInadj) label is mapped to IsPrev (IsNext) with a label smoothing factor of 0.8.",
"BERT-PN is used to verify the feasibility of PSP. The comparison with BERT-PN5cls illustrates whether more document-level information helps. BERT-PNsmth, which is the label-smoothed version of BERT-PN5cls, is used to compare with BERT-PN5cls to see whether the noise reduction is necessary.",
"In the following, we first show that BERT is order-sensitive and the use of PSP remedies this problem. Then we provide experimental results on the NLI and MRC tasks to verify the effectiveness of the proposed method. At last, the proposed method is evaluated on several Chinese datasets."
],
[
"NSP in the pre-training is useful for NLI and MRC task BIBREF1. However, we suggested that BERT trained with NSP is order-sensitive, i.e., the performance of BERT depends on the order of the input sentence pair. To verify our assumption, a primary experiment was conducted. The order of the input pair of NLI samples is reversed in the fine-tuning phase, and other hyper-parameters and settings keep the same with the BERT paper. Table TABREF19 shows the accuracy on the validation set of the MNLI and QNLI datasets. For the BERTBase model, when the sentences are swapped, the accuracy decreases by 0.5% on the MNLI task and 0.4% on the QNLI task. These results confirm that BERT trained with NSP only is indeed affected by the input order. This phenomenon motivates us to make the NSP task symmetric. The results of BERT-PN verify that BERT-PN is order-invariant. When the input order is reversed, the performance of BERT-PN remains stable. These results indicate that our method is able to remedy the order-sensitivity problem."
],
[
"A popular benchmark for evaluation of language understanding is GLUE BIBREF2, which is a collection of three NLI tasks (MNLI, QNLI and RTE), three semantic textual similarity (STS) tasks (QQP, STS-B and MRPC), two text classification (TC) tasks (SST-2 and CoLA). Although the method is motivated for pair-wise reasoning, the results of other problems are also listed.",
"Our implementation follows the same way that BERT performs in these tasks. The fine-tuning was conducted for 3 epochs for all the tasks, with a learning rate of 2e-5. The predictions were obtained by evaluating the training checkpoint with the best validation performance.",
"Table TABREF21 illustrates the experimental results, showing that our method is beneficial for all of NLI tasks. The improvement on the RTE dataset is significant, i.e., 4% absolute gain over the BERTBase. Besides NLI, our model also performs better than BERTBase in the STS task. The STS tasks are semantically similar to the NLI tasks, and hence able to take advantage of PSP as well. Actually, the proposed method has a positive effect whenever the input is a sentence pair. The improvements suggest that the PSP task encourages the model to learn more detailed semantics in the pre-training, which improves the model on the downstream learning tasks. Moreover, our method is surprisingly able to achieve slightly better results in the single-sentence problem. The improvement should be attributed to better semantic representation.",
"When comparing between PN and PN5cls, PN5cls achieves better results than PN. This indicates that including a broader range of the context is effective for improving inference ability. Considering that the representation of IsNext and IsNextInadj should be coherent, we propose BERTBase-PNsmth to mitigate this problem. PNsmth further improves the performance and obtains an averaged score of 81.0."
],
[
"Although BERT has shown its effectiveness in the NLI tasks. BIBREF0 pointed out that BERT is still vulnerable in the NLI task as it is prone to adopting fallible heuristics. Therefore, they released a dataset, called The Heuristic Analysis for NLI Systems (HANS), to probe whether the model learns inappropriate inductive bias from the training set. It is constructed by three heuristics, i.e., lexical overlap heuristic, sub-sequence heuristic, and constituent heuristic. The first heuristic assumes that a premise entails all hypotheses constructed from words in the premise, the second assumes that a premise entails all of its contiguous sub-sequences and the third assumes that a premise entails all complete sub-trees in its parse tree. BERT and other advanced models fail on this dataset and barely exceeds 0% accuracy in most cases BIBREF0.",
"Fig. FIGREF23 illustrates the accuracy of BERTBase and BERTBase-PNsmth on the HANS dataset. The evaluation is made upon the model trained on the MNLI dataset and the predicted neutral and contradiction labels are mapped into non-entailment. The BERTBase-PNsmth evidently outperforms the BERTBase with the non-entailment examples. For the non-entailment samples constructed using the lexical overlap heuristic, our model achieves 160% relative improvement over the BERTBase model. Some samples are constructed by swapping the entities in the sentence (e.g., The doctor saw the lawyer $\\nrightarrow $ The lawyer saw the doctor) and our method outperforms BERTBase by 20% in accuracy. We suggest that the Masked LM task can hardly model the relationship between two entities and NSP only is too semantically shallow to capture the precise meaning. However, the discrimination between NSP and PSP enhances the model to realize the role of entities in a given sentence. For example, to determine that A (X is beautiful) rather than $\\bar{\\texttt {A}}$ (Y is beautiful) is the previous sentence of B (Y loves X), the model have to recognize the relationship between X and Y. In contrast, when PSP is absent, NSP can be probably inferred by learning the occurrence between beautiful and loves, regardless of the sentence structure. The detailed performance of the proposed method on the HANS dataset is illustrated in Fig. FIGREF24. The definition of each heuristic rules can be found in BIBREF0."
],
[
"We also evaluate our method on the MRC tasks. The Stanford Question Answering Dataset (SQuAD v1.1) is a question answering (QA) dataset, which consists of 100K samples BIBREF15. Each data sample has a question and a corresponding Wikipedia passage that contains the answer. The goal is to extract the answer from the passage for the given question.",
"In the fine-tuning procedure, we follow the exact way the BERT performed. The output vectors are used to compute the score of tokens being start and end of the answer span. The valid span that has the maximum score is selected as the prediction. And similarly, the fine-tuning training was performed for 3 epochs with a learning rate of 3e-5.",
"Table TABREF26 demonstrates the results on the SQuAD v1.1 dataset. The comparison between BERTBase-PN and BERTBase indicates that the inclusion of the PSP subtask is beneficial (2.4% absolute improvement). When using BERTBase-PNsmth, another 0.3% increase in EM can be obtained. The experimental results on the SQuAD v2.0 BIBREF16 are also shown in Table. TABREF26. The SQuAD v2.0 differs from SQuAD v1.1 by allowing the question-paragraph pairs that have no answer. For SQuAD v2.0, our method also achieved about 4% absolute improvement in both EM and F1 against BERTBase."
],
[
"The ReAding Comprehension from Examinations (RACE) dataset BIBREF17 consists of 100K questions taken from English exams, and the answers are generated by human experts. This is one of the most challenging MRC datasets that require sophisticated reasoning.",
"In our implementation, the question, document, and option are concatenated as a single sequence, separated by [SEP] token. And each part is truncated by a maximal length of 40/432/40, respectively. The model computes for a concatenation a scalar as the score, which is then used in a softmax layer for the final prediction. The fine-tuning was conducted for 5 epochs, with a batch size of 32 and a learning rate of 5e-5. As shown in Table TABREF28, the proposed method significantly improve the performance on the RACE dataset. BERTBase-PN obtains 2.6% accuracy improvement, and BERTBase-PN5cls further brings 0.4% absolute gain.",
"The comparisons on the SQuAD v1.1, SQuAD v2.0, and RACE dataset demonstrate that the involvement of additional sentence and discourse information is not only beneficial for the NLI task but also the MRC task. This is reasonable as these tasks heavily rely on the global semantic understanding and sophisticated reasoning among sentences. And this ability can be effectively enhanced by our method."
],
[
"The experiments are also conducted on Chinese NLP tasks:",
"XNLI BIBREF19 a multi-lingual dataset. The data sample in XNLI is a sentence pair annotated with textual entailment. The Chinese part is used.",
"LCQMC BIBREF20 is a dataset for sequence matching. A binary label is annotated for a sentence pair in the dataset to indicate whether these two sentences have the same intention.",
"NLPCC-DBQA BIBREF21 formulates the domain-based question answering as a binary classification task. Each data sample is a question-sentence pair. The goal is to identify whether the sentence contains the answer to the question.",
"CMRC-2018 is the Chinese Machine Reading Comprehension dataset. Similar to SQuAD, the system needs to extract fragments from the text as the answer.",
"DRCD BIBREF22 is also a Chinese MRC data set. The data follows the format of SQuAD.",
"For Chinese NLP tasks, we pre-train the model using Chinese corpus. We collected textual data (10879M tokens in total) from the website, consisting of Hudong Baike data (6084M tokens) , Zhihu data(465M tokens) , Sohu News(3937M tokens) and Wikipedia data (393M tokens).",
"For the first 3 Chinese tasks, we follow the settings as in ERNIE BIBREF4. The experimental results are given in Table TABREF29. The proposed method is compared with four models, i.e., BERTBase BIBREF1, BERTBase with whole word masking BIBREF18, ERNIE BIBREF4 and ERNIE 2.0 BIBREF5. Our method achieves comparable or even better results against ERNIE 2.0 BIBREF5. Note that the Chinese ERNIE 2.0 is equipped with 5 different objectives and it uses more training data (14988M tokens in total) than ours. The results indicate that the proposed method is quite effective for the pair-wise semantic reasoning as simply including PSP can achieve the results on par with multiple objectives.",
"The results of CMRC-2018 and DRCD datasets are given in Table TABREF30. Since the CMRC-2018 competition does not release the test set, the comparison on the test set is absent. Our results are obtained using the open-sourced code of BERT-wwm . We keep the hyper-parameters the same with that in ERNIE BIBREF4, except that the batch size is 12 instead of 64 due to the memory limit. Under this setting, we achieved similar results of BERTBase in the BERT-wwm paper BIBREF18. However, this is worse than the results of BERTBase reported in the ERNIE 2.0 paper BIBREF5 by about 1% in F1. This suggests that our results are currently incomparable with ERNIE 2.0. Overall, the results in Table TABREF30 illustrate that our method is also effective for the Chinese QA tasks."
],
[
"This paper aims to enrich the NSP task to provide more document-level information in the pre-training. Motivated by the in-symmetric property of NSP, we propose to differentiate between different sentence orders by including PSP. Despite the simplicity, extensive experiments demonstrate that the model obtains a better ability in pair-wise semantic reasoning. Our work suggests that the document-level objective is effective, at least for the BERTbase model. In the future, we will investigate the way to take advantages of both large-scale training and our method."
]
],
"section_name": [
"Introduction",
"Related Work ::: Pair-wise semantic reasoning",
"Related Work ::: Unsupervised learning from document",
"Method",
"Method ::: Previous Sentence Prediction",
"Method ::: Gathering More Document-level Information",
"Experiment Settings",
"Order-invariant with PSP",
"Results of NLI Tasks ::: GLUE",
"Results of NLI Tasks ::: HANS",
"Results of MRC Tasks ::: SQuAD v1.1 and v2.0",
"Results of MRC Tasks ::: RACE",
"Results of Chinese NLP Tasks",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"bdd8f13b89fc1cee946a27ebf757a5cfa3adee65",
"e7231e31c695f2f6eb91cecb1461453711783208"
],
"answer": [
{
"evidence": [
"Table TABREF21 illustrates the experimental results, showing that our method is beneficial for all of NLI tasks. The improvement on the RTE dataset is significant, i.e., 4% absolute gain over the BERTBase. Besides NLI, our model also performs better than BERTBase in the STS task. The STS tasks are semantically similar to the NLI tasks, and hence able to take advantage of PSP as well. Actually, the proposed method has a positive effect whenever the input is a sentence pair. The improvements suggest that the PSP task encourages the model to learn more detailed semantics in the pre-training, which improves the model on the downstream learning tasks. Moreover, our method is surprisingly able to achieve slightly better results in the single-sentence problem. The improvement should be attributed to better semantic representation.",
"FLOAT SELECTED: Table 2: Results on the test set of GLUE benchmark. The performance was obtained by the official evaluation server. The number below each task is the number of training examples. The ”Average” column follows the setting in the BERT paper, which excludes the problematic WNLI task. F1 scores are reported for QQP and MRPC, Spearman correlations are reported for STS-B, and accuracy scores are reported for the other tasks. All the listed models are trained on the Wikipedia and the Book Corpus datasets. The results are the average of 5 runs."
],
"extractive_spans": [
" improvement on the RTE dataset is significant, i.e., 4% absolute gain over the BERTBase"
],
"free_form_answer": "",
"highlighted_evidence": [
"Table TABREF21 illustrates the experimental results, showing that our method is beneficial for all of NLI tasks. The improvement on the RTE dataset is significant, i.e., 4% absolute gain over the BERTBase.",
"FLOAT SELECTED: Table 2: Results on the test set of GLUE benchmark. The performance was obtained by the official evaluation server. The number below each task is the number of training examples. The ”Average” column follows the setting in the BERT paper, which excludes the problematic WNLI task. F1 scores are reported for QQP and MRPC, Spearman correlations are reported for STS-B, and accuracy scores are reported for the other tasks. All the listed models are trained on the Wikipedia and the Book Corpus datasets. The results are the average of 5 runs."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 2: Results on the test set of GLUE benchmark. The performance was obtained by the official evaluation server. The number below each task is the number of training examples. The ”Average” column follows the setting in the BERT paper, which excludes the problematic WNLI task. F1 scores are reported for QQP and MRPC, Spearman correlations are reported for STS-B, and accuracy scores are reported for the other tasks. All the listed models are trained on the Wikipedia and the Book Corpus datasets. The results are the average of 5 runs."
],
"extractive_spans": [],
"free_form_answer": "The average score improved by 1.4 points over the previous best result.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Results on the test set of GLUE benchmark. The performance was obtained by the official evaluation server. The number below each task is the number of training examples. The ”Average” column follows the setting in the BERT paper, which excludes the problematic WNLI task. F1 scores are reported for QQP and MRPC, Spearman correlations are reported for STS-B, and accuracy scores are reported for the other tasks. All the listed models are trained on the Wikipedia and the Book Corpus datasets. The results are the average of 5 runs."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"84b7f522c915feb348946ddbd7409cadc8b2c1cb",
"a56b4a68412b5dfa9c5cd5321ad8adff33c15ba2"
],
"answer": [
{
"evidence": [
"This section gives detailed experiment settings. The method is evaluated on the BERTbase model, which has 12 layers, 12 self-attention heads with a hidden size of 768.",
"To accelerate the training speed, two-phase training BIBREF1 is adopted. The first phase uses a maximal sentence length of 128, and 512 for the second phase. The numbers of training steps of two phases are 50K and 40K for the BERTBase model. We used AdamW BIBREF13 optimizer with a learning rate of 1e-4, a $\\beta _1$ of 0.9, a $\\beta _2$ of 0.999 and a L2 weight decay rate of $0.01$. The first 10% of the total steps are used for learning rate warming up, followed by the linear decay schema. We used a dropout probability of 0.1 on all layers. The data used for pre-training is the same as BERT, i.e., English Wikipedia (2500M words) and BookCorpus (800M words) BIBREF14. For the Masked LM task, we followed the same masking rate and settings as in BERT."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The method is evaluated on the BERTbase model, which has 12 layers, 12 self-attention heads with a hidden size of 768.\n\nTo accelerate the training speed, two-phase training BIBREF1 is adopted. The first phase uses a maximal sentence length of 128, and 512 for the second phase. The numbers of training steps of two phases are 50K and 40K for the BERTBase model. We used AdamW BIBREF13 optimizer with a learning rate of 1e-4, a $\\beta _1$ of 0.9, a $\\beta _2$ of 0.999 and a L2 weight decay rate of $0.01$. The first 10% of the total steps are used for learning rate warming up, followed by the linear decay schema. We used a dropout probability of 0.1 on all layers. The data used for pre-training is the same as BERT, i.e., English Wikipedia (2500M words) and BookCorpus (800M words) BIBREF14."
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"This section gives detailed experiment settings. The method is evaluated on the BERTbase model, which has 12 layers, 12 self-attention heads with a hidden size of 768.",
"To accelerate the training speed, two-phase training BIBREF1 is adopted. The first phase uses a maximal sentence length of 128, and 512 for the second phase. The numbers of training steps of two phases are 50K and 40K for the BERTBase model. We used AdamW BIBREF13 optimizer with a learning rate of 1e-4, a $\\beta _1$ of 0.9, a $\\beta _2$ of 0.999 and a L2 weight decay rate of $0.01$. The first 10% of the total steps are used for learning rate warming up, followed by the linear decay schema. We used a dropout probability of 0.1 on all layers. The data used for pre-training is the same as BERT, i.e., English Wikipedia (2500M words) and BookCorpus (800M words) BIBREF14. For the Masked LM task, we followed the same masking rate and settings as in BERT."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"This section gives detailed experiment settings. The method is evaluated on the BERTbase model, which has 12 layers, 12 self-attention heads with a hidden size of 768.\n\nTo accelerate the training speed, two-phase training BIBREF1 is adopted. The first phase uses a maximal sentence length of 128, and 512 for the second phase. The numbers of training steps of two phases are 50K and 40K for the BERTBase model. We used AdamW BIBREF13 optimizer with a learning rate of 1e-4, a $\\beta _1$ of 0.9, a $\\beta _2$ of 0.999 and a L2 weight decay rate of $0.01$. The first 10% of the total steps are used for learning rate warming up, followed by the linear decay schema. We used a dropout probability of 0.1 on all layers. The data used for pre-training is the same as BERT, i.e., English Wikipedia (2500M words) and BookCorpus (800M words) BIBREF14. For the Masked LM task, we followed the same masking rate and settings as in BERT."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"9072dd02640c35084d69eed498fcf48f61a5803a",
"d48e894c134141ef3f35164b3ea477ed5195690c"
],
"answer": [
{
"evidence": [
"This section gives detailed experiment settings. The method is evaluated on the BERTbase model, which has 12 layers, 12 self-attention heads with a hidden size of 768."
],
"extractive_spans": [
"BERTbase"
],
"free_form_answer": "",
"highlighted_evidence": [
" The method is evaluated on the BERTbase model, which has 12 layers, 12 self-attention heads with a hidden size of 768."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"This section gives detailed experiment settings. The method is evaluated on the BERTbase model, which has 12 layers, 12 self-attention heads with a hidden size of 768."
],
"extractive_spans": [
"BERTbase"
],
"free_form_answer": "",
"highlighted_evidence": [
"The method is evaluated on the BERTbase model, which has 12 layers, 12 self-attention heads with a hidden size of 768."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"How much is performance improved on NLI?",
"Do they train their model starting from a checkpoint?",
"What BERT model do they test?"
],
"question_id": [
"bdc91d1283a82226aeeb7a2f79dbbc57d3e84a1a",
"7b4fb6da74e6bd1baea556788a02969134cf0800",
"bc31a3d2f7c608df8c019a64d64cb0ccc5669210"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"BERT",
"BERT",
"BERT"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: An illustration of the proposed method. B denotes the second input sentence. (1) Top: original NSP task. (2) Middle: 3-class categorization task with DiffDoc, IsNext and IsPrev. (3) Bottom: 3-class task, but with a wider scope of NSP and PSP. The in-adjacent sentences are assisted with a label smoothing technique to reduce the noise.",
"Table 1: The accuracy of BERT and BERT-PN on the validation set of the MNLI and QNLI dataset. P&H denotes that the input is (premise, hypothesis), which is the order used in BERT. The reported accuracy is the average after 5 runs.",
"Table 2: Results on the test set of GLUE benchmark. The performance was obtained by the official evaluation server. The number below each task is the number of training examples. The ”Average” column follows the setting in the BERT paper, which excludes the problematic WNLI task. F1 scores are reported for QQP and MRPC, Spearman correlations are reported for STS-B, and accuracy scores are reported for the other tasks. All the listed models are trained on the Wikipedia and the Book Corpus datasets. The results are the average of 5 runs.",
"Table 3: The performance of various BERT models finetuned on the SQuAD v1.1 and v2.0 dataset. EM means the percentage of exact match. The results of RoBERTa is the DOC-SENTENCES version retrieved from Table 2 in (Liu et al. 2019).",
"Figure 2: The accuracy on evaluation set of HANS. It has six sub-components, each defined by its correct label and the heuristic it addresses.",
"Figure 3: Performance on thirty detailed sub-components of the HANS evaluation set (30K instances). Each sub-component is defined by three heuristics, i.e., Lexical overlap, Sub-sequence and Constituent. For instance, in prefix “ln” , “l” denotes lexical overlap heuristic, “n” denotes the non-entailment label. The suffix means a specific syntactic rule, e.g., subject/object swap means in the hypothesis sentence, the subject and the object are swapped.",
"Table 4: The experimental results on test set of the RACE dataset. The results of RoBERTa is the DOC-SENTENCES version retrieved from Table 2 in (Liu et al. 2019). All the listed models are trained on the Wikipedia and the Book Corpus datasets.",
"Table 5: Comparison on the Chinese NLP tasks. All the models are of “base” size. The results of BERT, BERT-wwm are retrieved from literature (Cui et al. 2019), except the results of NLPCC-DBQA which is from ERNIE 2.0 (2019b). The results of ERNIE, ERNIE 2.0 are retrieved from literature (Sun et al. 2019a; 2019b). The best result and the average (in bracket) of 5 runs are reported. The number below the model denotes the number of tokens in the pre-training data.",
"Table 6: Results on the CMRC-2018 and DRCD datasets. Three BERTBase models are reported from our reproduction, BERTwwm paper (Cui et al. 2019) and ERNIE 2.0 paper (Sun et al. 2019b), respectively. The results of BERTBase-wwm are obtained from the paper (Cui et al. 2019). EM denotes the percentage of exact matching. The best result and the average (in bracket) of 5 runs are reported."
],
"file": [
"3-Figure1-1.png",
"4-Table1-1.png",
"5-Table2-1.png",
"5-Table3-1.png",
"5-Figure2-1.png",
"6-Figure3-1.png",
"6-Table4-1.png",
"7-Table5-1.png",
"7-Table6-1.png"
]
} | [
"How much is performance improved on NLI?"
] | [
[
"1909.03405-5-Table2-1.png",
"1909.03405-Results of NLI Tasks ::: GLUE-2"
]
] | [
"The average score improved by 1.4 points over the previous best result."
] | 99 |
1801.07887 | Impact of Batch Size on Stopping Active Learning for Text Classification | When using active learning, smaller batch sizes are typically more efficient from a learning efficiency perspective. However, in practice due to speed and human annotator considerations, the use of larger batch sizes is necessary. While past work has shown that larger batch sizes decrease learning efficiency from a learning curve perspective, it remains an open question how batch size impacts methods for stopping active learning. We find that large batch sizes degrade the performance of a leading stopping method over and above the degradation that results from reduced learning efficiency. We analyze this degradation and find that it can be mitigated by changing the window size parameter of how many past iterations of learning are taken into account when making the stopping decision. We find that when using larger batch sizes, stopping methods are more effective when smaller window sizes are used. | {
"paragraphs": [
[
"The use of active learning has received a lot of interest for reducing annotation costs for text classification BIBREF0 , BIBREF1 , BIBREF2 .",
"Active learning sharply increases the performance of iteratively trained machine learning models by selectively determining which unlabeled samples should be annotated. The number of samples that are selected for annotation at each iteration of active learning is called the batch size.",
"An important aspect of the active learning process is when to stop the active learning process. Stopping methods enable the potential benefits of active learning to be achieved in practice. Without stopping methods, the active learning process would continue until all annotations have been labeled, defeating the purpose of using active learning. Accordingly, there has been a lot of interest in the development of active learning stopping methods BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 .",
"Another important aspect of the active learning process is what batch size to use. Previous work has shown that using smaller batch sizes leads to greater learning efficiency BIBREF2 , BIBREF7 . There is a tension between using smaller batch sizes to optimize learning efficiency and using larger batch sizes to optimize development speed and ease of annotation. We analyze how batch size affects a leading stopping method and how stopping method parameters can be changed to optimize performance depending on the batch size.",
"We evaluate the effect batch size has on active learning stopping methods for text classification. We use the publicly available 20Newsgroups dataset in our experiments.",
"For our base learner, we use the implementation of a Support Vector Machine from the scikit-learn Python library. For our sampling algorithm, we use the closest-to-hyperplane algorithm BIBREF2 , which has been shown in recent work to compare favorably with other sampling algorithms BIBREF8 . We use a binary bag of words representation and only consider words that show up in the dataset more than three times. We use a stop word list to remove common English words.",
"For analyzing the impact of batch size on stopping methods, we use a method that will stop at the first training iteration that is within a specified percentage of the maximum achievable performance. We denote this method as the Oracle Method, and we will set the percentage to 99 and denote this as Oracle-99. We set the percentage to 99 because it is typical for leading stopping methods to be able to achieve this level of performance (see Table 1 in BIBREF4 ). Although the Oracle Method cannot be used in practice, it is useful for contextualizing the stopping results of practical stopping methods."
],
[
"We considered different batch sizes in our experiments, based on percentages of the entire set of training data. The results for batch sizes corresponding to 1%, 5%, and 10% of the training data for the 20Newsgroups dataset are summarized in Table~ SECREF4 ."
],
[
"Looking at Table~ SECREF4 , one can see that Oracle-99 needs more annotations with larger batch percents to reach approximately the same F-Measure as with smaller batch percents.",
"These results are consistent with past findings that learning efficiency is decreased with larger batch sizes BIBREF2 , BIBREF7 . However, an open question is whether changing the parameters associated with actual stopping methods can make them experience less degradation in performance when larger batch sizes are used. In particular, an important parameter of stopping methods is the window size of previous iterations to consider. The next subsection shows how decreasing the window size parameter can help to reduce the degradation in performance that stopping methods experience with larger batch sizes."
],
[
"We denote the stopping method published in BIBREF4 as BV2009. This stopping method will stop the active learning process if the mean of the three previous kappa agreement values between consecutive models is above a threshold. For larger batch percents, note that BV2009 stops later than the optimal Oracle Method point.",
"We ran BV2009 with smaller window sizes for each of our different batch sizes. Our results are summarized for a window size of one in the row ``BV2009 (Window Size = 1)'' in Table~ SECREF4 . When using a window size of one, BV2009 is able to stop with a smaller number of annotations than when using a window size of three. This is done without losing much F-Measure. The next subsection provides an explanation as to why smaller window sizes are more effective than larger window sizes when larger batch sizes are used."
],
[
"We set INLINEFORM0 to be the window size that the user has defined. Kappa is an agreement metric between two models. Therefore, BV2009 needs INLINEFORM1 models to be generated before it begins to check if the average is above the threshold. This does not necessarily mean it stops after INLINEFORM2 models have been generated. Rather, it represents the first point in the active learning process at which BV2009 even has a chance to stop.",
"When using larger batch percents, fewer models are generated than when using smaller batch percents. This gives any stopping method less points to test whether or not to stop. We also note that kappa agreement scores are generally low between the first few models trained. This, combined with fewer points to stop at, causes BV2009 to stop somewhat sub-optimally when using very large batch percents. Usage of very large batch sizes, such as 10% of the data, is not common so sub-optimal performance of stopping methods in those situations is not a major problem."
],
[
"Active learning has the potential to significantly reduce annotation costs. Two important considerations in the active learning process are when to stop the iterative process of asking for more labeled data and how large of a batch size to use when asking for additional labels during each iteration. We found that stopping methods degrade in performance when larger batch sizes are used. The degradation in performance is larger than the amount that can be explained due to the degradation in learning efficiency that results from using larger batch sizes. An important parameter used by stopping methods is what window size of earlier iterations to consider in making the stopping decision. Our results indicate that making the window size smaller helps to mitigate the degradation in stopping method performance that occurs with larger batch sizes."
],
[
"This work was supported in part by The College of New Jersey Support of Scholarly Activities (SOSA) program, by The College of New Jersey Mentored Undergraduate Summer Experience (MUSE) program, and by usage of The College of New Jersey High Performance Computing System."
]
],
"section_name": [
"Introduction",
"Results",
"Oracle Results",
"Comparing BV2009 with the Oracle Method",
"BV2009 Window Size Discussion",
"Conclusion",
"Acknowledgment"
]
} | {
"answers": [
{
"annotation_id": [
"85cb86761dd46037ea858600089c6175d6edb18d",
"8a48a39a57127f6758d3b889383b3f639d55af1b"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"We evaluate the effect batch size has on active learning stopping methods for text classification. We use the publicly available 20Newsgroups dataset in our experiments."
],
"extractive_spans": [
"text classification"
],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluate the effect batch size has on active learning stopping methods for text classification."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"8556c790015d295b32d544b13c995bd3b7abfa03",
"c393ee094d7644db56dd8d0b9b733c54b56b66b7"
],
"answer": [
{
"evidence": [
"Active learning sharply increases the performance of iteratively trained machine learning models by selectively determining which unlabeled samples should be annotated. The number of samples that are selected for annotation at each iteration of active learning is called the batch size."
],
"extractive_spans": [],
"free_form_answer": "A process of training a model when selected unlabeled samples are annotated on each iteration.",
"highlighted_evidence": [
"Active learning sharply increases the performance of iteratively trained machine learning models by selectively determining which unlabeled samples should be annotated. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Active learning sharply increases the performance of iteratively trained machine learning models by selectively determining which unlabeled samples should be annotated. The number of samples that are selected for annotation at each iteration of active learning is called the batch size."
],
"extractive_spans": [],
"free_form_answer": "Active learning is a process that selectively determines which unlabeled samples for a machine learning model should be annotated.",
"highlighted_evidence": [
"Active learning sharply increases the performance of iteratively trained machine learning models by selectively determining which unlabeled samples should be annotated. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"five",
"five"
],
"paper_read": [
"no",
"no"
],
"question": [
"What downstream tasks are evaluated?",
"What is active learning?"
],
"question_id": [
"f67b9bda14ec70feba2e0d10c400b2b2025a0a6a",
"1cfed6b0c9b5a079a51166209649a987e7553e4e"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"TABLE I STOPPING METHOD RESULTS ON 20NEWSGROUPS FOR DIFFERENT BATCH SIZES USING VARIOUS WINDOW SIZES. THE TOP NUMBER IN EACH ROW SHOWS THE NUMBER OF ANNOTATIONS AT THE STOPPING POINT AND THE BOTTOM NUMBER SHOWS THE F-MEASURE AT THE STOPPING POINT."
],
"file": [
"2-TableI-1.png"
]
} | [
"What is active learning?"
] | [
[
"1801.07887-Introduction-1"
]
] | [
"Active learning is a process that selectively determines which unlabeled samples for a machine learning model should be annotated."
] | 100 |
1907.03060 | Exploiting Out-of-Domain Parallel Data through Multilingual Transfer Learning for Low-Resource Neural Machine Translation | This paper proposes a novel multilingual multistage fine-tuning approach for low-resource neural machine translation (NMT), taking a challenging Japanese--Russian pair for benchmarking. Although there are many solutions for low-resource scenarios, such as multilingual NMT and back-translation, we have empirically confirmed their limited success when restricted to in-domain data. We therefore propose to exploit out-of-domain data through transfer learning, by using it to first train a multilingual NMT model followed by multistage fine-tuning on in-domain parallel and back-translated pseudo-parallel data. Our approach, which combines domain adaptation, multilingualism, and back-translation, helps improve the translation quality by more than 3.7 BLEU points, over a strong baseline, for this extremely low-resource scenario. | {
"paragraphs": [
[
"Neural machine translation (NMT) BIBREF0 , BIBREF1 , BIBREF2 has enabled end-to-end training of a translation system without needing to deal with word alignments, translation rules, and complicated decoding algorithms, which are the characteristics of phrase-based statistical machine translation (PBSMT) BIBREF3 . Although NMT can be significantly better than PBSMT in resource-rich scenarios, PBSMT performs better in low-resource scenarios BIBREF4 . Only by exploiting cross-lingual transfer learning techniques BIBREF5 , BIBREF6 , BIBREF7 , can the NMT performance approach PBSMT performance in low-resource scenarios.",
"However, such methods usually require an NMT model trained on a resource-rich language pair like French INLINEFORM0 English (parent), which is to be fine-tuned for a low-resource language pair like Uzbek INLINEFORM1 English (child). On the other hand, multilingual approaches BIBREF8 propose to train a single model to translate multiple language pairs. However, these approaches are effective only when the parent target or source language is relatively resource-rich like English (En). Furthermore, the parents and children models should be trained on similar domains; otherwise, one has to take into account an additional problem of domain adaptation BIBREF9 .",
"In this paper, we work on a linguistically distant and thus challenging language pair Japanese INLINEFORM0 Russian (Ja INLINEFORM1 Ru) which has only 12k lines of news domain parallel corpus and hence is extremely resource-poor. Furthermore, the amount of indirect in-domain parallel corpora, i.e., Ja INLINEFORM2 En and Ru INLINEFORM3 En, are also small. As we demonstrate in Section SECREF4 , this severely limits the performance of prominent low-resource techniques, such as multilingual modeling, back-translation, and pivot-based PBSMT. To remedy this, we propose a novel multistage fine-tuning method for NMT that combines multilingual modeling BIBREF8 and domain adaptation BIBREF9 .",
"We have addressed two important research questions (RQs) in the context of extremely low-resource machine translation (MT) and our explorations have derived rational contributions (CTs) as follows:",
"To the best of our knowledge, we are the first to perform such an extensive evaluation of extremely low-resource MT problem and propose a novel multilingual multistage fine-tuning approach involving multilingual modeling and domain adaptation to address it."
],
[
"In this paper, we deal with Ja INLINEFORM0 Ru news translation. This language pair is very challenging because the languages involved have completely different writing system, phonology, morphology, grammar, and syntax. Among various domains, we experimented with translations in the news domain, considering the importance of sharing news between different language speakers. Moreover, news domain is one of the most challenging tasks, due to large presence of out-of-vocabulary (OOV) tokens and long sentences. To establish and evaluate existing methods, we also involved English as the third language. As direct parallel corpora are scarce, involving a language such as English for pivoting is quite common BIBREF10 .",
"There has been no clean held-out parallel data for Ja INLINEFORM0 Ru and Ja INLINEFORM1 En news translation. Therefore, we manually compiled development and test sets using News Commentary data as a source. Since the given Ja INLINEFORM2 Ru and Ja INLINEFORM3 En data share many lines in the Japanese side, we first compiled tri-text data. Then, from each line, corresponding parts across languages were manually identified, and unaligned parts were split off into a new line. Note that we have never merged two or more lines. As a result, we obtained 1,654 lines of data comprising trilingual, bilingual, and monolingual segments (mainly sentences) as summarized in Table TABREF8 . Finally, for the sake of comparability, we randomly chose 600 trilingual sentences to create a test set, and concatenated the rest of them and bilingual sentences to form development sets.",
"Our manually aligned development and test sets are publicly available."
],
[
"koehn-knowles:2017:NMT showed that NMT is unable to handle low-resource language pairs as opposed to PBSMT. Transfer learning approaches BIBREF5 , BIBREF6 , BIBREF7 work well when a large helping parallel corpus is available. This restricts one of the source or the target languages to be English which, in our case, is not possible. Approaches involving bi-directional NMT modeling is shown to drastically improve low-resource translation BIBREF11 . However, like most other, this work focuses on translation from and into English.",
"Remaining options include (a) unsupervised MT BIBREF12 , BIBREF13 , BIBREF14 , (b) parallel sentence mining from non-parallel or comparable corpora BIBREF15 , BIBREF16 , (c) generating pseudo-parallel data BIBREF17 , and (d) MT based on pivot languages BIBREF10 . The linguistic distance between Japanese and Russian makes it extremely difficult to learn bilingual knowledge, such as bilingual lexicons and bilingual word embeddings. Unsupervised MT is thus not promising yet, due to its heavy reliance on accurate bilingual word embeddings. Neither does parallel sentence mining, due to the difficulty of obtaining accurate bilingual lexicons. Pseudo-parallel data can be used to augment existing parallel corpora for training, and previous work has reported that such data generated by so-called back-translation can substantially improve the quality of NMT. However, this approach requires base MT systems that can generate somewhat accurate translations. It is thus infeasible in our scenario, because we can obtain only a weak system which is the consequence of an extremely low-resource situation. MT based on pivot languages requires large in-domain parallel corpora involving the pivot languages. This technique is thus infeasible, because the in-domain parallel corpora for Ja INLINEFORM0 En and Ru INLINEFORM1 En pairs are also extremely limited, whereas there are large parallel corpora in other domains. Section SECREF4 empirically confirms the limit of these existing approaches.",
"Fortunately, there are two useful transfer learning solutions using NMT: (e) multilingual modeling to incorporate multiple language pairs into a single model BIBREF8 and (f) domain adaptation to incorporate out-of-domain data BIBREF9 . In this paper, we explore a novel method involving step-wise fine-tuning to combine these two methods. By improving the translation quality in this way, we can also increase the likelihood of pseudo-parallel data being useful to further improve translation quality."
],
[
"This section answers our first research question, [RQ1], about the translation quality that we can achieve using existing methods and in-domain parallel and monolingual data. We then use the strongest model to conduct experiments on generating and utilizing back-translated pseudo-parallel data for augmenting NMT. Our intention is to empirically identify the most effective practices as well as recognize the limitations of relying only on in-domain parallel corpora."
],
[
"To train MT systems among the three languages, i.e., Japanese, Russian, and English, we used all the parallel data provided by Global Voices, more specifically those available at OPUS. Table TABREF9 summarizes the size of train/development/test splits used in our experiments. The number of parallel sentences for Ja INLINEFORM0 Ru is 12k, for Ja INLINEFORM1 En is 47k, and for Ru INLINEFORM2 En is 82k. Note that the three corpora are not mutually exclusive: 9k out of 12k sentences in the Ja INLINEFORM3 Ru corpus were also included in the other two parallel corpora, associated with identical English translations. This puts a limit on the positive impact that the helping corpora can have on the translation quality.",
"Even when one focuses on low-resource language pairs, we often have access to larger quantities of in-domain monolingual data of each language. Such monolingual data are useful to improve quality of MT, for example, as the source of pseudo-parallel data for augmenting training data for NMT BIBREF17 and as the training data for large and smoothed language models for PBSMT BIBREF4 . Table TABREF13 summarizes the statistics on our monolingual corpora for several domains including the news domain. Note that we removed from the Global Voices monolingual corpora those sentences that are already present in the parallel corpus.",
"https://dumps.wikimedia.org/backup-index.html (20180501) http://www.statmt.org/wmt18/translation-task.html https://www.yomiuri.co.jp/database/glossary/ http://www.cs.jhu.edu/~kevinduh/a/multitarget-tedtalks/ http://opus.nlpl.eu/Tatoeba-v2.php",
"We tokenized English and Russian sentences using tokenizer.perl of Moses BIBREF3 . To tokenize Japanese sentences, we used MeCab with the IPA dictionary. After tokenization, we eliminated duplicated sentence pairs and sentences with more than 100 tokens for all the languages."
],
[
"We began with evaluating standard MT paradigms, i.e., PBSMT BIBREF3 and NMT BIBREF1 . As for PBSMT, we also examined two advanced methods: pivot-based translation relying on a helping language BIBREF10 and induction of phrase tables from monolingual data BIBREF14 .",
"As for NMT, we compared two types of encoder-decoder architectures: attentional RNN-based model (RNMT) BIBREF2 and the Transformer model BIBREF18 . In addition to standard uni-directional modeling, to cope with the low-resource problem, we examined two multi-directional models: bi-directional model BIBREF11 and multi-to-multi (M2M) model BIBREF8 .",
"After identifying the best model, we also examined the usefulness of a data augmentation method based on back-translation BIBREF17 .",
"First, we built a PBSMT system for each of the six translation directions. We obtained phrase tables from parallel corpus using SyMGIZA++ with the grow-diag-final heuristics for word alignment, and Moses for phrase pair extraction. Then, we trained a bi-directional MSD (monotone, swap, and discontinuous) lexicalized reordering model. We also trained three 5-gram language models, using KenLM on the following monolingual data: (1) the target side of the parallel data, (2) the concatenation of (1) and the monolingual data from Global Voices, and (3) the concatenation of (1) and all monolingual data in the news domain in Table TABREF13 .",
"Subsequently, using English as the pivot language, we examined the following three types of pivot-based PBSMT systems BIBREF10 , BIBREF19 for each of Ja INLINEFORM0 Ru and Ru INLINEFORM1 Ja.",
"2-step decoding using the source-to-English and English-to-target systems.",
"Obtain a new phrase table from synthetic parallel data generated by translating English side of the target–English training parallel data to the source language with the English-to-source system.",
"Compile a new phrase table combining those for the source-to-English and English-to-target systems.",
"Among these three, triangulation is the most computationally expensive method. Although we had filtered the component phrase tables using the statistical significance pruning method BIBREF20 , triangulation can generate an enormous number of phrase pairs. To reduce the computational cost during decoding and the negative effects of potentially noisy phrase pairs, we retained for each source phrase INLINEFORM0 only the INLINEFORM1 -best translations INLINEFORM2 according to the forward translation probability INLINEFORM3 calculated from the conditional probabilities in the component models as defined in utiyama:07. For each of the retained phrase pairs, we also calculated the backward translation probability, INLINEFORM4 , and lexical translation probabilities, INLINEFORM5 and INLINEFORM6 , in the same manner as INLINEFORM7 .",
"We also investigated the utility of recent advances in unsupervised MT. Even though we began with a publicly available implementation of unsupervised PBSMT BIBREF13 , it crashed due to unknown reasons. We therefore followed another method described in marie:usmt-unmt. Instead of short INLINEFORM0 -grams BIBREF12 , BIBREF13 , we collected a set of phrases in Japanese and Russian from respective monolingual data using the word2phrase algorithm BIBREF21 , as in marie:usmt-unmt. To reduce the complexity, we used randomly selected 10M monolingual sentences, and 300k most frequent phrases made of words among the 300k most frequent words. For each source phrase INLINEFORM1 , we selected 300-best target phrases INLINEFORM2 according to the translation probability as in D18-1549: INLINEFORM3 where INLINEFORM4 stands for a bilingual embedding of a given phrase, obtained through averaging bilingual embeddings of constituent words learned from the two monolingual data using fastText and vecmap. For each of the retained phrase pair, INLINEFORM5 was computed analogously. We also computed lexical translation probabilities relying on those learned from the given small parallel corpus.",
"Up to four phrase tables were jointly exploited by the multiple decoding path ability of Moses. Weights for the features were tuned using KB-MIRA BIBREF22 on the development set; we took the best weights after 15 iterations. Two hyper-parameters, namely, INLINEFORM0 for the number of pivot-based phrase pairs per source phrase and INLINEFORM1 for distortion limit, were determined by a grid search on INLINEFORM2 and INLINEFORM3 . In contrast, we used predetermined hyper-parameters for phrase table induction from monolingual data, following the convention: 200 for the dimension of word and phrase embeddings and INLINEFORM4 .",
"We used the open-source implementation of the RNMT and the Transformer models in tensor2tensor. A uni-directional model for each of the six translation directions was trained on the corresponding parallel corpus. Bi-directional and M2M models were realized by adding an artificial token that specifies the target language to the beginning of each source sentence and shuffling the entire training data BIBREF8 .",
"Table TABREF22 contains some specific hyper-parameters for our baseline NMT models. The hyper-parameters not mentioned in this table used the default values in tensor2tensor. For M2M systems, we over-sampled Ja INLINEFORM0 Ru and Ja INLINEFORM1 En training data so that their sizes match the largest Ru INLINEFORM2 En data. To reduce the number of unknown words, we used tensor2tensor's internal sub-word segmentation mechanism. Since we work in a low-resource setting, we used shared sub-word vocabularies of size 16k for the uni- and bi-directional models and 32k for the M2M models. The number of training iterations was determined by early-stopping: we evaluated our models on the development set every 1,000 updates, and stopped training if BLEU score for the development set was not improved for 10,000 updates (10 check-points). Note that the development set was created by concatenating those for the individual translation directions without any over-sampling.",
"Having trained the models, we averaged the last 10 check-points and decoded the test sets with a beam size of 4 and a length penalty which was tuned by a linear search on the BLEU score for the development set.",
"Similarly to PBSMT, we also evaluated “Cascade” and “Synthesize” methods with uni-directional NMT models."
],
[
"We evaluated MT models using case-sensitive and tokenized BLEU BIBREF23 on test sets, using Moses's multi-bleu.perl. Statistical significance ( INLINEFORM0 ) on the difference of BLEU scores was tested by Moses's bootstrap-hypothesis-difference-significance.pl.",
"Tables TABREF27 and TABREF31 show BLEU scores of all the models, except the NMT systems augmented with back-translations. Whereas some models achieved reasonable BLEU scores for Ja INLINEFORM0 En and Ru INLINEFORM1 En translation, all the results for Ja INLINEFORM2 Ru, which is our main concern, were abysmal.",
"Among the NMT models, Transformer models (b INLINEFORM0 ) were proven to be better than RNMT models (a INLINEFORM1 ). RNMT models could not even outperform the uni-directional PBSMT models (c1). M2M models (a3) and (b3) outperformed their corresponding uni- and bi-directional models in most cases. It is worth noting that in this extremely low-resource scenario, BLEU scores of the M2M RNMT model for the largest language pair, i.e., Ru INLINEFORM2 En, were lower than those of the uni- and bi-directional RNMT models as in TACL1081. In contrast, with the M2M Transformer model, Ru INLINEFORM3 En also benefited from multilingualism.",
"Standard PBSMT models (c1) achieved higher BLEU scores than uni-directional NMT models (a1) and (b1), as reported by koehn-knowles:2017:NMT, whereas they underperform the M2M Transformer NMT model (b3). As shown in Table TABREF31 , pivot-based PBSMT systems always achieved higher BLEU scores than (c1). The best model with three phrase tables, labeled “Synthesize / Triangulate / Gold,” brought visible BLEU gains with substantial reduction of OOV tokens (3047 INLINEFORM0 1180 for Ja INLINEFORM1 Ru, 4463 INLINEFORM2 1812 for Ru INLINEFORM3 Ja). However, further extension with phrase tables induced from monolingual data did not push the limit, despite their high coverage; only 336 and 677 OOV tokens were left for the two translation directions, respectively. This is due to the poor quality of the bilingual word embeddings used to extract the phrase table, as envisaged in Section SECREF3 .",
"None of pivot-based approaches with uni-directional NMT models could even remotely rival the M2M Transformer NMT model (b3).",
"Table TABREF46 shows the results of our multistage fine-tuning, where the IDs of each row refer to those described in Section SECREF41 . First of all, the final models of our multistage fine-tuning, i.e., V and VII, achieved significantly higher BLEU scores than (b3) in Table TABREF27 , a weak baseline without using any monolingual data, and #10 in Table TABREF33 , a strong baseline established with monolingual data.",
"The performance of the initial model (I) depends on the language pair. For Ja INLINEFORM0 Ru pair, it cannot achieve minimum level of quality since the model has never seen parallel data for this pair. The performance on Ja INLINEFORM1 En pair was much lower than the two baseline models, reflecting the crucial mismatch between training and testing domains. In contrast, Ru INLINEFORM2 En pair benefited the most and achieved surprisingly high BLEU scores. The reason might be due to the proximity of out-of-domain training data and in-domain test data.",
"The first fine-tuning stage significantly pushed up the translation quality for Ja INLINEFORM0 En and Ru INLINEFORM1 En pairs, in both cases with fine-tuning (II) and mixed fine-tuning (III). At this stage, both models performed only poorly for Ja INLINEFORM2 Ru pair as they have not yet seen Ja INLINEFORM3 Ru parallel data. Either model had a consistent advantage to the other.",
"When these models were further fine-tuned only on the in-domain Ja INLINEFORM0 Ru parallel data (IV and VI), we obtained translations of better quality than the two baselines for Ja INLINEFORM1 Ru pair. However, as a result of complete ignorance of Ja INLINEFORM2 En and Ru INLINEFORM3 En pairs, the models only produced translations of poor quality for these language pairs. In contrast, mixed fine-tuning for the second fine-tuning stage (V and VII) resulted in consistently better models than conventional fine-tuning (IV and VI), irrespective of the choice at the first stage, thanks to the gradual shift of parameters realized by in-domain Ja INLINEFORM4 En and Ru INLINEFORM5 En parallel data. Unfortunately, the translation quality for Ja INLINEFORM6 En and Ru INLINEFORM7 En pairs sometimes degraded from II and III. Nevertheless, the BLEU scores still retain the large margin against two baselines.",
"The last three rows in Table TABREF46 present BLEU scores obtained by the methods with fewer fine-tuning steps. The most naive model I', trained on the balanced mixture of whole five types of corpora from scratch, and the model II', obtained through a single-step conventional fine-tuning of I on all the in-domain data, achieved only BLEU scores consistently worse than VII. In contrast, when we merged our two fine-tuning steps into a single mixed fine-tuning on I, we obtained a model III' which is better for the Ja INLINEFORM0 Ru pair than VII. Nevertheless, they are still comparable to those of VII and the BLEU scores for the other two language pairs are much lower than VII. As such, we conclude that our multistage fine-tuning leads to a more robust in-domain multilingual model."
],
[
"Given that the M2M Transformer NMT model (b3) achieved best results for most of the translation directions and competitive results for the rest, we further explored it through back-translation.",
"We examined the utility of pseudo-parallel data for all the six translation directions, unlike the work of lakew2017improving and lakew2018comparison, which concentrate only on the zero-shot language pair, and the work of W18-2710, which compares only uni- or bi-directional models. We investigated whether each translation direction in M2M models will benefit from pseudo-parallel data and if so, what kind of improvement takes place.",
"First, we selected sentences to be back-translated from in-domain monolingual data (Table TABREF13 ), relying on the score proposed by moore:intelligent via the following procedure.",
"For each language, train two 4-gram language models, using KenLM: an in-domain one on all the Global Voices data, i.e., both parallel and monolingual data, and a general-domain one on the concatenation of Global Voices, IWSLT, and Tatoeba data.",
"For each language, discard sentences containing OOVs according to the in-domain language model.",
"For each translation direction, select the INLINEFORM0 -best monolingual sentences in the news domain, according to the difference between cross-entropy scores given by the in-domain and general-domain language models.",
"Whereas W18-2710 exploited monolingual data much larger than parallel data, we maintained a 1:1 ratio between them BIBREF8 , setting INLINEFORM0 to the number of lines of parallel data of given language pair.",
"Selected monolingual sentences were then translated using the M2M Transformer NMT model (b3) to compose pseudo-parallel data. Then, the pseudo-parallel data were enlarged by over-sampling as summarized in Table TABREF32 . Finally, new NMT models were trained on the concatenation of the original parallel and pseudo-parallel data from scratch in the same manner as the previous NMT models with the same hyper-parameters.",
"Table TABREF33 shows the BLEU scores achieved by several reasonable combinations of six-way pseudo-parallel data. We observed that the use of all six-way pseudo-parallel data (#10) significantly improved the base model for all the translation directions, except En INLINEFORM0 Ru. A translation direction often benefited when the pseudo-parallel data for that specific direction was used."
],
[
"We have evaluated an extensive variation of MT models that rely only on in-domain parallel and monolingual data. However, the resulting BLEU scores for Ja INLINEFORM2 Ru and Ru INLINEFORM3 Ja tasks do not exceed 10 BLEU points, implying the inherent limitation of the in-domain data as well as the difficulty of these translation directions."
],
[
"The limitation of relying only on in-domain data demonstrated in Section SECREF4 motivates us to explore other types of parallel data. As raised in our second research question, [RQ2], we considered the effective ways to exploit out-of-domain data.",
"According to language pair and domain, parallel data can be classified into four categories in Table TABREF40 . Among all the categories, out-of-domain data for the language pair of interest have been exploited in the domain adaptation scenarios (C INLINEFORM0 A) BIBREF9 . However, for Ja INLINEFORM1 Ru, no out-of-domain data is available. To exploit out-of-domain parallel data for Ja INLINEFORM2 En and Ru INLINEFORM3 En pairs instead, we propose a multistage fine-tuning method, which combines two types of transfer learning, i.e., domain adaptation for Ja INLINEFORM4 En and Ru INLINEFORM5 En (D INLINEFORM6 B) and multilingual transfer (B INLINEFORM7 A), relying on the M2M model examined in Section SECREF4 . We also examined the utility of fine-tuning for iteratively generating and using pseudo-parallel data."
],
[
"Simply using NMT systems trained on out-of-domain data for in-domain translation is known to perform badly. In order to effectively use large-scale out-of-domain data for our extremely low-resource task, we propose to perform domain adaptation through either (a) conventional fine-tuning, where an NMT system trained on out-of-domain data is fine-tuned only on in-domain data, or (b) mixed fine-tuning BIBREF9 , where pre-trained out-of-domain NMT system is fine-tuned using a mixture of in-domain and out-of-domain data. The same options are available for transferring from Ja INLINEFORM0 En and Ru INLINEFORM1 En to Ja INLINEFORM2 Ru.",
"We inevitably involve two types of transfer learning, i.e., domain adaptation for Ja INLINEFORM0 En and Ru INLINEFORM1 En and multilingual transfer for Ja INLINEFORM2 Ru pair. Among several conceivable options for managing these two problems, we examined the following multistage fine-tuning.",
"Pre-train a multilingual model only on the Ja INLINEFORM0 En and Ru INLINEFORM1 En out-of-domain parallel data (I), where the vocabulary of the model is determined on the basis of the in-domain parallel data in the same manner as the M2M NMT models examined in Section SECREF4 .",
"Fine-tune the pre-trained model (I) on the in-domain Ja INLINEFORM0 En and Ru INLINEFORM1 En parallel data (fine-tuning, II) or on the mixture of in-domain and out-of-domain Ja INLINEFORM2 En and Ru INLINEFORM3 En parallel data (mixed fine-tuning, III).",
"Further fine-tune the models (each of II and III) for Ja INLINEFORM0 Ru on in-domain parallel data for this language pair only (fine-tuning, IV and VI) or on all the in-domain parallel data (mixed fine-tuning, V and VII).",
"We chose this way due to the following two reasons. First, we need to take a balance between several different parallel corpora sizes. The other reason is division of labor; we assume that solving each sub-problem one by one should enable gradual shift of parameters."
],
[
"As an additional large-scale out-of-domain parallel data for Ja INLINEFORM0 En, we used the cleanest 1.5M sentences from the Asian Scientific Paper Excerpt Corpus (ASPEC) BIBREF24 . As for Ru INLINEFORM1 En, we used the UN and Yandex corpora released for the WMT 2018 News Translation Task. We retained Ru INLINEFORM2 En sentence pairs that contain at least one OOV token in both sides, according to the in-domain language model trained in Section SECREF34 . Table TABREF45 summarizes the statistics on the remaining out-of-domain parallel data."
],
[
"Having obtained a better model, we examined again the utility of back-translation. More precisely, we investigated (a) whether the pseudo-parallel data generated by an improved NMT model leads to a further improvement, and (b) whether one more stage of fine-tuning on the mixture of original parallel and pseudo-parallel data will result in a model better than training a new model from scratch as examined in Section SECREF34 .",
"Given an NMT model, we first generated six-way pseudo-parallel data by translating monolingual data. For the sake of comparability, we used the identical monolingual sentences sampled in Section SECREF34 . Then, we further fine-tuned the given model on the mixture of the generated pseudo-parallel data and the original parallel data, following the same over-sampling procedure in Section SECREF34 . We repeated these steps five times.",
"Table TABREF51 shows the results. “new #10” in the second row indicates an M2M Transformer model trained from scratch on the mixture of six-way pseudo-parallel data generated by VII and the original parallel data. It achieved higher BLEU scores than #10 in Table TABREF33 thanks to the pseudo-parallel data of better quality, but underperformed the base NMT model VII. In contrast, our fine-tuned model VIII successfully surpassed VII, and one more iteration (IX) further improved BLEU scores for all translation directions, except Ru INLINEFORM0 En. Although further iterations did not necessarily gain BLEU scores, we came to a much higher plateau compared to the results in Section SECREF4 ."
],
[
"In this paper, we challenged the difficult task of Ja INLINEFORM0 Ru news domain translation in an extremely low-resource setting. We empirically confirmed the limited success of well-established solutions when restricted to in-domain data. Then, to incorporate out-of-domain data, we proposed a multilingual multistage fine-tuning approach and observed that it substantially improves Ja INLINEFORM1 Ru translation by over 3.7 BLEU points compared to a strong baseline, as summarized in Table TABREF53 . This paper contains an empirical comparison of several existing approaches and hence we hope that our paper can act as a guideline to researchers attempting to tackle extremely low-resource translation.",
"In the future, we plan to confirm further fine-tuning for each of specific translation directions. We will also explore the way to exploit out-of-domain pseudo-parallel data, better domain-adaptation approaches, and additional challenging language pairs."
],
[
"This work was carried out when Aizhan Imankulova was taking up an internship at NICT, Japan. We would like to thank the reviewers for their insightful comments. A part of this work was conducted under the program “Promotion of Global Communications Plan: Research, Development, and Social Demonstration of Multilingual Speech Translation Technology” of the Ministry of Internal Affairs and Communications (MIC), Japan."
]
],
"section_name": [
"Introduction",
"Our Japanese–Russian Setting",
"Related Work",
"Limit of Using only In-domain Data",
"Data",
"MT Methods Examined",
"Results",
"Augmentation with Back-translation",
"Summary",
"Exploiting Large Out-of-Domain Data Involving a Helping Language",
"Multistage Fine-tuning",
"Data Selection",
"Further Augmentation with Back-translation",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"8928af6fac3eae91590a41758b6acbb6fb9c48fd",
"d1e37825059afa2ec1cbb317da71a43dba35264a"
],
"answer": [
{
"evidence": [
"We began with evaluating standard MT paradigms, i.e., PBSMT BIBREF3 and NMT BIBREF1 . As for PBSMT, we also examined two advanced methods: pivot-based translation relying on a helping language BIBREF10 and induction of phrase tables from monolingual data BIBREF14 .",
"As for NMT, we compared two types of encoder-decoder architectures: attentional RNN-based model (RNMT) BIBREF2 and the Transformer model BIBREF18 . In addition to standard uni-directional modeling, to cope with the low-resource problem, we examined two multi-directional models: bi-directional model BIBREF11 and multi-to-multi (M2M) model BIBREF8 .",
"After identifying the best model, we also examined the usefulness of a data augmentation method based on back-translation BIBREF17 ."
],
"extractive_spans": [
"pivot-based translation relying on a helping language BIBREF10",
"nduction of phrase tables from monolingual data BIBREF14 ",
"attentional RNN-based model (RNMT) BIBREF2",
"Transformer model BIBREF18",
"bi-directional model BIBREF11",
"multi-to-multi (M2M) model BIBREF8",
"back-translation BIBREF17"
],
"free_form_answer": "",
"highlighted_evidence": [
"We began with evaluating standard MT paradigms, i.e., PBSMT BIBREF3 and NMT BIBREF1 . As for PBSMT, we also examined two advanced methods: pivot-based translation relying on a helping language BIBREF10 and induction of phrase tables from monolingual data BIBREF14 .\n\nAs for NMT, we compared two types of encoder-decoder architectures: attentional RNN-based model (RNMT) BIBREF2 and the Transformer model BIBREF18 . In addition to standard uni-directional modeling, to cope with the low-resource problem, we examined two multi-directional models: bi-directional model BIBREF11 and multi-to-multi (M2M) model BIBREF8 .\n\nAfter identifying the best model, we also examined the usefulness of a data augmentation method based on back-translation BIBREF17 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this paper, we challenged the difficult task of Ja INLINEFORM0 Ru news domain translation in an extremely low-resource setting. We empirically confirmed the limited success of well-established solutions when restricted to in-domain data. Then, to incorporate out-of-domain data, we proposed a multilingual multistage fine-tuning approach and observed that it substantially improves Ja INLINEFORM1 Ru translation by over 3.7 BLEU points compared to a strong baseline, as summarized in Table TABREF53 . This paper contains an empirical comparison of several existing approaches and hence we hope that our paper can act as a guideline to researchers attempting to tackle extremely low-resource translation.",
"FLOAT SELECTED: Table 13: Summary of our investigation: BLEU scores of the best NMT systems at each step."
],
"extractive_spans": [],
"free_form_answer": "M2M Transformer",
"highlighted_evidence": [
"Then, to incorporate out-of-domain data, we proposed a multilingual multistage fine-tuning approach and observed that it substantially improves Ja INLINEFORM1 Ru translation by over 3.7 BLEU points compared to a strong baseline, as summarized in Table TABREF53 . ",
"FLOAT SELECTED: Table 13: Summary of our investigation: BLEU scores of the best NMT systems at each step."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
""
],
"paper_read": [
""
],
"question": [
"what was the baseline?"
],
"question_id": [
"761de1610e934189850e8fda707dc5239dd58092"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
""
],
"topic_background": [
""
]
} | {
"caption": [
"Table 1: Manually aligned News Commentary data.",
"Table 2: Statistics on our in-domain parallel data.",
"Table 3: Number of lines in our monolingual data. Whereas the first four are from the news corpora (indomain), the last two, i.e., “IWSLT” and “Tatoeba,” are from other domains.",
"Table 4: Configuration of uni-, bi-directional, andM2MNMT baseline systems. Arrows in “Parallel data” columns indicate the over-sampling of the parallel data to match the size of the largest parallel data.",
"Table 5: BLEU scores of baseline systems. Bold indicates the best BLEU score for each translation direction.",
"Table 6: BLEU scores of pivot-based systems. “Gold” refers to the phrase table trained on the parallel data. Bold indicates the BLEU score higher than the best one in Table 5. “/” indicates the use of separately trained multiple phrase tables, whereas so does “+” training on the mixture of parallel data.",
"Table 7: Over-sampling criteria for pseudo-parallel data generated by back-translation.",
"Table 8: BLEU scores of M2M Transformer NMT systems trained on the mixture of given parallel corpus and pseudo-parallel data generated by back-translation using (b3). Six “X∗→Y” columns show whether the pseudoparallel data for each translation direction is involved. Bold indicates the scores higher than (b3) and “•” indicates statistical significance of the improvement.",
"Table 9: Classification of parallel data.",
"Table 10: Statistics on our out-of-domain parallel data.",
"Table 11: BLEU scores obtained through multistage fine-tuning. “Initialized” column indicates the model used for initializing parameters that are fine-tuned on the data indicated byX. Bold indicates the best BLEU score for each translation direction. “•” indicates statistical significance of the improvement over (b3).",
"Table 12: BLEU scores achieved through fine-tuning on the mixture of the original parallel data and six-way pseudo-parallel data. “Initialized” column indicates the model used for initializing parameters and so does “BT” column the model used to generate pseudo-parallel data. “•” indicates statistical significance of the improvement over #10.",
"Table 13: Summary of our investigation: BLEU scores of the best NMT systems at each step."
],
"file": [
"2-Table1-1.png",
"3-Table2-1.png",
"4-Table3-1.png",
"5-Table4-1.png",
"6-Table5-1.png",
"6-Table6-1.png",
"7-Table7-1.png",
"7-Table8-1.png",
"8-Table9-1.png",
"8-Table10-1.png",
"9-Table11-1.png",
"10-Table12-1.png",
"10-Table13-1.png"
]
} | [
"what was the baseline?"
] | [
[
"1907.03060-MT Methods Examined-1",
"1907.03060-10-Table13-1.png",
"1907.03060-Conclusion-0",
"1907.03060-MT Methods Examined-0",
"1907.03060-MT Methods Examined-2"
]
] | [
"M2M Transformer"
] | 101 |
2002.04095 | Automatic Discourse Segmentation: an evaluation in French | In this article, we describe some discursive segmentation methods as well as a preliminary evaluation of the segmentation quality. Although our experiment were carried for documents in French, we have developed three discursive segmentation models solely based on resources simultaneously available in several languages: marker lists and a statistic POS labeling. We have also carried out automatic evaluations of these systems against the Annodis corpus, which is a manually annotated reference. The results obtained are very encouraging. | {
"paragraphs": [
[
"Rhetorical Structure Theory (RST) BIBREF0 is a technique of Natural Language Processing (NLP), in which a document can be structured hierarchically according to its discourse. The generated hierarchy, a tree, provides information associated with the boundaries of the discourse segments and related to their importance and dependencies. The figure FIGREF1 shows an example of such a rethorical tree. In the rethorical parsing process, the text has been divided into five units. In the figure FIGREF1, the arrow that leaves the unit (2) towards the unit (1) symbolizes that the unit (2) is the satellite of the unit (1), which is the core in a “Concession” relationship. In turn, the units (1) and (2) comprise the nucleus of three “Demonstration” relationships.",
"The discursive analysis of a document normally includes three consecutive steps: 1) discursive segmentation; 2) detection of the discursive relations; 3) construction of the hierarchical rhetorical tree. Regarding the discursive segmentation, there are segmenters in several languages. However, each piece depends on sofisticated linguistic resources, which complicates the reproduction of the experiments in other languages. Consequently, the development of multilingual systems using discursive analysis are yet to be developed. Diverse applications based on the latest technologies require at least one of the three steps mentioned above BIBREF1, BIBREF2, BIBREF3. In this context, the idea of exploring the architecture of a generic system that is able not only of segmenting a text correctly but also of adapting it to any language, was a great motivation of this research work.",
"In this article we show the preliminary results of a generic segmenter composed of several systems (different segmentation strategies). In addition, we describe an automatic evaluation protocol of discursive segmentation. The article is composed by the following sections: state of the art (SECREF2), which presents a brief bibliographic review; Description of the Annodis (SECREF3) corpus used in our tests and of the general architecture of the proposed systems (SECREF4); Segmentation strategies (SECREF5), which characterizes the different methods implemented to segment the text; results of our numerical experiments (sec:experiments); and we conclude with our conclusions and perspectives (SECREF7)."
],
[
"In RST, there are tow discursive units: nuclei and satellites. The nucleus provide information pertinent to the purposes of the author of the text and the satellites add additional information to the nucleu, on which they are dependent on. In the context of RST, possible discursive relationships may be nucleus-satellite and multinuclear. In nucleus-satellite relationships, a satellite depends on one nucleus, whereas in multinuclear relationships, several nuclei (at least two) are regrouped at the same level of importance (tree hierarchy). Thus, in the discursive segmentation proposes to reduce the text into the minimal discursive units called Elementary Discursive Units (EDU), through the use of explicit discursive markers. As an example, we can quote some markers in French:",
"afin de, pour que, donc, quand bien même que, ensuite, de fois que, globativamente, par contre, sinon, à ce moment-là, cependant, subséquemment, puisque, au fur et à mesure que, si, finalement, etc..",
"Markers or particles are often used to connect ideas. Let's consider the sentence below:",
"La ville d'Avignon est la capitale du Vaucluse, qui est un département du sud de la France.",
"qui (which) is a discursive marker because it connects two ideas. The first one, “Avignon City is the capital of Vaucluse” (La ville d'Avignon est la capitale du Vaucluse), and the second one (satellite), “[Vaucluse] is a department in the south of France” ([Vaucluse] est un département du sud de la France). Several research has addressed automatic segmentation in several languages, such as: French BIBREF4, English BIBREF5, Portuguese BIBREF6, Spanish BIBREF7, BIBREF8 and Tahi. BIBREF9. All converge to the idea of using an explicit list of marks in order to segment texts."
],
[
"In this first exploratory work, our tests considered only documents in French from the Annodis corpus. Annodis (ANNOtation DIScursive) is a set of documents in French that were manually enriched with notes of discursive structures. Its main characteristics are:",
"Two annotations: Rhetorical relations and multilevel structures.",
"Documents (687 000 words) taken from four sources: the Est Républicain newspaper (39 articles, 10 000 words); Wikipedia (30 articles + 30 summaries, 242 000 words); Proceedings of the conference Traitement Automatique des Langues Naturelles (TALN) 2008 (25 articles, 169 000 words); Reports from Institut Français de Relations Internationales (32 raports, 266 000 words).",
"The corpora were noted using Glozz.",
"Annodis aims at building an annotated corpus. The proposed annotations are on two levels of analysis, that is, two perspectives:",
"Ascendant: part of EDU are used in the construction of more complex structures, through the relations of discourse;",
"Descending: approaches the text in its entirety and relies on the various shallow indices to identify high-level discursive structures (macro structures).",
"Two types of persons annotated Annodis: linguistic experts and students. The first group constituted a $E$ subcorpus called “specialist” and the second group resulted in a $N$ subcorpus called “naive”. These rhetorically annotated subcorps were used as references in our experiments. (c.f. §SECREF6)."
],
[
"The Figure FIGREF12 shows the general architecture of the proposed discourse segmenter system. The initial input is the raw text encoded in UTF-8. The two initial processes are Part of Speech morphosyntactic Tagging (POS) and the segmentation at the level of the sentences. This last is just a preprocessing step that splits sentences. In the last process the system uses a bank of explicit markers in roder to apply the rules for the final discourse segmentation.",
"For the experiments, we used lists of markers in French, Spanish, English and Portuguese. We also used the Lexiconn BIBREF10 project list, which regroups 328 French-language markers. Another important parameter specifies which segmentation strategy should be applied, according to the POS labelling of the document."
],
[
"The elementary system Segmenter$_{\\mu }$ (baseline) relies solely on a list of discursive markers to perform the segmentation. It replaces the appearance of a marker in the list with a special symbol, for example $\\mu $, which indicates a boundary between the right and left segment. Be the sentence of the preceding example: La ville d'Avignon est la capitale du Vaucluse, qui est un département du Sud de la France.. The Segmenter split the sentence in two parts: the left segment (SE), La ville d'Avignon est la capitale du Vaucluse, and the right segment (SD), est un département du sud de la France."
],
[
"The Segmenter$_{\\mu +}$ system presents an improvement to the Segmenter$_{\\mu }$: inclusion of grammar categories with the TreeTagger tool. The advantage of this system is the detection of certain grammatical forms in order to condition the segmentation. Since it is based on the Segmenter$_{\\mu }$, we try to recognise the opportune conditions to gather two segments when both are part of the same discursive segment. We try to identify more subtly when it is pertinent to leave the two segments separate. The Segmenter$_{\\mu }$ has two distinct strategies:",
"Segmentador$_{\\mu +V}$ (verbal version, V): it relies solely on the presence of verbal forms to the right and left of the discursive marker. The two grammatical rules of this strategy are:",
"If there are no verbs in the left and right segments, regroup them.",
"If there is at least one verb in the left or right segment, the segments will remain separate.",
"Segmenter$_{\\mu +(V-N)}$ (verb-noun version, V-N): it relies on the presence of verbs and nouns. For this version, four rules are considered:",
"If there is no noun in either the left or right segment, we regroup the segments.",
"We regroup the segments if at least one of them has no noun.",
"If at least one noun is present in both segments, they remain independent.",
"If there is no verb-nominal form, the segments remain independent."
],
[
"In this first exploratory work, only documents in French were considered, but the system can be adapted to other languages. The evaluation is based on the correspondence of word pairs representing a border. In this way we compare the Annodis segmentation with the automatically produced segmentation. For each pair of reference segments, a $L_r$ list of word pairs is provided: the last word of the first segment and the first word of the second.",
"For example, considering the reference text wik1_01_02-04-2006.seg, from Annodis corpus:",
"[Le Ban Amendment]$_1$ [Après avoir adopté la Convention,]_2 [un certain nombre de PED et d'associations de défense de l'environnement soutinrent]_3 [que le document n'allait pas assez loin.]_4 [De nombreux pays et ONG militèrent]_5 [en faveur d'une interdiction totale de l'expédition de déchets dangereux à destinations des PED.]_6 [Plus exactement,]_7 [la Convention originale n'interdisait pas l'exportation de déchets,]_8 [excepté vers l'Antarctique.]_9 [Elle n'exigeait]_10 [qu'une procédure de consentement préalable en connaissance de cause]_11 [(PIC, Prior Informed Consent).]_12",
"Here are the word pairs of the created reference list (punctuation marks are disregarded):",
"$L_r$={[Convention – un], [soutinrent – que], [loin – de], [militèrent – en], [exactement – la], [PED – plus], [exactement – la], [déchets – excepté], [Antartique – Elle], [exigeait – qu'une], [cause – PIC] }",
"We decided to count the word pairs instead of the segments, as this is a first version of the evaluation protocol. In fact, the segments may be nested, which complicates the evaluation process. Although there are some errors, word boundaries allow us to detect segments more easily.",
"We have built a second $L_c$ list for the automatically identified segments, following the same criteria of $L_r$. The $L_r$ and $L_c$ lists regroup, pair by pair, the segment border. We then count the common pair intersection of the two lists. Each pair in the $L_c$ list is also present in the $L_r$ reference list and is a correctly assigned to the class pair. A word pair belonging to the $L_c$ list but not belonging to the $L_r$ reference list, will be a pair assigned to the class.",
"For that same text, the $L_c$ list of candidate pairs obtained with the Segmentator$_{\\mu }$ is:",
"$L_r$={[loin–De], [pays–et], [militèrent–en], [dangereux–à], [PED–Plus], [Antarctique–Elle], [préalable–en], [cause–PIC] }",
"We calculate the precision $P$, the recall $R$ and the $F$-score on the text corpus used in our tests, as follow:",
"The precision, the recall and the $F$-score for this example is: $P$ = 5 / 11 = 0.45; $R$ = 5 / 8 = 0.625; F-score = 2 $\\times \\frac{ 0.45 \\times 0.625}{ 0.45 + 0.625} = 0.523$. We used the documents in the Annodis corpus without segmentation, because they had been segmented with the Segmenter$_{\\mu }$ and with the grammar segmenters.",
"Two batch of tests were performed. The first on the $D$ set of documents common to the two subcorpus “specialist” $E$ and “naive” $N$ from Annodis. $D$ contains 38 documents with 13 364 words. This first test allowed to measure the distance between the human markers. In fact, in order to get an idea of the quality of the human segmentations, the cuts in the texts made by the specialists were measured it versus the so-called “naifs” note takers and vice versa. The second series of tests consisted of using all the documents of the subcorpus “specialist” $E$, because the documents of the subcorpus of Annodis are not identical. Then we benchmarked the performance of the three systems automatically."
],
[
"In this section we will compare the results of the different segmentation systems through automatic evaluations. First of all, the human segmentation, from the subcorpus $D$ composed of common documents. The results are presented in the table tab:humains. The first row shows the performance of the $I$ segments, taking the experts as a reference, while the second presents the process in the opposite direction.",
"We have found that segmentation by experts and naive produces two subcorpus $E$ and $N$ with very similar characteristics. This surprised us, as we expected a more important difference between them. In any case, we deduced that, at least in this corpus, it is not necessary to be an expert in linguistics to discursively segment the documents. As far as system evaluations are concerned, we use the 78 $E$ documents as reference. Table TABREF26 shows the results.",
"In the case of the Experts, the grammatical verb-nominal version (V-N) had better F-score performance. The verbal version (V) obtained a better accuracy $P$ than the verb-nominal (V-N). In the case of the Naive, the performance F-score, $P$ and $R$ is very similar from the Experts."
],
[
"The aim of this work was twofold: to design a discursive segmenter using a minimum of resources and to establish an evaluation protocol to measure the performance of segmenters. The results show that we can build a simple version of the baseline, which employs only a list of markers and presents a very encouraging performance. Of course, the quality of the list is a preponderant factor for a correct segmentation.",
"We have studied the impact of the marker which, even though it may seem fringe-worthy, contributes to improving the performance of our segmenters. Thus, it is an interesting marker that we can consider as a discursive marker. The Segmentator$_{\\mu }$ version provides the best results in terms of F-score and recall, followed by the Segmentator$_{\\mu +V}$ version, which passes it in precision. Regarding evaluation, we developed a simple protocol to compare the performance of the systems. This is, to our knowledge, the first automatic evaluation in French.",
"It is necessary to intensify our research in order to propose improvements to our segmenters, as well as to study further the impact of grammar tag rules on segmentation. Since we have a standard evaluation protocol, we intend to carry out tests with Portuguese, Spanish (see BIBREF11), English, etc. For that, we will only need a list of markers for each language.",
"The performance of the systems remains modest, of course, but we must not forget that this is a baseline and its primary objective is to provide standard systems that can be used in testing protocols such as the one we proposed. Despite this evolution, these baselines (or their improved versions) can be used in applications such as automatic document summarisation (e.g., BIBREF12, BIBREF13), or sentences compression BIBREF14.",
"The main feature of the proposed baseline system is its flexibility with respect to the language considered. In fact, it only uses a list of language markers and the grammatical category of words. The first resource, although dependent on each language, is relatively easy to obtain. We have found that, even with lists of moderate size, the results are quite significant. The grammatical categories were obtained with the help of the TreeTagger statistics tool. However, TreeTagger could be replaced by any other tool producing similar results."
],
[
"In this appendix, we present the list of rhetorical connectors in French that constitute our list of markers. We point out that the markers ending in apostrophe such as:",
"près qu', à condition d', etc.",
"are deleted from a regular expression implying 'and': près qu' + près que, à condition d' + à condition de, etc.",
"",
"3",
", / à / à ça près qu' / à ceci près qu' / à cela près qu' / à ce moment-là / à ce point qu' / à ce propos / à cet égard / à condition d' / à condition qu' / à défaut d' / à défaut de / à dire vrai / à élaborer / à en / afin d' / afin qu' / afin que / à force / à force d' / ainsi / à la place / à la réflexion / à l'époque où / à l'heure où / à l'instant où / à l'inverse / alors / alors même qu' / alors qu' / à mesure qu' / à moins d' / à moins qu' / à part ça / à partir du moment où / à part qu' / après / à présent qu' / après qu' / après quoi / après tout / à preuve / à propos / à seule fin d' / à seule fin qu' / à supposer qu' / à telle enseigne qu' / à tel point qu' / attendu qu' / au bout du compte / au cas où / au contraire / au fait / au fur et à mesure qu' / au lieu / au lieu d' / au même titre qu' / au moins / au moment d' / au moment où auparavant / au point d' / au point qu' / aussi / aussi longtemps qu' / aussitôt / aussitôt qu' / autant / autant dire qu' / au total / autrement / autrement dit / avant / avant d' / avant même d' / avant même qu' / avant qu' / à vrai dire / bien qu' / bientôt / bref / car / ceci dit / ceci étant dit / cela dit / cependant / cependant qu' / c'est à dire qu' / c'est pourquoi / cette fois qu' / comme / comme ça / comme quoi / comme si / comparativement / conséquemment / considérant qu' / considéré qu' / corrélativement / d'abord / d'ailleurs / dans ce cas / dans ce cas-là / dans la mesure où / dans le but d' / dans le but qu' / dans le cas où dans le coup / dans le sens où / dans le sens qu' / dans l'espoir d' / dans l'espoir qu' / dans l'hypothèse où / dans l'intention d' / dans l'intention qu' / dans tous les cas / d'autant plus qu' / d'autant qu' / d'autre part / de ce fait / décidément / de façon à / de façon à ce qu' / de façon qu' / de fait / déjà / déjà qu' / de la même façon / de la même façon qu' / de la même manière / de la même manière qu' / de manière à / de manière à ce qu' / de manière qu' / de même / de même qu' / de plus / depuis / depuis qu' / des fois qu' / dès lors / dès lors qu' / de sorte qu' / dès qu' / de telle façon qu' / de telle manière qu' / de toute façon / de toute manière / de toutes façons / de toutes manières / d'ici qu' / dire qu' / donc / d'où / d'où qu' / du coup / du fait qu' / du moins / du moment qu' / d'un autre côté d'un côté / d'un coup / d'une part / d'un seul coup / du reste / du temps où / effectivement / également / en / en admettant qu' / en attendant / en bref / en ce cas / en ce sens qu' / en comparaison / en conséquence / encore / encore qu' / en d'autres termes / en définitive / en dépit du fait qu' / en dépit qu' / en effet / en fait / enfin / en gros / en même temps / en même temps qu' / en outre / en particulier / en plus / en plus d' / en plus de / en réalité / en résumé / en revanche / en somme / ensuite / en supposant qu' / en tous cas en tous les cas / en tout cas / en tout état de cause / en vérité / en vue d' / et / étant donné qu' / et dire qu' / et puis / excepté qu' / faute d' / finalement / globalement / histoire d' / hormis le fait qu' / hormis qu' / instantanément / inversement / jusqu'à / jusqu'à ce qu' / la preuve / le fait est qu' / le jour où / le temps qu' / lorsqu' / maintenant / maintenant qu' / mais / malgré le fait qu' / malgré qu' / malgré tout / malheureusement / même / même qu' / même si / mieux / mis à part le fait qu' / mis à part qu' / néanmoins / nonobstant / nonobstant qu' / or / ou / ou bien / outre qu' / par ailleurs / parallèlement / parce qu' / par comparaison / par 
conséquent / par contre / par-dessus tout / par exemple / par le fait qu' / par suite / pendant qu' / peu importe plus qu' / plus tard plutôt / plutôt qu' / plutôt que d' / pour / pour autant pour autant qu' / pour commencer / pour conclure / pour finir / pour le coup / pour peu qu' / pour preuve / pour qu' / pour résumer / pourtant / pour terminer / pour une fois qu' / pourvu qu' / premièrement / preuve qu' / puis / puisqu' / quand / quand bien même / quand bien même qu' / quand même / quant à / quitte à / quitte à ce qu' / quoiqu' / quoi qu'il en soit / réciproquement / réflexion faite / remarque / résultat / s' / sachant qu' / sans / sans compter qu' / sans oublier qu' / sans qu' / sauf à / sauf qu' / selon qu' / si / si bien qu' / si ce n'est qu' / simultanément / sinon / sinon qu' / si tant est qu' / sitôt qu' / soit / soit dit en passant / somme toute / soudain / subséquemment / suivant qu' / surtout / surtout qu' / tandis qu' / tant et si bien qu' / tant qu' / total / tout à coup / tout au moins / tout bien considéré / tout compte fait / tout d'abord / tout de même / tout en / une fois qu' / un jour / un jour qu' / un peu plus tard / vu qu' /"
]
],
"section_name": [
"Introduction",
"State-of-the-art",
"Annodis Corpus",
"Discourse Segmenter Overall Description",
"Description of segmentation strategies ::: Segmentation with explicit use of a marker",
"Description of segmentation strategies ::: Segmentation with explicit use of a marker and POS labels",
"Experiments",
"Experiments ::: Results",
"Conclusions, discussion and perspectives",
"Appendix"
]
} | {
"answers": [
{
"annotation_id": [
"8af71dbe299bcb5cda132451c16b74354f89c879",
"9d060afd405e61fa03f26d22fcb1f0fe8b064b01"
],
"answer": [
{
"evidence": [
"Two batch of tests were performed. The first on the $D$ set of documents common to the two subcorpus “specialist” $E$ and “naive” $N$ from Annodis. $D$ contains 38 documents with 13 364 words. This first test allowed to measure the distance between the human markers. In fact, in order to get an idea of the quality of the human segmentations, the cuts in the texts made by the specialists were measured it versus the so-called “naifs” note takers and vice versa. The second series of tests consisted of using all the documents of the subcorpus “specialist” $E$, because the documents of the subcorpus of Annodis are not identical. Then we benchmarked the performance of the three systems automatically.",
"We have found that segmentation by experts and naive produces two subcorpus $E$ and $N$ with very similar characteristics. This surprised us, as we expected a more important difference between them. In any case, we deduced that, at least in this corpus, it is not necessary to be an expert in linguistics to discursively segment the documents. As far as system evaluations are concerned, we use the 78 $E$ documents as reference. Table TABREF26 shows the results.",
"We calculate the precision $P$, the recall $R$ and the $F$-score on the text corpus used in our tests, as follow:"
],
"extractive_spans": [],
"free_form_answer": "Segmentation quality is evaluated by calculating the precision, recall, and F-score of the automatic segmentations in comparison to the segmentations made by expert annotators from the ANNODIS subcorpus.",
"highlighted_evidence": [
"The second series of tests consisted of using all the documents of the subcorpus “specialist” $E$, because the documents of the subcorpus of Annodis are not identical. ",
"As far as system evaluations are concerned, we use the 78 $E$ documents as reference. ",
"We calculate the precision $P$, the recall $R$ and the $F$-score on the text corpus used in our tests, as follow:"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this first exploratory work, only documents in French were considered, but the system can be adapted to other languages. The evaluation is based on the correspondence of word pairs representing a border. In this way we compare the Annodis segmentation with the automatically produced segmentation. For each pair of reference segments, a $L_r$ list of word pairs is provided: the last word of the first segment and the first word of the second."
],
"extractive_spans": [
"we compare the Annodis segmentation with the automatically produced segmentation"
],
"free_form_answer": "",
"highlighted_evidence": [
"The evaluation is based on the correspondence of word pairs representing a border. In this way we compare the Annodis segmentation with the automatically produced segmentation."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"ea4394112c1549185e6b763d6f36733a9f2ed794",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"infinity"
],
"paper_read": [
"no"
],
"question": [
"How is segmentation quality evaluated?"
],
"question_id": [
"f8da63df16c4c42093e5778c01a8e7e9b270142e"
],
"question_writer": [
"fa716cd87ce6fd6905e2f23f09b262e90413167f"
],
"search_query": [
""
],
"topic_background": [
"familiar"
]
} | {
"caption": [
"Figure 1: A Rhetorical Structure Theory Tree of a document in French.",
"Figure 2: System Architecture Diagram of the proposed Discourse Segmenter.",
"Table 1: Performance of human segmentations",
"Table 2: Performance of Automatic Segmenters vs. Expert"
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"5-Table1-1.png",
"5-Table2-1.png"
]
} | [
"How is segmentation quality evaluated?"
] | [
[
"2002.04095-Experiments ::: Results-1",
"2002.04095-Experiments-11",
"2002.04095-Experiments-0",
"2002.04095-Experiments-9"
]
] | [
"Segmentation quality is evaluated by calculating the precision, recall, and F-score of the automatic segmentations in comparison to the segmentations made by expert annotators from the ANNODIS subcorpus."
] | 102 |
1710.04203 | Crowdsourcing for Beyond Polarity Sentiment Analysis A Pure Emotion Lexicon | Sentiment analysis aims to uncover emotions conveyed through information. In its simplest form, it is performed on a polarity basis, where the goal is to classify information with positive or negative emotion. Recent research has explored more nuanced ways to capture emotions that go beyond polarity. For these methods to work, they require a critical resource: a lexicon that is appropriate for the task at hand, in terms of the range of emotions it captures diversity. In the past, sentiment analysis lexicons have been created by experts, such as linguists and behavioural scientists, with strict rules. Lexicon evaluation was also performed by experts or gold standards. In our paper, we propose a crowdsourcing method for lexicon acquisition, which is scalable, cost-effective, and doesn't require experts or gold standards. We also compare crowd and expert evaluations of the lexicon, to assess the overall lexicon quality, and the evaluation capabilities of the crowd. | {
"paragraphs": [
[
"Sentiment analysis aims to uncover the emotion conveyed through information. In online social networks, sentiment analysis is mainly performed for political and marketing purposes, product acceptance and feedback systems. This involves the analysis of various social media information types, such as text BIBREF0 , emoticons and hashtags, or multimedia BIBREF1 . However, to perform sentiment analysis, information has to be labelled with a sentiment. This relationship is defined in a lexicon.",
"Lexicon acquisition is a requirement for sentiment classification. During the acquisition process, individual or grouped information elements are labelled based on a class, usually an emotion. Sentiment classification is the task that uses the acquired lexicon and a classification method to classify a sentence, phrase, or social media submission as a whole, based on the aggregation of its labels. Thus, lexicon quality directly affects sentiment classification accuracy.",
"Both tasks can either be performed automatically BIBREF2 or manually BIBREF3 where the labelling by linguists or researchers themselves BIBREF4 . Apart from experts, manual labbeling can also be performed with the help of a wide network of people, known as crowdsourcing BIBREF5 . Crowdsourcing is widely used for polarity lexicons, but rarely for beyond polarity and never for the discovery of linguistic elements.",
"Sentiment analysis is commonly performed in polarity basis, i.e. the distinction between positive and negative emotion . These poles correspond to agreement and disagreement, or acceptance and disapproval, for candidates and products repsectively BIBREF6 .",
"Beyond polarity (also known as pure emotion) sentiment analysis aims to uncover an exact emotion, based on emotional theories BIBREF7 , BIBREF8 . Applications such as sentiment tracking, marketing, text correction, and text to speech systems can be improved with the use of distinct emotion lexicons.",
"However, beyond polarity studies acquire lexicons based on a set of strict rules, and the evaluation of experts. These lexicons use only a single emotion per term BIBREF9 . The problems of these approaches is the lack of uniformity and contribution freedom when relying on gold standards, and high costs with low scalability when employing experts. Natural Language Processing (NLP) applications that only rely on experts are less comprehensive, restricted, and not scalable, compared to crowdsourced NLP applications BIBREF10 .",
"This paper presents our approach for the acquisition of a multiclass and scalable crowdsourced pure emotion lexicon (PEL), based on Plutchik's eight basic emotions. Furthermore, the crowd is also responsible for identifying linguistic elements, namely intensifiers, negators, and stop words. Usually these elements are pooled from existing lists BIBREF11 created by experts. We also introduce a worker filtering method to identify and exclude dishonest or spamming contributors, that doesn't require gold standards. Our goal is to maintain an end to end automated work-flow for a crowdsourced (annotation and evaluation wise) lexicon acquisition process. Therefore, to highlight crowd's performance on evaluation, we compare evaluations from linguistic experts and the crowd itself."
],
[
"According to BIBREF12 , an emotion is defined with reference to a list. Ekam et al. BIBREF8 proposed the six basic emotions joy, anger, fear, sadness, disgust, and surprise. Years later, Plutchik BIBREF7 proposed the addition of trust and anticipation as basic emotions, and presented a circumplex model of emotions as seen in Figure FIGREF1 , which defines emotional contradictions and some of the possible combinations.",
"Sentiment analysis aims to classify information based on the emotion conveyed. Depending on the number of classes/emotions required, we can separate the analysis into: polarity and beyond polarity.",
"Polarity sentiment analysis studies define two opposite emotional states, positive and negative, or good and bad, with the addition of a neutral state. Furthermore, some researchers have classified information on levels for each pole(e.g. very positive, positive, neutral, negative, very negative etc.), also known as fine grained sentiment analysis BIBREF13 .",
"Beyond polarity, also known as pure emotion, sentiment analysis is a more refined approach to the same problem with a wider range of possible emotion classes, see Figure FIGREF1 . Essentially, any sentiment analysis that involves specific emotional labelling, is considered as a beyond polarity analysis. Examples of emotional labels might be -but are not limited to-: sadness, boredom, joy, sadness, surprise, anger, fear, disgust etc.",
"As discussed in Section 1, one of the core tasks of sentiment analysis is lexicon acquisition. A lexicon can be acquired through manual or automatic annotation. However, natural language has a very subjective nature BIBREF14 which significantly inhibits automated sentiment lexicon aqcuisition methods from achieving relevance equal to manual methods BIBREF15 . Thus a lot of researchers choose to manually annotate their term corpora BIBREF16 , or use established lexicon such as WordNet, SentiWordNet, and various other lexicons BIBREF13 . Other studies combine manual labeling or machine learning with lexicons BIBREF17 .",
"Manual lexicon acquisition is constrained by the number of people contributing to the task, and the number of annotations from each participant. These constraints can be eliminated by increasing the number of people involved, for instance, by using crowdsourcing BIBREF18 . Amazon's Mechanical Turk (MTurk) is a crowdsourcing platform frequently used for polarity sentiment lexicon acquisition via crowdsourcing BIBREF19 . MTurk is also used, for the annotation of one thousand tweets in BIBREF20 , ten thousand terms in BIBREF21 with gold standards, and the annotation of ninety five emoticons out of one thousand total emoticons found in BIBREF22 . While BIBREF23 had one thousand four hundred terms labelled with a supervised machine learning and crowd validators. The challenge is to introduce a work-flow that is scalable, unsupervised and applicable to different information types.",
"The second core part in sentiment analysis, is sentiment classification. A classification that occurs at phrase/sentence/submission level, and is usually based on the aggregation of the term's labeled emotions. As with lexicon aqcuisition, the classification task can be automated BIBREF13 or performed manually BIBREF24 .",
"Regardless of manual or automated sentiment classification, on textual information scenarios, term and phrase sentiment is the main input of the classification method. In some cases the decision might be totally different from the individual term emotion, leading to relabeling of the terms themselves BIBREF25 . Manually labelled classification can achieve high relevance, but it requires additional resources, and is not easily scalable. On the other hand, automated processes are scalable but with lower relevance BIBREF24 ."
],
[
"Our aim is to create an end to end automated work-flow for the creation, evaluation and enrichment of a pure emotion lexicon. The work-flow, Figure FIGREF3 , can be separated in two main components. Pre-processing is the unsupervised process by which we derive the lexicon terms from any textual resource, while crowdsourcing deals with the crowdsourcing aspect of the lexicon. The Pure Emotions Lexicon includes emotional term groups, intensifiers and negators, and stop words.",
"Pre-processing is comprised of 3 unsupervised steps, tokenization, stemming and spell check. Textual content is tokenized as uni-grams, stemmed based on their rooted and checked for spelling. The resulting stems along with their stem groups are stored in a lexicon database. Crowdsourcing is using the lexicon database and the crowd to annotate each entry in the database. Participants submit their answers that go through a filtering process. If the answers are considered valid, they update the lexicon entries. The crowd also evaluates existing annotations, to determine the lexicon quality. As crowd evaluation methods are new in lexicon acquisition tasks, we compare crowd evaluations to those of expert linguists."
],
[
"During January 2017, we performed a keyword based crawl for articles and comments in the Europe subreddit and tweets in Twitter, which contained the word \"Brexit\". The use of a political and controversial term in the query is deliberate, to capture the emotional diversity of a politically oriented corpus.",
"We crawled one hundred articles from Reddit, with more than forty thousand comments and more than three thousand tweets. For the Reddit data, we collected information on location, time, and the number of upvotes. For the Twitter data, we stored the number or re-tweets and favourites, time and location information.",
"Our focus is on single term (also known as unigram) sentiment, thus posts in both networks were processed to a single term list. In total, the number of unique terms in our corpus was 30227. Based on the enchant python library, used in BIBREF26 , and the supported Great British English dictionary, 19193 were validated and 11034 were invalidated. Our analysis will focus on the 19193 valid terms, that follow Zipf's Law with scaling-law coefficient INLINEFORM0 is a good fit.",
"After validation, terms were stemmed with Porter Stemming Algorithm BIBREF27 . Stemming identified 10953 distinct term groups with one or more terms. Stop-words, intensifiers and negators are also included in the valid term groups. Both term validation and stemming are unsupervised, since our goal is to maintain scalability in beyond polarity lexicon acquisition and sentiment classification."
],
[
"The crowdsourcing task, hosted in CrowdFlower, required contributors to label term groups in three different main classes, emotion, intensifier and none, without a golden standard, rules or any participation restrictions . Emotion labelling included the 8 basic emotions as defined by Plutchik. Intensifier class included intensifiers and negators. Finally, none referred to stop-words or words with no particular emotion.",
"Each of the eleven options for the main classes, will be referred to as \"subclass\". Terms are grouped based on their stem. Each term group has a main annotation class defined by majority, and several sub annotation classes, defined by the non majority annotations. However, to aid multi class analysis of the results, every annotation is logged in the lexicon database."
],
[
"The task interface was the result of several experiments. Three major changes implemented based on these experimental interfaces were: the simplification of the task question, Figure FIGREF8 , the inclusion of only three main classes, and the replacement of words positive and negative, with amplifying and weakening in the intensifying class options. All experiments and the annotation task required highly experienced contributors, as defined by Crowdflower platform.",
"As seen in Figure FIGREF8 contributors select one of the three choices. If they choose Emotion evoking, they are presented with a drop-down menu to choose from the eight basic emotions. Similarly, if they select Intensifying context they have to specify whether it was Amplifying or Weakening the context, essentially annotating the intensifiers and negators. Finally if they choose None they are presented with the next term group. To assist contributors with term definitions, every term group had a hyperlink to an English dictionary."
],
[
"More than one hundred eighty contributors performed eighty thousand annotations. By design, each user could not perform more than 660 unique annotations, excluding the assessment questions, to engage at least 100 contributors. Most of the workers annotated the maximum allowed terms, the first half of workers annotated 15% of the term groups in our corpus, while the second half of workers annotate the rest 85%. The simplicity of the task resulted in high overall worker engagement, with mean and median annotations per worker, at 429 and 580 respectively."
],
[
"Based on a set of experiments, we identified 136 term groups that would test the ability of a contributor in all of the three main classes, emotion evoking, intensifying context, and none. As the assessment term groups had more than ten thousand annotations, we analyse it separately from the lexicon.",
"In order for a worker to gain the ability to contribute to the crowdsourcing task and eventually get paid, he/she had to properly annotate 80% of the assessment term groups encountered. The annotations should be within the dominant classes, and not subclasses, as defined from the assessment annotators. E.g., for an assessment term group that received 100 annotations in various emotions, we check if the worker annotates the term group as emotion evoking.",
"Let INLINEFORM0 be the set of workers INLINEFORM1 and and INLINEFORM2 the set of eleven INLINEFORM3 subclasses: eight emotions , two intensifiers, and none of the former. We define INLINEFORM4 as the number of total annotations for each worker INLINEFORM5 . Then: DISPLAYFORM0 ",
"We define INLINEFORM0 be the set of workers INLINEFORM1 in the assessment process, INLINEFORM2 the set of workers INLINEFORM3 in the acquisition process. Then, for INLINEFORM4 we define: DISPLAYFORM0 DISPLAYFORM1 ",
"and: DISPLAYFORM0 DISPLAYFORM1 ",
"The optimal INLINEFORM0 is found for INLINEFORM1 . For this study, the optimal filtering percentage was found at 40%, INLINEFORM2 .",
"Workers from India and Venezuela, who contributed 92% of the task, have annotated more than 30% of the term groups with joy. However, annotations from countries with more than 300 annotations, don't follow the same distribution. Specifically, workers from Philippines, United States, Colombia, Poland, United Kingdom, Russia, and Egypt, performed a smoother distributed emotion annotation. In comparison, the most annotated emotion in BIBREF21 was fear in 18% of the total terms.",
"By further analysing worker annotation distribution, we identified workers that had a significant fraction of their total annotations in a single subclass. E.g. one specific worker annotated 99% of the assessment term groups he encountered as joy. Dishonesty or spamming is a known problem in crowdsourcing BIBREF28 and multiple proposed solutions exist BIBREF28 , but they require gold standards or objective crowdsourcing tasks.",
"As we don't have a gold standard, and the task is more subjective, these spamming elimination methods are not applicable. Our solution is the implementation of a fast and efficient filter, which only relies on the obtained annotations and the assessment. If workers' answers were above a certain percentage on a single subclass for both the assessment and the annotation process, then the user would be flagged as dishonest and the total of their annotations would be discarded. This rule was applied to the whole range of possible answers, including the 8 emotions, 2 intensifiers and \"none\".",
"Prior to implementing the filter, we analysed how it would affect the number of eligible, not considered as spamming, workers. The thick line in Figure FIGREF22 shows the percentage during the assessment term groups, and the dotted line shows the percentage during the lexicon acquisition.",
"The higher the single annotation percentage, the higher the certainty of spamming behaviour. In large part, the exclusion rate was similar for both the assessment and the lexicon annotations. A number of workers had a more cautious behaviour in the test questions, resulting in reduced percentage of exclusions during assessment process. This behaviour is justified, as the first set of questions encounter by a worker are knowingly a set of assessment questions.",
"Each assessment term group was annotated more than 120 times, by 187 annotators. This is a critical mass of contributors and provides valuable findings with regards to task. These are:",
"Workers rarely clicked the informative dictionary link. As a result, they would annotate emotional words to none, probably due to misinterpretation. We avoided the direct inclusion of the dictionary definition, as it could be considered as a form of leverage. E.g. \"vehement, halcyon\" , two -uncommon- emotion baring words, were both annotated as none, and less than 0.2% of the workers (on average) clicked a dictionary link.",
"The concept of intensifiers is understandable but requires critical mass BIBREF29 . A small number of annotators would initially annotate intensifiers/negators with an emotion, but the distribution would slowly shift towards the correct class. E.g. \"reduce, little, plethora\" , were initially annotated as sad sad joy, but after tens of annotations they were annotated as weakening weakening intensifying.",
"All words should be evaluated, even those that seemingly don't carry a specific emotion. As times change, words and emotions acquire new links. E.g. \"anti, serious\" , were both annotated as fear evoking with a great emotional diversity."
],
[
"The lexicon (will be referred as simply \"PEL\") is created after the exclusion of annotations following the 40% single annotation filtering check. We received more than seventy thousands annotations for 10593 term groups, of those only 22 thousand annotations for 9737 term groups are included in the final lexicon, as a result of filtering. Each term group had a mean 2.3 annotations from a total of 95 different annotators. Although the number of mean annotations in lexicon is less than half the mean annotations in the unfiltered corpus, the PEL annotations are considered of higher quality.",
"Each lexicon term group has multiple subclass annotations, and the main subclass is defined by majority. Even after filtering, the dominant emotion in our lexicon is joy, while the least annotated emotion is disgust. Additionally, 148 terms were annotated as intensifiers, 43 terms as negators, and 6801 terms as none. A sample of five terms for each of the three subclasses can be seen in Table TABREF27 . The full lexicon can be found on github.",
"Intensifiers and negators serve as modifiers to the emotional context of a word. Workers identified mostly valid intensifiers and negators that can modify emotion evoking words, in the absence of context. Judging from the received annotations, there is room for improvement on the description of the intensifier class and the provided examples, as a number of non intensifying words were falsely annotated.",
"Terms in our lexicon are grouped based on their stem. Stemming significantly reduced cost (by half) and time-required for the task. Grouping terms may create unnecessary multi-class annotations agreements, for terms in the same term group that might have different meanings. Annotation agreement refers to equal number of annotations in multiple subclasses or emotions. However, the vast majority of term groups in our lexicon, don't display any form of contradicting annotation. Contradicting emotions are portrayed in opposite edges of the circumplex Figure FIGREF1 , while emotional combinations are decribed in BIBREF7 . In the lexicon, only 21% and 20% of the term groups had a subclass and an emotional agreement respectively. With regards to emotion, contradicting or multi-emotion agreement, could be observed in only 8.6% of the total term groups.",
"Let INLINEFORM0 be a term group in the lexicon and INLINEFORM1 the set of eleven INLINEFORM2 subclasses: eight emotions , two intensifiers, and none of the former. We define INLINEFORM3 as the number of annotations for each term group INLINEFORM4 . For each INLINEFORM5 , the annotations for emotion subclasses are INLINEFORM6 , the annotations for intensifying subclasses are INLINEFORM7 , and the number of none annotations is INLINEFORM8 .",
"Therefore, each INLINEFORM0 can have an monotonically increasing finite sequence INLINEFORM1 with INLINEFORM2 , where: DISPLAYFORM0 ",
"We say that term group INLINEFORM0 has subclass agreement if and only if: DISPLAYFORM0 ",
"While INLINEFORM0 has emotional agreement if and only if there is a subclass agreement with the sequence INLINEFORM1 and: DISPLAYFORM0 ",
"Subclass agreement, refers to equal annotations, between emotional subclass(es) and at least one non-emotional subclass, or between multiple non-emotional subclass, Equation EQREF30 . On the other hand, emotional agreement refers to multiple emotion subclasses with equal annotations, Equation EQREF31 .",
"The number of subclasses in agreement and the number of terms in a term group are negatively correlated. Term groups with two terms appear to have the highest subclass agreement with exactly two subclasses. The most common occurring agreements are subclass none paired with an emotion, and joy paired with an emotion. The number of multi-class agreement occurrences is disproportional to the number of terms in a term group. This is a strong indication that stemming didn't confuse workers.",
"Similarly, for emotional agreement, the number of occurrences is disproportionate to the number of terms in the term group. Furthermore, emotional agreement appeared in 10% of the term groups, while subclass agreement was found in 20% of the term groups. In the agreement annotations, joy is the most common emotion. According to Plutchik's circumplex Figure FIGREF1 , each emotion has a contradicting one, and pairs of emotions indicate a more \"complex\" emotion. There are 697 emotional agreeing term groups, of 1434 terms, with exactly two emotions. These emotional dyads BIBREF7 can be combined as seen in Table TABREF32 . Simple basic emotion annotation tasks can indirectly provide complex emotional annotations.",
"Dyadic emotional agreements could be interpreted as the resulting complex emotion, or further annotated to obtain a single dominant emotion. There was a number of term groups with opposite emotion dyads, presented in Table TABREF33 ,but as the number of annotations increases, emotional agreement occurrences -combination or opposition- decreases.",
"In total, the lexicon features 17740 annotated terms with 3 classes and 11 subclasses.The dominant class for 7030 terms was emotion, 191 intensifying, 6801 none, and 3718 in some form of subclass agreement. Lexicon terms are mainly joy annotated, and emotional agreement is prevalent in 10% of the terms. Only 21% of total terms have a subclass agreement."
],
[
"Single annotation reliability agreement is the degree of agreement between annotators, for term groups that have annotation majority in exactly one sub class. In our lexicon, single annotation reliability agreement was low, mainly due to the low number of annotators for each term group in relation to the high number of possible categories.",
"Based on Fleiss Kappa BIBREF30 (simply referred as k), and as seen in Table TABREF35 , term groups with 2 annotations had the lowest reliability agreement, while term groups with 6 annotations the highest reliability agreement. As the number of annotators rises, the number of possible agreement permutations increases but the number of major annotated subclasses decreases. More annotators have a positive effect in both k and certainty of classification.",
"As we restrict our lexicon to emotions, reliability increases for any number of annotators except two. This is explained by the decrease in the number of possible categories. When we restrict our analysis on emotion related annotation the probability for agreement in annotations increases, resulting in a high emotional k. The best way to increase k is to provide additional annotations that will eventually converge to a majority class or a limited group of classes."
],
[
"We perform a direct comparison of expert and crowd contributors, for 1000 term groups based on the number of total annotations(200 term groups with 2 total annotations, 200 term groups with 3 total annotations, and so on up to term groups with 6 total annotations). The experts are two Ph.D. linguists, while the crowd is made up of random high quality contributors that choose to participate in the task. As a reference, the cost of hiring two experts is equal to the cost of employing nineteen contributors in Crowdflower.",
"Evaluators were given a summary of the annotations received for the term group in the form of:The term group \"inequality inequity\" received annotations as 50.0% sadness, 33.33% disgust, 16.67% anger. Then, they were asked to evaluate on a scale from 1 to 5, how valid these annotations were considered.",
"The summary of the evaluation for both experts and crowd can be seen in Figure FIGREF36 . The first graph presents the validity over the number of annotations in the main class of the term group. Although this information is hidden from the evaluators, high annotational agreement results in high evaluation scores. Both experts and the crowd follow that positive trend. Crowd contributors are more strict in their evaluations, but after four annotations we observe a significant validity increase on both crowd and experts.",
"Likewise, the annotation percentage for the majority class has a positive influence to the evaluation score, with the exception of 100% agreement, second graph Figure FIGREF36 . The weighing factor for term groups with 100% annotation agreement is the reduced number of total annotations, as the mean number of total annotations drops abruptly on the 100%, and total agreement is more frequent in term groups with low number of total annotations. It's worth noting that certain percentages can only occur on specific number of total annotations, e.g. 17% and 83% can only occur when the number of total annotations is six.",
"In emotion annotations, as seen on the third graph of Figure FIGREF36 crowd and experts follow a similar evaluation pattern. Anticipation and joy had the exact same evaluation, while every other emotion and stop words were evaluated lower from the crowd. The only subclasses evaluated higher from the crowd were intensifiers and negators, with a significant difference in the evaluations for the latter. Section 6.3 provides a more detailed evaluation for term groups that received at least one annotation as intensifiers or negators.",
"The final graph in Figure FIGREF36 presents a clear negative correlation of subclass agreement and evaluation scores. The highest number of subclasses that do not affect evaluation scores is three, above that there is a steady decline of the evaluation scores, for both the crowd and the experts.",
"The evaluation results provide some key insights in the importance of the number of annotations. The evaluation scores start to improve after four annotations. Annotational agreement and majority voting are less important. Subclass agreement has a negative effect on three or more subclasses. Most importantly and compared to experts, the crowd is a stricter evaluator with significantly lower costs, and higher scalability. Since strict evaluation leads to higher quality annotations, the evaluation can be performed by the crowd instead of experts. Crowd contributors can be found in high numbers and multiple platforms, compared to expert linguists.",
"Evaluation of intensifiers and negators, was also a batch of evaluation and annotation tasks, as mentioned in Section 6.2. However, the difference was that now evaluators had to answer if a term group included at least one valid intensifier or negator. The evaluation was again performed by experts and the crowd, as described in Section 6.2.1. Based on the annotations received in PEL, we used 541 term groups that had at least one annotation in any of the intensifying subclasses. Although, the particular selection of term groups is statistically significant, we expect relatively low evaluation scores. That is because the number of intensifying annotations is low in most of the selected term groups.",
"In Figure FIGREF40 , we define varying levels of agreement on the validity of the intensifying class, based on the agreement of evaluations. For the experts group, low agreement refers to term groups that received at least one out of two evaluations as valid, while high agreement requires the evaluation agreement of both experts. Similarly for the crowd, low agreement refers to a minimum of two valid evaluations, mid agreement corresponds to at least three, and high agreement requires an absolute agreement of all four evaluators.",
"Experts are far more strict than the crowd in the evaluation of intensifiers and negators. When the validity agreement is low on both evaluation groups, the average valid term group difference is more than 40%, but the high validity agreement the difference is just 5.33%. When high agreement evaluation is applied, the crowd and expert evaluations are almost identical. The number of crowd evaluations is the factor that provides a degree of freedom in the evaluation strictness."
],
[
"Lexicon acquisition is a complex task that includes a mixture of objective and subjective tasks. While annotation of emotions is more subjective, annotation of linguistic elements (such as stop words, emotion shift terms, intensifiers etc.) is purely objective. We presented a novel work flow that provides quality results for both subjective and objective tasks.",
"Subcomponents of the lexicon acquisition could be improved on an individual basis. Spell check can include spelling recommendations, filtering could incorporate rewarding and penalties, evaluation process can include experts and so on.",
"Crowd diversity in the annotation and evaluation process is another limiting factor. Ideally we would prefer a fixed number of individuals, each to annotate and evaluate the whole corpus. However, the uniformity of expert judgement is replaced with the diversity and mass of contributors.",
"The corpus may be limiting the term groups in the lexicon to specific domain-specific subjects. Comparisons with existing lexicons, such as NRC BIBREF21 indicate a moderate overlap with 40% common terms. Additionally, the number of annotations for a number of term groups is relatively low. However, the batch task of evaluation and annotation provided almost ten thousand annotations, and increased the mean number of annotations from 2.3 to 3.2."
],
[
"We demonstrated that the crowd is capable of producing and evaluating a quality pure emotion lexicon without gold standards. Our work-flow is unsupervised, significantly lower costs, and improves scalability. There are however, various parameters that should be taken into account. Spam is very common and quality assessment post-annotations should be implemented.",
"Our approach required workers to label term groups as emotion, intensifiers, and stop words. Agreement is not necessary and multi emotional term groups, with up to three emotions, are considered equally valid to single emotion term groups. The hardest task for the crowd proved to be the classification of intensifiers and negators, probably because it required a certain level of objectivity which contradicted the overall subjectivity of the emotional annotation task. Based on the evaluation of term groups and the results from the assessment, as the number of overall annotators rises the number of valid annotations increases proportionally. This indicates the importance of a critical mass in lexicon acquisition tasks.",
"Stemming reduced time and costs requirements, with minimal emotional and subclass agreement. Costs were reduced by 45%, and multi-emotion classification was lower than 10%. Term groups did not create confusion amongst workers, and only a small fraction of term groups had subclass agreement. On the contrary, including the stem and description in the task confused workers, and were excluded from the interface. We tested several interface designs, and the one that worked best had minimal instructions. Lexicon acquisition interfaces in paid micro-task environments should be further studied, with regards to various other contribution incentives.",
"The crowd is as capable of evaluating lexicons, as experts. Linguistic element evaluation can be efficiently crowdsourced, and the evaluation of emotional or non emotional elements can be as strict as needed. The number of evaluators plays a key role in both emotional and linguistic evaluations. The crowd is strict on emotional evaluations, while the experts are strict in linguistic evaluations. However, a high number of crowd evaluations broadens the strictness freedom, with a small fraction of the experts' hiring costs. Depending on the number of evaluations, varying levels of evaluation agreement can be implemented.",
"Our long term goal is to create a voluntary platform for pure emotion lexicon acquisition, to further study the effects of critical mass in lexicon acquisition. In short term, we will perform the exact same crowdsourcing task in a voluntary platform, Crowd4U or similar platforms, to study the effect of monetary and contribution incentives in pure emotion sentiment annotation. In parallel, we will perform a qualitative analysis with regards to understanding of intensifiers and negators, to create the optimal set of instructions and examples. Finally, we are considering how we can extend the approach to various other linguistic elements, such as words that split the sentence, words that indicate more important parts of a sentence and so on.",
"We believe that beyond polarity sentiment analysis can enhance and extend simple polarity based applications. Sentiment analysis in marketing, politics, health monitoring, online social networks, and evaluation processes would benefit from a crowdsourced pure emotion lexicon."
],
[
"This work was supported by the EU project \"QROWD - Because Big Data Integration is Humanly Possible\"."
]
],
"section_name": [
"Introduction",
"Related Work",
"Our approach",
"Data",
"Crowdsourcing",
"Task Interface",
"Crowd",
"Assessment",
"Lexicon Analysis",
"Reliability",
"Crowd and experts comparison",
"Limitations",
"Conclusion and future work",
"Acknowledgement"
]
} | {
"answers": [
{
"annotation_id": [
"91afbbbf00718a433b50da63ccf3ee20305196e3",
"a3512a65a680fb8d2c01af2a8ac3825e24da7a81"
],
"answer": [
{
"evidence": [
"We perform a direct comparison of expert and crowd contributors, for 1000 term groups based on the number of total annotations(200 term groups with 2 total annotations, 200 term groups with 3 total annotations, and so on up to term groups with 6 total annotations). The experts are two Ph.D. linguists, while the crowd is made up of random high quality contributors that choose to participate in the task. As a reference, the cost of hiring two experts is equal to the cost of employing nineteen contributors in Crowdflower.",
"Evaluators were given a summary of the annotations received for the term group in the form of:The term group \"inequality inequity\" received annotations as 50.0% sadness, 33.33% disgust, 16.67% anger. Then, they were asked to evaluate on a scale from 1 to 5, how valid these annotations were considered."
],
"extractive_spans": [],
"free_form_answer": "Human evaluators were asked to evaluate on a scale from 1 to 5 the validity of the lexicon annotations made by the experts and crowd contributors.",
"highlighted_evidence": [
"We perform a direct comparison of expert and crowd contributors, for 1000 term groups based on the number of total annotations(200 term groups with 2 total annotations, 200 term groups with 3 total annotations, and so on up to term groups with 6 total annotations). ",
"Evaluators were given a summary of the annotations received for the term group in the form of:The term group \"inequality inequity\" received annotations as 50.0% sadness, 33.33% disgust, 16.67% anger. Then, they were asked to evaluate on a scale from 1 to 5, how valid these annotations were considered."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We perform a direct comparison of expert and crowd contributors, for 1000 term groups based on the number of total annotations(200 term groups with 2 total annotations, 200 term groups with 3 total annotations, and so on up to term groups with 6 total annotations). The experts are two Ph.D. linguists, while the crowd is made up of random high quality contributors that choose to participate in the task. As a reference, the cost of hiring two experts is equal to the cost of employing nineteen contributors in Crowdflower."
],
"extractive_spans": [
"1000 term groups based on the number of total annotations(200 term groups with 2 total annotations, 200 term groups with 3 total annotations, and so on up to term groups with 6 total annotations)"
],
"free_form_answer": "",
"highlighted_evidence": [
"We perform a direct comparison of expert and crowd contributors, for 1000 term groups based on the number of total annotations(200 term groups with 2 total annotations, 200 term groups with 3 total annotations, and so on up to term groups with 6 total annotations)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five"
],
"paper_read": [
"no"
],
"question": [
"How do they compare lexicons?"
],
"question_id": [
"c09a92e25e6a81369fcc4ae6045491f2690ccc10"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
""
],
"topic_background": [
"familiar"
]
} | {
"caption": [
"Figure 1: The circumplex of emotions",
"Figure 2: PEL creation work-flow",
"Figure 3: Subclasses based on class selection",
"Figure 4: Percentages of excluded workers over single annotation",
"Table 3: Opposition dyads",
"Figure 5: Validity over: Majority subclass annotations, Majority subclass annotations percentage, Subclasses, Subclass Agreement",
"Figure 6: Intensifying class evaluations"
],
"file": [
"2-Figure1-1.png",
"5-Figure2-1.png",
"6-Figure3-1.png",
"9-Figure4-1.png",
"12-Table3-1.png",
"13-Figure5-1.png",
"14-Figure6-1.png"
]
} | [
"How do they compare lexicons?"
] | [
[
"1710.04203-Crowd and experts comparison-0",
"1710.04203-Crowd and experts comparison-1"
]
] | [
"Human evaluators were asked to evaluate on a scale from 1 to 5 the validity of the lexicon annotations made by the experts and crowd contributors."
] | 103 |
1911.10049 | High Quality ELMo Embeddings for Seven Less-Resourced Languages | Recent results show that deep neural networks using contextual embeddings significantly outperform non-contextual embeddings on a majority of text classification tasks. We offer precomputed embeddings from the popular contextual ELMo model for seven languages: Croatian, Estonian, Finnish, Latvian, Lithuanian, Slovenian, and Swedish. We demonstrate that the quality of embeddings strongly depends on the size of the training set and show that the existing publicly available ELMo embeddings for the listed languages should be improved. We train new ELMo embeddings on much larger training sets and show their advantage over baseline non-contextual FastText embeddings. In the evaluation, we use two benchmarks, the analogy task and the NER task. | {
"paragraphs": [
[
"Word embeddings are representations of words in numerical form, as vectors of typically several hundred dimensions. The vectors are used as an input to machine learning models; for complex language processing tasks these are typically deep neural networks. The embedding vectors are obtained from specialized learning tasks, based on neural networks, e.g., word2vec BIBREF0, GloVe BIBREF1, FastText BIBREF2, ELMo BIBREF3, and BERT BIBREF4. For training, the embeddings algorithms use large monolingual corpora that encode important information about word meaning as distances between vectors. In order to enable downstream machine learning on text understanding tasks, the embeddings shall preserve semantic relations between words, and this is true even across languages.",
"Probably the best known word embeddings are produced by the word2vec method BIBREF5. The problem with word2vec embeddings is their failure to express polysemous words. During training of an embedding, all senses of a given word (e.g., paper as a material, as a newspaper, as a scientific work, and as an exam) contribute relevant information in proportion to their frequency in the training corpus. This causes the final vector to be placed somewhere in the weighted middle of all words' meanings. Consequently, rare meanings of words are poorly expressed with word2vec and the resulting vectors do not offer good semantic representations. For example, none of the 50 closest vectors of the word paper is related to science.",
"The idea of contextual embeddings is to generate a different vector for each context a word appears in and the context is typically defined sentence-wise. To a large extent, this solves the problems with word polysemy, i.e. the context of a sentence is typically enough to disambiguate different meanings of a word for humans and so it is for the learning algorithms. In this work, we describe high-quality models for contextual embeddings, called ELMo BIBREF3, precomputed for seven morphologically rich, less-resourced languages: Slovenian, Croatian, Finnish, Estonian, Latvian, Lithuanian, and Swedish. ELMo is one of the most successful approaches to contextual word embeddings. At time of its creation, ELMo has been shown to outperform previous word embeddings BIBREF3 like word2vec and GloVe on many NLP tasks, e.g., question answering, named entity extraction, sentiment analysis, textual entailment, semantic role labeling, and coreference resolution.",
"This report is split into further five sections. In section SECREF2, we describe the contextual embeddings ELMo. In Section SECREF3, we describe the datasets used and in Section SECREF4 we describe preprocessing and training of the embeddings. We describe the methodology for evaluation of created vectors and results in Section SECREF5. We present conclusion in Section SECREF6 where we also outline plans for further work."
],
[
"Typical word embeddings models or representations, such as word2vec BIBREF0, GloVe BIBREF1, or FastText BIBREF2, are fast to train and have been pre-trained for a number of different languages. They do not capture the context, though, so each word is always given the same vector, regardless of its context or meaning. This is especially problematic for polysemous words. ELMo (Embeddings from Language Models) embedding BIBREF3 is one of the state-of-the-art pretrained transfer learning models, that remedies the problem and introduces a contextual component.",
"ELMo model`s architecture consists of three neural network layers. The output of the model after each layer gives one set of embeddings, altogether three sets. The first layer is a CNN layer, which operates on a character level. It is context independent, so each word always gets the same embedding, regardless of its context. It is followed by two biLM layers. A biLM layer consists of two concatenated LSTMs. In the first LSTM, we try to predict the following word, based on the given past words, where each word is represented by the embeddings from the CNN layer. In the second LSTM, we try to predict the preceding word, based on the given following words. It is equivalent to the first LSTM, just reading the text in reverse.",
"In NLP tasks, any set of these embeddings may be used; however, a weighted average is usually used. The weights of the average are learned during the training of the model for the specific task. Additionally, an entire ELMo model can be fine-tuned on a specific end task.",
"Although ELMo is trained on character level and is able to handle out-of-vocabulary words, a vocabulary file containing most common tokens is used for efficiency during training and embedding generation. The original ELMo model was trained on a one billion word large English corpus, with a given vocabulary file of about 800,000 words. Later, ELMo models for other languages were trained as well, but limited to larger languages with many resources, like German and Japanese."
],
[
"Recently, ELMoForManyLangs BIBREF6 project released pre-trained ELMo models for a number of different languages BIBREF7. These models, however, were trained on a significantly smaller datasets. They used 20-million-words data randomly sampled from the raw text released by the CoNLL 2017 Shared Task - Automatically Annotated Raw Texts and Word Embeddings BIBREF8, which is a combination of Wikipedia dump and common crawl. The quality of these models is questionable. For example, we compared the Latvian model by ELMoForManyLangs with a model we trained on a complete (wikidump + common crawl) Latvian corpus, which has about 280 million tokens. The difference of each model on the word analogy task is shown in Figure FIGREF16 in Section SECREF5. As the results of the ELMoForManyLangs embeddings are significantly worse than using the full corpus, we can conclude that these embeddings are not of sufficient quality. For that reason, we computed ELMo embeddings for seven languages on much larger corpora. As this effort requires access to large amount of textual data and considerable computational resources, we made the precomputed models publicly available by depositing them to Clarin repository."
],
[
"We trained ELMo models for seven languages: Slovenian, Croatian, Finnish, Estonian, Latvian, Lithuanian and Swedish. To obtain high-quality embeddings, we used large monolingual corpora from various sources for each language. Some corpora are available online under permissive licences, others are available only for research purposes or have limited availability. The corpora used in training datasets are a mix of news articles and general web crawl, which we preprocessed and deduplicated. Below we shortly describe the used corpora in alphabetical order of the involved languages. Their names and sizes are summarized in Table TABREF3.",
"Croatian dataset include hrWaC 2.1 corpus BIBREF9, Riznica BIBREF10, and articles of Croatian branch of Styria media house, made available to us through partnership in a joint project. hrWaC was built by crawling the .hr internet domain in 2011 and 2014. Riznica is composed of Croatian fiction and non-fiction prose, poetry, drama, textbooks, manuals, etc. The Styria dataset consists of 570,219 news articles published on the Croatian 24sata news portal and niche portals related to 24sata.",
"Estonian dataset contains texts from two sources, CoNLL 2017 Shared Task - Automatically Annotated Raw Texts and Word Embeddings BIBREF8, and news articles made available to us by Ekspress Meedia due to partnership in the project. Ekspress Meedia dataset is composed of Estonian news articles between years 2009 and 2019. The CoNLL 2017 corpus is composed of Estonian Wikipedia and webcrawl.",
"Finnish dataset contains articles by Finnish news agency STT, Finnish part of the CoNLL 2017 dataset, and Ylilauta downloadable version BIBREF11. STT news articles were published between years 1992 and 2018. Ylilauta is a Finnish online discussion board; the corpus contains parts of the discussions from 2012 to 2014.",
"Latvian dataset consists only of the Latvian portion of the ConLL 2017 corpus.",
"Lithuanian dataset is composed of Lithuanian Wikipedia articles from 2018, DGT-UD corpus, and LtTenTen. DGT-UD is a parallel corpus of 23 official languages of the EU, composed of JRC DGT translation memory of European law, automatically annotated with UD-Pipe 1.2. LtTenTen is Lithuanian web corpus made up of texts collected from the internet in April 2014 BIBREF12.",
"Slovene dataset is formed from the Gigafida 2.0 corpus BIBREF13. It is a general language corpus composed of various sources, mostly newspapers, internet pages, and magazines, but also fiction and non-fiction prose, textbooks, etc.",
"Swedish dataset is composed of STT Swedish articles and Swedish part of CoNLL 2017. The Finnish news agency STT publishes some of its articles in Swedish language. They were made available to us through partnership in a joint project. The corpus contains those articles from 1992 to 2017."
],
[
"Prior to training the ELMo models, we sentence and word tokenized all the datasets. The text was formatted in such a way that each sentence was in its own line with tokens separated by white spaces. CoNLL 2017, DGT-UD and LtTenTen14 corpora were already pre-tokenized. We tokenized the others using the NLTK library and its tokenizers for each of the languages. There is no tokenizer for Croatian in NLTK library, so we used Slovene tokenizer instead. After tokenization, we deduplicated the datasets for each language separately, using the Onion (ONe Instance ONly) tool for text deduplication. We applied the tool on paragraph level for corpora that did not have sentences shuffled and on sentence level for the rest. We considered 9-grams with duplicate content threshold of 0.9.",
"For each language we prepared a vocabulary file, containing roughly one million most common tokens, i.e. tokens that appear at least $n$ times in the corpus, where $n$ is between 15 and 25, depending on the dataset size. We included the punctuation marks among the tokens. We trained each ELMo model using default values used to train the original English ELMo (large) model."
],
[
"We evaluated the produced ELMo models for all languages using two evaluation tasks: a word analogy task and named entity recognition (NER) task. Below, we first shortly describe each task, followed by the evaluation results."
],
[
"The word analogy task was popularized by mikolov2013distributed. The goal is to find a term $y$ for a given term $x$ so that the relationship between $x$ and $y$ best resembles the given relationship $a : b$. There are two main groups of categories: 5 semantic and 10 syntactic. To illustrate a semantic relationship, consider for example that the word pair $a : b$ is given as “Finland : Helsinki”. The task is to find the term $y$ corresponding to the relationship “Sweden : $y$”, with the expected answer being $y=$ Stockholm. In syntactic categories, the two words in a pair have a common stem (in some cases even same lemma), with all the pairs in a given category having the same morphological relationship. For example, given the word pair “long : longer”, we see that we have an adjective in its base form and the same adjective in a comparative form. That task is then to find the term $y$ corresponding to the relationship “dark : $y$”, with the expected answer being $y=$ darker, that is a comparative form of the adjective dark.",
"In the vector space, the analogy task is transformed into vector arithmetic and search for nearest neighbours, i.e. we compute the distance between vectors: d(vec(Finland), vec(Helsinki)) and search for word $y$ which would give the closest result in distance d(vec(Sweden), vec($y$)). In the analogy dataset the analogies are already pre-specified, so we are measuring how close are the given pairs. In the evaluation below, we use analogy datasets for all tested languages based on the English dataset by BIBREF14 . Due to English-centered bias of this dataset, we used a modified dataset which was first written in Slovene language and then translated into other languages BIBREF15.",
"As each instance of analogy contains only four words, without any context, the contextual models (such as ELMo) do not have enough context to generate sensible embeddings. We therefore used some additional text to form simple sentences using the four analogy words, while taking care that their noun case stays the same. For example, for the words \"Rome\", \"Italy\", \"Paris\" and \"France\" (forming the analogy Rome is to Italy as Paris is to $x$, where the correct answer is $x=$France), we formed the sentence \"If the word Rome corresponds to the word Italy, then the word Paris corresponds to the word France\". We generated embeddings for those four words in the constructed sentence, substituted the last word with each word in our vocabulary and generated the embeddings again. As typical for non-contextual analogy task, we measure the cosine distance ($d$) between the last word ($w_4$) and the combination of the first three words ($w_2-w_1+w_3$). We use the CSLS metric BIBREF16 to find the closest candidate word ($w_4$). If we find the correct word among the five closest words, we consider that entry as successfully identified. The proportion of correctly identified words forms a statistic called accuracy@5, which we report as the result.",
"We first compare existing Latvian ELMo embeddings from ELMoForManyLangs project with our Latvian embeddings, followed by the detailed analysis of our ELMo embeddings. We trained Latvian ELMo using only CoNLL 2017 corpora. Since this is the only language, where we trained the embedding model on exactly the same corpora as ELMoForManyLangs models, we chose it for comparison between our ELMo model with ELMoForManyLangs. In other languages, additional or other corpora were used, so a direct comparison would also reflect the quality of the corpora used for training. In Latvian, however, only the size of the training dataset is different. ELMoForManyLangs uses only 20 million tokens and we use the whole corpus of 270 million tokens.",
"The Latvian ELMo model from ELMoForManyLangs project performs significantly worse than EMBEDDIA ELMo Latvian model on all categories of word analogy task (Figure FIGREF16). We also include the comparison with our Estonian ELMo embeddings in the same figure. This comparison shows that while differences between our Latvian and Estonian embeddings can be significant for certain categories, the accuracy score of ELMoForManyLangs is always worse than either of our models. The comparison of Estonian and Latvian models leads us to believe that a few hundred million tokens is a sufficiently large corpus to train ELMo models (at least for word analogy task), but 20-million token corpora used in ELMoForManyLangs are too small.",
"The results for all languages and all ELMo layers, averaged over semantic and syntactic categories, are shown in Table TABREF17. The embeddings after the first LSTM layer perform best in semantic categories. In syntactic categories, the non-contextual CNN layer performs the best. Syntactic categories are less context dependent and much more morphology and syntax based, so it is not surprising that the non-contextual layer performs well. The second LSTM layer embeddings perform the worst in syntactic categories, though still outperforming CNN layer embeddings in semantic categories. Latvian ELMo performs worse compared to other languages we trained, especially in semantic categories, presumably due to smaller training data size. Surprisingly, the original English ELMo performs very poorly in syntactic categories and only outperforms Latvian in semantic categories. The low score can be partially explained by English model scoring $0.00$ in one syntactic category “opposite adjective”, which we have not been able to explain."
],
[
"For evaluation of ELMo models on a relevant downstream task, we used named entity recognition (NER) task. NER is an information extraction task that seeks to locate and classify named entity mentions in unstructured text into pre-defined categories such as the person names, organizations, locations, medical codes, time expressions, quantities, monetary values, percentages, etc. To allow comparison of results between languages, we used an adapted version of this task, which uses a reduced set of labels, available in NER datasets for all processed languages. The labels in the used NER datasets are simplified to a common label set of three labels (person - PER, location - LOC, organization - ORG). Each word in the NER dataset is labeled with one of the three mentioned labels or a label 'O' (other, i.e. not a named entity) if it does not fit any of the other three labels. The number of words having each label is shown in Table TABREF19.",
"To measure the performance of ELMo embeddings on the NER task we proceeded as follows. We embedded the text in the datasets sentence by sentence, producing three vectors (one from each ELMo layer) for each token in a sentence. We calculated the average of the three vectors and used it as the input of our recognition model. The input layer was followed by a single LSTM layer with 128 LSTM cells and a dropout layer, randomly dropping 10% of the neurons on both the output and the recurrent branch. The final layer of our model was a time distributed softmax layer with 4 neurons.",
"We used ADAM optimiser BIBREF17 with the learning rate 0.01 and $10^{-5}$ learning rate decay. We used categorical cross-entropy as a loss function and trained the model for 3 epochs. We present the results using the Macro $F_1$ score, that is the average of $F_1$-scores for each of the three NE classes (the class Other is excluded).",
"Since the differences between the tested languages depend more on the properties of the NER datasets than on the quality of embeddings, we can not directly compare ELMo models. For this reason, we take the non-contextual fastText embeddings as a baseline and predict named entities using them. The architecture of the model using fastText embeddings is the same as the one using ELMo embeddings, except that the input uses 300 dimensional fastText embedding vectors, and the model was trained for 5 epochs (instead of 3 as for ELMo). In both cases (ELMo and fastText) we trained and evaluated the model five times, because there is some random component involved in initialization of the neural network model. By training and evaluating multiple times, we minimise this random component.",
"The results are presented in Table TABREF21. We included the evaluation of the original ELMo English model in the same table. NER models have little difficulty distinguishing between types of named entities, but recognizing whether a word is a named entity or not is more difficult. For languages with the smallest NER datasets, Croatian and Lithuanian, ELMo embeddings show the largest improvement over fastText embeddings. However, we can observe significant improvements with ELMo also on English and Finnish, which are among the largest datasets (English being by far the largest). Only on Slovenian dataset did ELMo perform slightly worse than fastText, on all other EMBEDDIA languages, the ELMo embeddings improve the results."
],
[
"We prepared precomputed ELMo contextual embeddings for seven languages: Croatian, Estonian, Finnish, Latvian, Lithuanian, Slovenian, and Swedish. We present the necessary background on embeddings and contextual embeddings, the details of training the embedding models, and their evaluation. We show that the size of used training sets importantly affects the quality of produced embeddings, and therefore the existing publicly available ELMo embeddings for the processed languages are inadequate. We trained new ELMo embeddings on larger training sets and analysed their properties on the analogy task and on the NER task. The results show that the newly produced contextual embeddings produce substantially better results compared to the non-contextual fastText baseline. In future work, we plan to use the produced contextual embeddings on the problems of news media industry. The pretrained ELMo models will be deposited to the CLARIN repository by the time of the final version of this paper."
],
[
"The work was partially supported by the Slovenian Research Agency (ARRS) core research programme P6-0411. This paper is supported by European Union's Horizon 2020 research and innovation programme under grant agreement No 825153, project EMBEDDIA (Cross-Lingual Embeddings for Less-Represented Languages in European News Media). The results of this publication reflects only the authors' view and the EU Commission is not responsible for any use that may be made of the information it contains."
]
],
"section_name": [
"Introduction",
"ELMo",
"ELMo ::: ELMoForManyLangs",
"Training Data",
"Preprocessing and Training",
"Evaluation",
"Evaluation ::: Word Analogy Task",
"Evaluation ::: Named Entity Recognition",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"9bed1ca952b0a8085eac6990bb3ec7a1f91ef9aa",
"efa246da0f6b51d1fa8eb546b682782fc1eaf592"
],
"answer": [
{
"evidence": [
"Recently, ELMoForManyLangs BIBREF6 project released pre-trained ELMo models for a number of different languages BIBREF7. These models, however, were trained on a significantly smaller datasets. They used 20-million-words data randomly sampled from the raw text released by the CoNLL 2017 Shared Task - Automatically Annotated Raw Texts and Word Embeddings BIBREF8, which is a combination of Wikipedia dump and common crawl. The quality of these models is questionable. For example, we compared the Latvian model by ELMoForManyLangs with a model we trained on a complete (wikidump + common crawl) Latvian corpus, which has about 280 million tokens. The difference of each model on the word analogy task is shown in Figure FIGREF16 in Section SECREF5. As the results of the ELMoForManyLangs embeddings are significantly worse than using the full corpus, we can conclude that these embeddings are not of sufficient quality. For that reason, we computed ELMo embeddings for seven languages on much larger corpora. As this effort requires access to large amount of textual data and considerable computational resources, we made the precomputed models publicly available by depositing them to Clarin repository."
],
"extractive_spans": [],
"free_form_answer": "By 14 times.",
"highlighted_evidence": [
"They used 20-million-words data randomly sampled from the raw text released by the CoNLL 2017 Shared Task - Automatically Annotated Raw Texts and Word Embeddings BIBREF8, which is a combination of Wikipedia dump and common crawl. ",
"For example, we compared the Latvian model by ELMoForManyLangs with a model we trained on a complete (wikidump + common crawl) Latvian corpus, which has about 280 million tokens."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Although ELMo is trained on character level and is able to handle out-of-vocabulary words, a vocabulary file containing most common tokens is used for efficiency during training and embedding generation. The original ELMo model was trained on a one billion word large English corpus, with a given vocabulary file of about 800,000 words. Later, ELMo models for other languages were trained as well, but limited to larger languages with many resources, like German and Japanese.",
"FLOAT SELECTED: Table 1: The training corpora used. We report their size (in billions of tokens), and ELMo vocabulary size (in millions of tokens)."
],
"extractive_spans": [],
"free_form_answer": "up to 1.95 times larger",
"highlighted_evidence": [
"The original ELMo model was trained on a one billion word large English corpus, with a given vocabulary file of about 800,000 words.",
"FLOAT SELECTED: Table 1: The training corpora used. We report their size (in billions of tokens), and ELMo vocabulary size (in millions of tokens)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"9b9f69d60dce80720c71b47a59c916c399a0deb9",
"fe73363409db3dbb5b6af42b1c2b4e7e60cf118c"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 4: The results of NER evaluation task, averaged over 5 training and evaluation runs. The scores are average F1 score of the three named entity classes. The columns show FastText, ELMo, and the difference between them (∆(E − FT ))."
],
"extractive_spans": [],
"free_form_answer": "5 percent points.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 4: The results of NER evaluation task, averaged over 5 training and evaluation runs. The scores are average F1 score of the three named entity classes. The columns show FastText, ELMo, and the difference between them (∆(E − FT ))."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 4: The results of NER evaluation task, averaged over 5 training and evaluation runs. The scores are average F1 score of the three named entity classes. The columns show FastText, ELMo, and the difference between them (∆(E − FT ))."
],
"extractive_spans": [],
"free_form_answer": "0.05 F1",
"highlighted_evidence": [
"FLOAT SELECTED: Table 4: The results of NER evaluation task, averaged over 5 training and evaluation runs. The scores are average F1 score of the three named entity classes. The columns show FastText, ELMo, and the difference between them (∆(E − FT ))."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"five",
"five"
],
"paper_read": [
"no",
"no"
],
"question": [
"How larger are the training sets of these versions of ELMo compared to the previous ones?",
"What is the improvement in performance for Estonian in the NER task?"
],
"question_id": [
"603fee7314fa65261812157ddfc2c544277fcf90",
"09a1173e971e0fcdbf2fbecb1b077158ab08f497"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"ELMo",
"ELMo"
],
"topic_background": [
"familiar",
"familiar"
]
} | {
"caption": [
"Table 1: The training corpora used. We report their size (in billions of tokens), and ELMo vocabulary size (in millions of tokens).",
"Figure 1: Comparison of Latvian ELMo model by ELMoForManyLangs (blue, latvian-efml), Latvian ELMo model trained by EMBEDDIA team (yellow, latvian-embeddia), and Estonian ELMo model trained by EMBEDDIA team (black, estonian-embeddia). The performance is measured as accuracy@5 on word analogy task, where categories 1 to 5 are semantic, and categories 6 to 15 are syntactic. The embeddings use weights of the first LSTM layer (ie. the second layer overall).",
"Table 2: The embeddings quality measured on the word analogy task, using acc@5 score. Each language is represented with its 2-letter ISO code. Results are shown for each layer separately and are averaged over all semantic (sem) and all syntactic (syn) categories, so that each category has an equal weight (i.e. results are first averaged for each category, and these averages are then averaged with equal weights).",
"Table 3: The number of words labeled with each label (PER, LOC, ORG) and the density of these labels (their sum divided by the number of all words) for datasets in all languages.",
"Table 4: The results of NER evaluation task, averaged over 5 training and evaluation runs. The scores are average F1 score of the three named entity classes. The columns show FastText, ELMo, and the difference between them (∆(E − FT ))."
],
"file": [
"3-Table1-1.png",
"4-Figure1-1.png",
"4-Table2-1.png",
"5-Table3-1.png",
"5-Table4-1.png"
]
} | [
"How larger are the training sets of these versions of ELMo compared to the previous ones?",
"What is the improvement in performance for Estonian in the NER task?"
] | [
[
"1911.10049-ELMo-3",
"1911.10049-3-Table1-1.png",
"1911.10049-ELMo ::: ELMoForManyLangs-0"
],
[
"1911.10049-5-Table4-1.png"
]
] | [
"up to 1.95 times larger",
"0.05 F1"
] | 105 |
1812.06864 | Fully Convolutional Speech Recognition | Current state-of-the-art speech recognition systems build on recurrent neural networks for acoustic and/or language modeling, and rely on feature extraction pipelines to extract mel-filterbanks or cepstral coefficients. In this paper we present an alternative approach based solely on convolutional neural networks, leveraging recent advances in acoustic models from the raw waveform and language modeling. This fully convolutional approach is trained end-to-end to predict characters from the raw waveform, removing the feature extraction step altogether. An external convolutional language model is used to decode words. On Wall Street Journal, our model matches the current state-of-the-art. On Librispeech, we report state-of-the-art performance among end-to-end models, including Deep Speech 2 trained with 12 times more acoustic data and significantly more linguistic data. | {
"paragraphs": [
[
"Recent work on convolutional neural network architectures showed that they are competitive with recurrent architectures even on tasks where modeling long-range dependencies is critical, such as language modeling BIBREF0 , machine translation BIBREF1 , BIBREF2 and speech synthesis BIBREF3 . In end-to-end speech recognition however, recurrent architectures are still prevalent for acoustic and/or language modeling BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 .",
"There is a history of using convolutional networks in speech recognition, but only as part of an otherwise more traditional pipeline. They have been first introduced as TDNNs to predict phoneme classes BIBREF9 , and later to generate HMM posteriorgrams BIBREF10 . They have more recently been used in end-to-end frameworks, but only in combination with recurrent layers BIBREF6 , or n-gram language models BIBREF11 , or for phone recognition BIBREF12 , BIBREF13 . Nonetheless, convolutional architectures are prevalent when learning from the raw waveform BIBREF14 , BIBREF15 , BIBREF16 , BIBREF13 , BIBREF17 , because they naturally model the computation of standard features such as mel-filterbanks. Given the evidence that they are also suitable on long-range dependency tasks, we expect convolutional neural networks to be competitive at all levels of the speech recognition pipeline.",
"In this paper, we present a fully convolutional approach to end-to-end speech recognition. Building on recent advances in convolutional learnable front-ends for speech BIBREF13 , BIBREF17 , convolutional acoustic models BIBREF11 , and convolutional language models BIBREF0 , the paper has four main contributions:"
],
[
"Our approach, described in this section, is illustrated in Fig. FIGREF5 ."
],
[
"Several proposals to learn the front-end of speech recognition systems have been made BIBREF15 , BIBREF16 , BIBREF13 , BIBREF17 . Following the comparison in BIBREF17 , we consider their best architecture, called \"scattering based\" (hereafter refered to as learnable front-end). The learnable front-end contains first a convolution of width 2 that emulates the pre-emphasis step used in mel-filterbanks. It is followed by a complex convolution of width 25ms and INLINEFORM0 filters. After taking the squared absolute value, a low-pass filter of width 25ms and stride 10ms performs decimation. The front-end finally applies a log-compression and a per-channel mean-variance normalization (equivalent to an instance normalization layer BIBREF18 ). Following BIBREF17 , the \"pre-emphasis\" convolution is initialized to INLINEFORM1 , and then trained with the rest of the network. The low-pass filter is kept constant to a squared Hanning window, and the complex convolutional layer is initialized randomly. In addition to the INLINEFORM2 filters used by BIBREF17 , we experiment with INLINEFORM3 filters. Notice that since the stride is the same as for mel-filterbanks, acoustic models on top of the learnable front-ends can also be applied to mel-filterbanks (simply modifying the number of input channels if INLINEFORM4 )."
],
[
"The acoustic model is a convolutional neural network with gated linear units BIBREF0 , which is fed with the output of the learnable front-end. Following BIBREF11 , the networks uses a growing number of channels, and dropout BIBREF19 for regularization. These acoustic models are trained to predict letters directly with the Auto Segmentation Criterion (ASG) BIBREF20 . The only differences between the WSJ and Librispeech models are their depth, the number of feature maps per layer, the receptive field and the amount of dropout."
],
[
"The convolutional language model (LM) is the GCNN-14B from BIBREF0 , which achieved competitive results on several language modeling benchmarks. The network contains 14 convolutional residual blocks BIBREF21 with a growing number of channels, and uses gated linear units as activation function.",
"The language model is used to score candidate transcriptions in addition to the acoustic model in the beam search decoder described in the next section. Compared to n-gram LMs, convolutional LMs allow for much larger context sizes. Our detailed experiments study the effect of context size on the final speech recognition performance."
],
[
"We use the beam-search decoder presented in BIBREF11 to generate word sequences given the output from our acoustic model. The decoder finds the word transcription INLINEFORM0 to maximize: INLINEFORM1 ",
"where INLINEFORM0 is the value for the INLINEFORM1 th frame in the path leading to INLINEFORM2 and INLINEFORM3 is the (unnormalized) acoustic model score of the transcription INLINEFORM4 . The hyperparameters INLINEFORM5 respectively control the weight of the language model, the word insertion reward, and the silence insertion penalty. The other parameters are the beam size and the beam score, a threshold under which candidates are discarded even if the beam is not full. These are chosen according to a trade-off between (near-)optimality of the search and computational cost."
],
[
"We evaluate our approach on the large vocabulary task of the Wall Street Journal (WSJ) dataset BIBREF25 , which contains 80 hours of clean read speech, and Librispeech BIBREF26 , which contains 1000 hours with separate train/dev/test splits for clean and noisy speech. Each dataset comes with official textual data to train language models, which contain 37 million tokens for WSJ, 800 million tokens for Librispeech. Our language models are trained separately for each dataset on the official text data only. These datasets were chosen to study the impact of the different components of our system at different scales of training data and in different recording conditions.",
"The models are evaluated in Word Error Rate (WER). Our experiments use the open source codes of wav2letter for the acoustic model, and fairseq for the language model. More details on the experimental setup are given below.",
"Baseline Our baseline for each dataset follows BIBREF11 . It uses the same convolutional acoustic model as our approach but a mel-filterbanks front-end and a 4-gram language model.",
"Training/test splits On WSJ, models are trained on si284. nov93dev is used for validation and nov92 for test. On Librispeech, we train on the concatenation of train-clean and train-other. The validation set is dev-clean when testing on test-clean, and dev-other when testing on test-other.",
"Acoustic model architecture The architecture for the convolutional acoustic model is the \"high dropout\" model from BIBREF11 for Librispeech, which has 19 layers in addition to the front-end (mel-filterbanks for the baseline, or the learnable front-end for our approach). On WSJ, we use the lighter version used in BIBREF17 , which has 17 layers. Dropout is applied at each layer after the front-end, following BIBREF20 . The learnable front-end uses 40 or 80 filters. Language model architecture As described in Section SECREF8 , we use the GCNN-14B model of BIBREF0 with dropout at each convolutional and linear layer on both WSJ and Librispeech. We keep all the words (162K) in WSJ training corpus. For Librispeech, we only use the most frequent 200K tokens (out of 900K).",
"Hyperparameter tuning The acoustic models are trained following BIBREF11 , BIBREF17 , using SGD with a decreasing learning rate, weight normalization and gradient clipping at 0.2 and a momentum of 0.9. The language models are trained with Nesterov accelerated gradient BIBREF27 . Following BIBREF0 , we also use weight normalization and gradient clipping.",
"The parameters of the beam search (see Section SECREF9 ) INLINEFORM0 , INLINEFORM1 and INLINEFORM2 are tuned on the validation set with a beam size of 2500 and a beam score of 26 for computational efficiency. Once INLINEFORM3 are chosen, the test WER is computed with a beam size of 3000 and a beam score of 50."
],
[
"Table TABREF11 shows Word Error Rates (WER) on WSJ for the current state-of-the-art and our models. The current best model trained on this dataset is an HMM-based system which uses a combination of convolutional, recurrent and fully connected layers, as well as speaker adaptation, and reaches INLINEFORM0 WER on nov92. DeepSpeech 2 shows a WER of INLINEFORM1 but uses 150 times more training data for the acoustic model and huge text datasets for LM training. Finally, the state-of-the-art among end-to-end systems trained only on WSJ, and hence the most comparable to our system, uses lattice-free MMI on augmented data (with speed perturbation) and gets INLINEFORM2 WER. Our baseline system, trained on mel-filterbanks, and decoded with a n-gram language model has a INLINEFORM3 WER. Replacing the n-gram LM by a convolutional one reduces the WER to INLINEFORM4 , and puts our model on par with the current best end-to-end system. Replacing the speech features by a learnable frontend finally reduces the WER to INLINEFORM5 and then to INLINEFORM6 when doubling the number of learnable filters, improving over DeepSpeech 2 and matching the performance of the best HMM-DNN system.",
"Table TABREF10 reports WER on the Librispeech dataset. The CAPIO BIBREF22 ensemble model combines the lattices from 8 individual HMM-DNN systems (using both convolutional and LSTM layers), and is the current state-of-the-art on Librispeech. CAPIO (single) is the best individual system, selected either on dev-clean or dev-other. The sequence-to-sequence baseline is an encoder-decoder with attention and a BPE-level BIBREF28 LM, and currently the best end-to-end system on this dataset. We can observe that our fully convolutional model improves over CAPIO (Single) on the clean part, and is the current best end-to-end system on test-other with an improvement of INLINEFORM0 absolute. Our system also outperforms DeepSpeech 2 on both test sets by a significant margin. An interesting observation is the impact of each convolutional block. While replacing the 4-gram LM by a convolutional LM improves similarly on the clean and noisier parts, learning the speech frontend gives similar performance on the clean part but significantly improves the performance on noisier, harder utterances, a finding that is consistent with previous literature BIBREF15 ."
],
[
"Since this paper uses convolutional language models for speech recognition systems for the first time, we present additional studies of the language model in isolation. These experiments use our best language model on Librispeech, and evaluations in WER are carried out using the baseline system trained on mel-filterbanks. The decoder parameters are tuned using the grid search described in Section SECREF3 , a beam size is fixed to 2500 and a beam score to 30.",
"Correlation between perplexity and WER Figure FIGREF18 shows the correlation between perplexity and WER as the training progresses. As perplexity decreases, the WER on both dev-clean and dev-other also decreases following the same trend. It illustrates that perplexity on the linguistic data is a good surrogate of the final performance of the speech recognition pipeline. Architectural choices or hyper-parameter tuning can thus be carried out mostly using perplexity alone.",
"Influence of context size By limiting the context passed into the LM from the decoder, Table TABREF19 reports WER obtained for context sizes ranging from 3 (comparable to the n-gram baseline) to 50 for our best language model. The WER decreases monotonically until a context size of about 20, and then almost stays still. We observe that the convolutional LM already improves on the n-gram model even with the same context size. Increasing the context gives a significant boost in performance, with the major gains obtained between a context of 3 to 9 ( INLINEFORM0 absolute WER)."
],
[
"We introduced the first fully convolutional pipeline for speech recognition, that can directly process the raw waveform and shows state-of-the art performance on Wall Street Journal and on Librispeech among end-to-end systems. This first attempt at exploiting convolutional language models in speech recognition shows significant improvement over a 4-gram language model on both datasets. Replacing mel-filterbanks by a learnable front-end gives additional gains in performance, that appear to be more prevalent on noisy data. This suggests learning the front-end is a promising avenue for speech recognition with challenging recording conditions."
]
],
"section_name": [
"Introduction",
"Model",
"Convolutional Front end",
"Convolutional Acoustic Model",
"Convolutional Language Model",
"Beam-search decoder",
"Experiments",
"Word Error Rate results",
"Analysis of the convolutional language model",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"a07665bf794978eee76c7e94fe662a9bf9236b4d",
"c1a4c7fc421a326e7b72370f56b2f5f25566c4d5"
],
"answer": [
{
"evidence": [
"Table TABREF11 shows Word Error Rates (WER) on WSJ for the current state-of-the-art and our models. The current best model trained on this dataset is an HMM-based system which uses a combination of convolutional, recurrent and fully connected layers, as well as speaker adaptation, and reaches INLINEFORM0 WER on nov92. DeepSpeech 2 shows a WER of INLINEFORM1 but uses 150 times more training data for the acoustic model and huge text datasets for LM training. Finally, the state-of-the-art among end-to-end systems trained only on WSJ, and hence the most comparable to our system, uses lattice-free MMI on augmented data (with speed perturbation) and gets INLINEFORM2 WER. Our baseline system, trained on mel-filterbanks, and decoded with a n-gram language model has a INLINEFORM3 WER. Replacing the n-gram LM by a convolutional one reduces the WER to INLINEFORM4 , and puts our model on par with the current best end-to-end system. Replacing the speech features by a learnable frontend finally reduces the WER to INLINEFORM5 and then to INLINEFORM6 when doubling the number of learnable filters, improving over DeepSpeech 2 and matching the performance of the best HMM-DNN system.",
"FLOAT SELECTED: Table 1: WER (%) on the open vocabulary task of WSJ."
],
"extractive_spans": [],
"free_form_answer": "CNN-DNN-BLSTM-HMM",
"highlighted_evidence": [
"Table TABREF11 shows Word Error Rates (WER) on WSJ for the current state-of-the-art and our models. The current best model trained on this dataset is an HMM-based system which uses a combination of convolutional, recurrent and fully connected layers, as well as speaker adaptation, and reaches INLINEFORM0 WER on nov92.",
"FLOAT SELECTED: Table 1: WER (%) on the open vocabulary task of WSJ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Table TABREF11 shows Word Error Rates (WER) on WSJ for the current state-of-the-art and our models. The current best model trained on this dataset is an HMM-based system which uses a combination of convolutional, recurrent and fully connected layers, as well as speaker adaptation, and reaches INLINEFORM0 WER on nov92. DeepSpeech 2 shows a WER of INLINEFORM1 but uses 150 times more training data for the acoustic model and huge text datasets for LM training. Finally, the state-of-the-art among end-to-end systems trained only on WSJ, and hence the most comparable to our system, uses lattice-free MMI on augmented data (with speed perturbation) and gets INLINEFORM2 WER. Our baseline system, trained on mel-filterbanks, and decoded with a n-gram language model has a INLINEFORM3 WER. Replacing the n-gram LM by a convolutional one reduces the WER to INLINEFORM4 , and puts our model on par with the current best end-to-end system. Replacing the speech features by a learnable frontend finally reduces the WER to INLINEFORM5 and then to INLINEFORM6 when doubling the number of learnable filters, improving over DeepSpeech 2 and matching the performance of the best HMM-DNN system."
],
"extractive_spans": [
"HMM-based system"
],
"free_form_answer": "",
"highlighted_evidence": [
"Table TABREF11 shows Word Error Rates (WER) on WSJ for the current state-of-the-art and our models. The current best model trained on this dataset is an HMM-based system which uses a combination of convolutional, recurrent and fully connected layers, as well as speaker adaptation, and reaches INLINEFORM0 WER on nov92."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
""
],
"paper_read": [
""
],
"question": [
"what is the state of the art on WSJ?"
],
"question_id": [
"70e9210fe64f8d71334e5107732d764332a81cb1"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
""
],
"topic_background": [
""
]
} | {
"caption": [
"Figure 1: Overview of the fully convolutional architecture.",
"Table 1: WER (%) on the open vocabulary task of WSJ.",
"Table 2: WER (%) on Librispeech.",
"Figure 2: Evolution of WER (%) on Librispeech with the perplexity of the language model.",
"Figure 3: Center frequency of the front-end filters, for the melfilterbank baseline and the learnable front-ends.",
"Figure 4: Power heatmap of the 40 mel-filters (left) and of the frequency response of the 40 convolutional filters learned from the raw waveform on Librispeech (right).",
"Table 3: Evolution of WER (%) on Librispeech with the context size of the language model."
],
"file": [
"2-Figure1-1.png",
"3-Table1-1.png",
"3-Table2-1.png",
"4-Figure2-1.png",
"4-Figure3-1.png",
"4-Figure4-1.png",
"4-Table3-1.png"
]
} | [
"what is the state of the art on WSJ?"
] | [
[
"1812.06864-Word Error Rate results-0",
"1812.06864-3-Table1-1.png"
]
] | [
"CNN-DNN-BLSTM-HMM"
] | 106 |
1811.12254 | The Effect of Heterogeneous Data for Alzheimer's Disease Detection from Speech | Speech datasets for identifying Alzheimer's disease (AD) are generally restricted to participants performing a single task, e.g. describing an image shown to them. As a result, models trained on linguistic features derived from such datasets may not be generalizable across tasks. Building on prior work demonstrating that same-task data of healthy participants helps improve AD detection on a single-task dataset of pathological speech, we augment an AD-specific dataset consisting of subjects describing a picture with multi-task healthy data. We demonstrate that normative data from multiple speech-based tasks helps improve AD detection by up to 9%. Visualization of decision boundaries reveals that models trained on a combination of structured picture descriptions and unstructured conversational speech have the least out-of-task error and show the most potential to generalize to multiple tasks. We analyze the impact of age of the added samples and if they affect fairness in classification. We also provide explanations for a possible inductive bias effect across tasks using model-agnostic feature anchors. This work highlights the need for heterogeneous datasets for encoding changes in multiple facets of cognition and for developing a task-independent AD detection model. | {
"paragraphs": [
[
"",
"Alzheimer’s disease (AD) is a neurodegenerative disease affecting over 40 million people worldwide with high costs of acute and long-term care BIBREF0 . Recruitment of participants with cognitive impairment has historically been a bottleneck in clinical trials BIBREF1 , making AD datasets relatively small. Additionally, though cognitive assessments test domains of cognition through multiple tasks, most available datasets of pathological speech are restricted to participants performing a single task. Picture description using an image to elicit narrative discourse samples is one such task that has proved to be successful in detecting AD BIBREF2 . However, it is important to develop ML models of high performance that would produce results generalizable across different tasks.",
"Several studies have used natural language processing and machine learning to distinguish between healthy and cognitively impaired speech of participants describing a picture. Fraser et al. BIBREF3 used linguistic and acoustic features to classify healthy and pathological speech transcripts with an accuracy of INLINEFORM0 . Similarly, Karlekar et al. BIBREF4 classified utterances of speakers as AD or healthy (HC) with an accuracy of INLINEFORM1 using an enlarged, utterance-level view of transcripts of picture descriptions. In line with previous research, we use linguistic and acoustic features of speech as input to our ML model. Furthermore, we extend the model to using data from several different tasks.",
"Noorian et al. BIBREF5 demonstrated that using within-task data of healthy participants describing a picture improved AD detection performance by up to 13%. In this paper, we evaluate if model performance improves with the addition of data from healthy participants, with varying ages, performing either the same or different tasks. We find that models trained on datasets of picture description tasks augmented with conversational speech of healthy speakers learn decision boundaries that are more generalizable across activities with lower out-of-task errors. We observe a 9% increase in AD detection performance when normative data from different tasks are utilized. We also analyze if each task provides domain-specific inductive bias for other tasks to obtain a model setting capable of detecting AD from any sample of speech using high-precision model-agnostic explanations proposed by Ribeiro et al. BIBREF6 and computation of various error metrics related to classification.",
""
],
[
"",
" All datasets shown in Tab. SECREF2 were transcribed manually by trained transcriptionists, employing the same list of annotations and protocols, with the same set of features extracted from the transcripts (see Sec. SECREF3 ). HAPD and HAFP are jointly referred to as HA.",
""
],
[
"",
"Feature Extraction: We extract 297 linguistic features from the transcripts and 183 acoustic features from the associated audio files, all task-independent. Linguistic features encompass syntactic features (e.g. syntactic complexity BIBREF9 ), lexical features (e.g. occurrence of production rules). Acoustic features include Mel-frequency Cepstral Coefficients (MFCCs) & pause-related features (e.g., mean pause duration). We also use sentiment lexical norms BIBREF10 , local, and global coherence features BIBREF11 .",
"Feature Predicates as Anchors for Prediction: Given a black box classifier INLINEFORM0 with interpretable input representation, Ribeiro et al. BIBREF6 define anchors INLINEFORM1 as a set of input rules such that when conditions in the rule are met, humans can confidently predict the behavior of a model with high precision. Since the inputs to the classifier are engineered features with finite ranges,we can obtain sufficient conditions for the prediction INLINEFORM2 in terms of interpretable feature thresholds for an unseen instance INLINEFORM3 . Anchors are found by maximizing the metric of coverage, defined as the probability of anchors holding true to samples in the data distribution INLINEFORM4 , in BIBREF6 . Hence, INLINEFORM5 is maximized, where INLINEFORM6 .",
"We show in Sec. SECREF13 that anchors identified from a model trained on multiple tasks have more coverage over the data distribution than those obtained from a model trained on a single task. Such a scenario is possible when task-independant, clinically relevant speech features are selected as anchors (e.g., fraction of filled pauses in speech BIBREF12 , acoustic features BIBREF13 etc. ). Additionally, such selected anchors must also be associated with thresholds applicable across multiple types of speech.",
""
],
[
"",
"Binary classification of each speech transcript as AD or HC is performed. We do 5-fold cross-validation, stratified by subject so that each subject's samples do not occur in both training and testing sets in each fold. The minority class is oversampled in the training set using SMOTE BIBREF14 to deal with the class imbalance. We consider a Random Forest (100 trees), Naïve Bayes (with equal priors), SVM (with RBF kernel), and a 2-layer neural network (10 units, Adam optimizer, 500 epochs) BIBREF15 . Additionally, we augment the DB data with healthy samples from FP with varied ages.",
""
],
[
""
],
[
"",
"Since data of different tasks have different noise patterns, the probability of overfitting to noise is reduced with samples from different tasks. This can also be visualized as decision boundaries of models trained on various dataset combinations. For Fig. SECREF2 , we embed the 480-dimensional feature vector into 2 dimensions using Locally Linear Embeddings BIBREF16 trained on DB.",
"",
"",
"In datasets consisting of picture descriptions and conversational speech (DB + FP), the feature ranges increase as compared to picture description tasks, so it is expected that a classifier trained on structured tasks only (DB + HAFP) would incorrectly classify healthy samples in the fourth quadrant (error rates for tasks not in dataset is 17.8%). However, decision boundaries for models trained on a mix of structured tasks and unstructured conversational speech seem to be more generalizable across tasks. E.g., decision boundaries obtained from DB + FP could apply to most datapoints in HAFP (out of task error rate is 3.6%). Clinically, some of the features used such as the patterns in usage of function words like pronouns have shown to reflect stress-related changes in gene expression, possibly caused due to dementia BIBREF17 which would not depend on the task type and could explain such a common underlying structure to features.",
""
],
[
"",
"Results of binary classification with different dataset combinations (i.e., the proportion of each dataset used) are in Tab. SECREF7 . The highest F1 score on DB is INLINEFORM0 with SVM as obtained by Noorian et al. BIBREF5 , enabling similar comparisons.",
"",
"",
"We see the same trend of increasing model performance with normative data from the picture description task, as shown by Noorian et al. BIBREF5 . We observe that this increase is independent of the nature of the task performed – normative picture description task data of similar size as in BIBREF5 and the same amount of normative data from different structured tasks of fluency tests and paragraph reading prove to be helpful, bringing about a similar increase in scores (+2%, +5% absolute F1 micro and macro). Interestingly, performance of detecting the majority (healthy) class (reflected in F1 micro) as well as the minority (AD) class (reflected in F1 macro) increases with additional data.",
"Augmenting DB with same amount of samples from structured tasks (HA) and from conversational speech (FP) brings about similar performance. Doubling the initial amount of control data with data from a different structured task (HA, HAFP) results in an increase of up to 9% in F1 scores.",
""
],
[
"",
"",
"We augment DB with healthy samples from FP with varying ages (Tab. SECREF11 ), considering 50 samples for each 15 year duration starting from age 30. Adding the same number of samples from bins of age greater than 60 leads to greater increase in performance. This could be because the average age of participants in the datasets (DB, HA etc.) we use are greater than 60. Note that despite such a trend, addition of healthy data produces fair classifiers with respect to samples with age INLINEFORM0 60 and those with age INLINEFORM1 60 (balanced F1 scores of 75.6% and 76.1% respectively; further details in App. SECREF43 .)",
""
],
[
"",
"Each task performed in the datasets is designed to assess different cognitive functions, e.g. fluency task is used to evaluate the ability to organize and plan BIBREF18 and picture description task – for detecting discourse-related impairments BIBREF19 . As a result, it is expected that the nature of decision functions and feature predicates learned on data of each of these tasks would be different. Performance of AD identification with addition of normative data from multiple tasks (Tab. SECREF7 ), despite the possibly different nature of decision functions, suggests that training the model with samples from each task provides domain-specific inductive bias for other tasks. We study possible underlying mechanisms responsible for this, suggested by Caruana et al. BIBREF20 and Ruder et al. BIBREF21 .",
"Attention-focusing on Relevant Features: Ruder et al. BIBREF21 claim that in a small, high-dimensional dataset, information regarding relevance or irrelevance of particular features is difficult to capture. However, data related to multiple tasks can help identify features relevant across different activities. We can use anchor variables BIBREF6 to show this effect. The coverage of features anchoring the prediction of an instance indicates the applicability of the feature predicate to the rest of the data distribution and hence the importance of the feature across the data distribution. The coverage of the anchors selected for a test set which is 10% (50 samples) of DB changes by 40.8% (from 0.05 to 0.07) on the addition of the HA, which indicates that there is an attention focusing effect.",
"Representation bias: As shown by Schulz et al. BIBREF22 , models trained on data from multiple tasks perform better than with single-task information when little training data is available for the main task. The non-linear trend of increase in model performance with the addition of different amounts of data is shown in App. SECREF41 . The F1 micro score of the best performing model trained on DB + HA is 82.28% for picture description tasks, 95.4% for paragraph reading and 97.01% for fluency tasks. This shows greater than trivial performance for each task and improvement in performance for picture description task from training a model purely on DB. Such an effect helps the model achieve non-trivial performance on AD detection for novel tasks measuring multiple domains of cognition, given a sufficiently large number of training tasks according to algorithms provided by Baxter et al. BIBREF23 . Hence, training models on many speech-based tasks could help develop an algorithm capable of detecting AD from any sample of spontaneous speech.",
"Ongoing work is on detailed analysis of nature and polarity of feature trends across various speech tasks. Future work will focus on learning interpretable latent representations based on the observations made, capable of good predictive performance across a multitude of tasks."
],
[
"DementiaBank (DB): The DementiaBank dataset is the largest available public dataset of speech for assessing cognitive impairments. It consists of narrative picture descriptions from participants aged between 45 to 90 BIBREF24 . In each sample, a participant describes the picture that they are shown. Out of the 210 participants in the study, 117 were diagnosed with AD ( INLINEFORM0 samples of speech) and 93 were healthy (HC; INLINEFORM1 samples) with many subjects repeating the task with an interval of a year. Demographics of age, sex, and years of education are provided in the dataset.",
"Healthy Aging (HA) : The Healthy Aging dataset consists of speech samples of cognitively healthy participants ( INLINEFORM0 ) older than 50 years. Each participant performs three structured tasks – picture description (HAPD), verbal fluency test, and a paragraph reading task. Fluency and paragraph tasks are jointly referred to as HAFP. The average number of samples per participant is 14.46. The dataset constitutes 8.5 hours of total audio.",
"Famous People (FP): The Famous People dataset BIBREF8 consists of publicly available spontaneous speech samples from 9 famous individuals (e.g., Woody Allen & Clint Eastwood) over the period from 1956 to 2017, spanning periods from early adulthood to older age, with an average of 25 samples per person. We use speech samples of these subjects who are considered to be healthy ( INLINEFORM0 ), given an absence of any reported diagnosis or subjective memory complaints. This healthy control (HC) group covers a variety of speaker ages, from 30 to 88 ( INLINEFORM1 , INLINEFORM2 )."
],
[
"A list of 480 features belonging to three groups - acoustic, semantic/ syntactic and lexical. These features include constituency-parsing based features, syntactic complexity features extracted using Lu Syntactic Complexity analyzer BIBREF9 , MFCC means, variances and other higher order moments. Few of these features are listed below :",
"Phonation rate : Percentage of recording that is voiced.",
"Mean pause duration : Mean duration of pauses in seconds.",
"Pause word ratio : Ratio of silent segments to voiced segments.",
"Short pause count normalized : Normalized number of pauses less than 1 second.",
"Medium pause count normalized : Normalized number of pauses between 1 second and 2 seconds in length.",
"ZCR kurtosis : Kurtosis of Zero Crossing Rate (ZCR) of all voiced segments across frames.",
"MFCC means : Mean of velocity of MFCC coefficient over all frames (this is calculated for multiple coefficients).",
"MFCC kurtosis: Kurtosis of mean features.",
"MFCC variance: Variance of acceleration of frame energy over all frames.",
"Moving-average type-token ratio (MATTR): Moving average TTR (type-token ratio) over a window of 10 tokens.",
"Cosine cutoff : Fraction of pairs of utterances with cosine distance INLINEFORM0 0.001.",
"Pauses of type `uh' : The number of `uh' fillers over all tokens.",
"Numbers of interjections/numerals : The number of interjections/numerals used over all tokens.",
"Noun ratio: Ratio of number of nouns to number of nouns + verbs.",
"Temporal cohesion feature : Average number of switches in tense.",
"Speech graph features : Features extracted from graph of spoken words in a sample including average total degree, number of edges, average shortest path, graph diameter (undirected) and graph density.",
"Filled pauses : Number of non-silent pauses.",
"Noun frequency : Average frequency norm for all nouns.",
"Noun imageability: Average imageability norm for all nouns.",
"Features from parse-tree : Number of times production rules such as number of noun phrases to determiners occurrences, occur over the total number of productions in the transcript's parse tree.",
"Syntactic complexity features: Ratio of clauses to T-units, Ratio of clauses to sentences etc. BIBREF9 "
],
[
"Gaussian Naive Bayes with balanced priors is used.",
"The random forest classifier fits 100 decision trees with other default parameters in BIBREF15 .",
"SVM is trained with radial basis function kernel, regularization parameter INLINEFORM0 and INLINEFORM1 .",
"The NN consists of one hidden layer of 10 units. The tanh activation function is used at each hidden layer. The network is trained using Adam for 100 epochs with other default parameters in BIBREF15 ."
],
[
"The effect of augmenting DB with data from a different structured task (HAFP) is shown in SECREF41 .",
"F1 scores (micro and macro) increase non-linearly with the addition of data."
],
[
"We evaluate fairness of classification with respect to two groups - samples with age INLINEFORM0 60 and those with age INLINEFORM1 60. A fair classifier would produce comparable classification scores for both groups. For the best performing classifier on DB, the F1 (micro) score for samples with age INLINEFORM2 60 is 85.9% and with age INLINEFORM3 60 is 76.4%. With the addition of HA, the F1 (micro) score for samples with age INLINEFORM4 60 and with age INLINEFORM5 is more balanced (75.6%, 76.1% respectively) for the same set of data points from DB. Note that the average age in both datasets are similar ( INLINEFORM6 )."
]
],
"section_name": [
"Introduction",
"Data",
"Methods",
"Experiments",
"Results and Discussion",
"Visualization of Class Boundaries",
"Classification Performance",
"Impact of Age",
"Inductive Bias of Tasks",
"Detailed Description of Datasets",
"Features",
"Hyper-parameters:",
"Effect of data from different tasks:",
"Fairness with Respect to Age:"
]
} | {
"answers": [
{
"annotation_id": [
"a472aa15bb7d7477cec691270d379811e2047efb",
"d85088593122cbce80a26b64825141642b0ec9cf"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 1: Speech datasets used. Note that HAPD, HAFP and FP only have samples from healthy subjects. Detailed description in App. 2.",
"All datasets shown in Tab. SECREF2 were transcribed manually by trained transcriptionists, employing the same list of annotations and protocols, with the same set of features extracted from the transcripts (see Sec. SECREF3 ). HAPD and HAFP are jointly referred to as HA.",
"Binary classification of each speech transcript as AD or HC is performed. We do 5-fold cross-validation, stratified by subject so that each subject's samples do not occur in both training and testing sets in each fold. The minority class is oversampled in the training set using SMOTE BIBREF14 to deal with the class imbalance. We consider a Random Forest (100 trees), Naïve Bayes (with equal priors), SVM (with RBF kernel), and a 2-layer neural network (10 units, Adam optimizer, 500 epochs) BIBREF15 . Additionally, we augment the DB data with healthy samples from FP with varied ages.",
"We augment DB with healthy samples from FP with varying ages (Tab. SECREF11 ), considering 50 samples for each 15 year duration starting from age 30. Adding the same number of samples from bins of age greater than 60 leads to greater increase in performance. This could be because the average age of participants in the datasets (DB, HA etc.) we use are greater than 60. Note that despite such a trend, addition of healthy data produces fair classifiers with respect to samples with age INLINEFORM0 60 and those with age INLINEFORM1 60 (balanced F1 scores of 75.6% and 76.1% respectively; further details in App. SECREF43 .)",
"FLOAT SELECTED: Table 3: Augmenting DB with healthy data of varied ages. Scores averaged across 4 classifiers."
],
"extractive_spans": [],
"free_form_answer": "609",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Speech datasets used. Note that HAPD, HAFP and FP only have samples from healthy subjects. Detailed description in App. 2.",
"\nAll datasets shown in Tab. SECREF2 were transcribed manually by trained transcriptionists, employing the same list of annotations and protocols, with the same set of features extracted from the transcripts (see Sec. SECREF3 ). HAPD and HAFP are jointly referred to as HA.",
"Additionally, we augment the DB data with healthy samples from FP with varied ages.",
"We augment DB with healthy samples from FP with varying ages (Tab. SECREF11 ), considering 50 samples for each 15 year duration starting from age 30. ",
"FLOAT SELECTED: Table 3: Augmenting DB with healthy data of varied ages. Scores averaged across 4 classifiers."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
}
],
"nlp_background": [
""
],
"paper_read": [
""
],
"question": [
"what is the size of the augmented dataset?"
],
"question_id": [
"57f23dfc264feb62f45d9a9e24c60bd73d7fe563"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
""
],
"topic_background": [
""
]
} | {
"caption": [
"Table 1: Speech datasets used. Note that HAPD, HAFP and FP only have samples from healthy subjects. Detailed description in App. 2.",
"Figure 1: Decision boundaries with RF classifier for datasets with their out-of-task error shown in bold; scattered points shown belong to the train set in each case. For models trained using general, task-independent features on picture description (Fig.1a) & other structured tasks from HAFP such as fluency (Fig.1b), decision boundaries are patchy as a result of few, far-lying points from the classes (e.g, in the fourth quadrant), leading to misclassifications on other tasks with varying feature ranges. However, on datasets consisting of general, unstructured conversations, this does not happen Fig.1c.",
"Table 2: AD vs HC classification. Highest F1 scores are shown in bold for datasets of similar size.",
"Table 3: Augmenting DB with healthy data of varied ages. Scores averaged across 4 classifiers.",
"Figure A.4: Effect of addition of data from a different structured task on F1 (micro) and F1 (macro)"
],
"file": [
"2-Table1-1.png",
"3-Figure1-1.png",
"3-Table2-1.png",
"4-Table3-1.png",
"8-FigureA.4-1.png"
]
} | [
"what is the size of the augmented dataset?"
] | [
[
"1811.12254-Impact of Age-2",
"1811.12254-Experiments-1",
"1811.12254-4-Table3-1.png",
"1811.12254-2-Table1-1.png"
]
] | [
"609"
] | 108 |
1806.05504 | Entity Commonsense Representation for Neural Abstractive Summarization | A major proportion of a text summary includes important entities found in the original text. These entities build up the topic of the summary. Moreover, they hold commonsense information once they are linked to a knowledge base. Based on these observations, this paper investigates the usage of linked entities to guide the decoder of a neural text summarizer to generate concise and better summaries. To this end, we leverage on an off-the-shelf entity linking system (ELS) to extract linked entities and propose Entity2Topic (E2T), a module easily attachable to a sequence-to-sequence model that transforms a list of entities into a vector representation of the topic of the summary. Current available ELS's are still not sufficiently effective, possibly introducing unresolved ambiguities and irrelevant entities. We resolve the imperfections of the ELS by (a) encoding entities with selective disambiguation, and (b) pooling entity vectors using firm attention. By applying E2T to a simple sequence-to-sequence model with attention mechanism as base model, we see significant improvements of the performance in the Gigaword (sentence to title) and CNN (long document to multi-sentence highlights) summarization datasets by at least 2 ROUGE points. | {
"paragraphs": [
[
"Text summarization is a task to generate a shorter and concise version of a text while preserving the meaning of the original text. The task can be divided into two subtask based on the approach: extractive and abstractive summarization. Extractive summarization is a task to create summaries by pulling out snippets of text form the original text and combining them to form a summary. Abstractive summarization asks to generate summaries from scratch without the restriction to use the available words from the original text. Due to the limitations of extractive summarization on incoherent texts and unnatural methodology BIBREF0 , the research trend has shifted towards abstractive summarization.",
"Sequence-to-sequence models BIBREF1 with attention mechanism BIBREF2 have found great success in generating abstractive summaries, both from a single sentence BIBREF3 and from a long document with multiple sentences BIBREF4 . However, when generating summaries, it is necessary to determine the main topic and to sift out unnecessary information that can be omitted. Sequence-to-sequence models have the tendency to include all the information, relevant or not, that are found in the original text. This may result to unconcise summaries that concentrates wrongly on irrelevant topics. The problem is especially severe when summarizing longer texts.",
"In this paper, we propose to use entities found in the original text to infer the summary topic, mitigating the aforementioned problem. Specifically, we leverage on linked entities extracted by employing a readily available entity linking system. The importance of using linked entities in summarization is intuitive and can be explained by looking at Figure 1 as an example. First (O1 in the Figure), aside from auxiliary words to construct a sentence, a summary is mainly composed of linked entities extracted from the original text. Second (O2), we can depict the main topic of the summary as a probability distribution of relevant entities from the list of entities. Finally (O3), we can leverage on entity commonsense learned from a separate large knowledge base such as Wikipedia.",
"To this end, we present a method to effectively apply linked entities in sequence-to-sequence models, called Entity2Topic (E2T). E2T is a module that can be easily attached to any sequence-to-sequence based summarization model. The module encodes the entities extracted from the original text by an entity linking system (ELS), constructs a vector representing the topic of the summary to be generated, and informs the decoder about the constructed topic vector. Due to the imperfections of current ELS's, the extracted linked entities may be too ambiguous and coarse to be considered relevant to the summary. We solve this issue by using entity encoders with selective disambiguation and by constructing topic vectors using firm attention.",
"We experiment on two datasets, Gigaword and CNN, with varying lengths. We show that applying our module to a sequence-to-sequence model with attention mechanism significantly increases its performance on both datasets. Moreover, when compared with the state-of-the-art models for each dataset, the model obtains a comparable performance on the Gigaword dataset where the texts are short, and outperforms all competing models on the CNN dataset where the texts are longer. Furthermore, we provide analysis on how our model effectively uses the extracted linked entities to produce concise and better summaries."
],
[
"In the next subsections, we present detailed arguments with empirical and previously examined evidences on the observations and possible issues when using linked entities extracted by an entity linking system (ELS) for generating abstractive summaries. For this purpose, we use the development sets of the Gigaword dataset provided in BIBREF5 and of the CNN dataset provided in BIBREF6 as the experimental data for quantitative evidence and refer the readers to Figure 1 as the running example."
],
[
"As discussed in Section \"Introduction\" , we find three observations that show the usefulness of linked entities for abstractive summarization.",
"First, summaries are mainly composed of linked entities extracted from the original text. In the example, it can be seen that the summary contains four words that refer to different entities. In fact, all noun phrases in the summary mention at least one linked entity. In our experimental data, we extract linked entities from the original text and compare them to the noun phrases found in the summary. We report that $77.1\\%$ and $75.1\\%$ of the noun phrases on the Gigaword and CNN datasets, respectively, contain at least one linked entity, which confirms our observation.",
"Second, linked entities can be used to represent the topic of the summary, defined as a multinomial distribution over entities, as graphically shown in the example, where the probabilities refer to the relevance of the entities. Entities have been previously used to represent topics BIBREF7 , as they can be utilized as a controlled vocabulary of the main topics in a document BIBREF8 . In the example, we see that the entity “Jae Seo” is the most relevant because it is the subject of the summary, while the entity “South Korean” is less relevant because it is less important when constructing the summary.",
"Third, we can make use of the entity commonsense that can be learned as a continuous vector representation from a separate larger corpus BIBREF9 , BIBREF10 . In the example, if we know that the entities “Los Angeles Dodgers” and “New York Mets” are American baseball teams and “Jae Seo” is a baseball player associated with the teams, then we can use this information to generate more coherent summaries. We find that $76.0\\%$ of the extracted linked entities are covered by the pre-trained vectors in our experimental data, proving our third observation."
],
[
"Despite its usefulness, linked entities extracted from ELS's have issues because of low precision rates BIBREF11 and design challenges in training datasets BIBREF12 . These issues can be summarized into two parts: ambiguity and coarseness.",
"First, the extracted entities may be ambiguous. In the example, the entity “South Korean” is ambiguous because it can refer to both the South Korean person and the South Korean language, among others. In our experimental data, we extract (1) the top 100 entities based on frequency, and (2) the entities extracted from 100 randomly selected texts, and check whether they have disambiguation pages in Wikipedia or not. We discover that $71.0\\%$ of the top 100 entities and $53.6\\%$ of the entities picked at random have disambiguation pages, which shows that most entities are prone to ambiguity problems.",
"Second, the linked entities may also be too common to be considered an entity. This may introduce errors and irrelevance to the summary. In the example, “Wednesday” is erroneous because it is wrongly linked to the entity “Wednesday Night Baseball”. Also, “swap” is irrelevant because although it is linked correctly to the entity “Trade (Sports)”, it is too common and irrelevant when generating the summaries. In our experimental data, we randomly select 100 data instances and tag the correctness and relevance of extracted entities into one of four labels: A: correct and relevant, B: correct and somewhat relevant, C: correct but irrelevant, and D: incorrect. Results show that $29.4\\%$ , $13.7\\%$ , $30.0\\%$ , and $26.9\\%$ are tagged with A, B, C, and D, respectively, which shows that there is a large amount of incorrect and irrelevant entities."
],
[
"To solve the issues described above, we present Entity2Topic (E2T), a module that can be easily attached to any sequence-to-sequence based abstractive summarization model. E2T encodes the linked entities extracted from the text and transforms them into a single topic vector. This vector is ultimately concatenated to the decoder hidden state vectors. The module contains two submodules specifically for the issues presented by the entity linking systems: the entity encoding submodule with selective disambiguation and the pooling submodule with firm attention.",
"Overall, our full architecture can be illustrated as in Figure 2 , which consists of an entity linking system (ELS), a sequence-to-sequence with attention mechanism model, and the E2T module. We note that our proposed module can be easily attached to more sophisticated abstractive summarization models BIBREF13 , BIBREF14 that are based on the traditional encoder-decoder framework and consequently can produce better results. The code of the base model and the E2T are available online."
],
[
"As our base model, we employ a basic encoder-decoder RNN used in most neural machine translation BIBREF2 and text summarization BIBREF15 tasks. We employ a two-layer bidirectional GRU (BiGRU) as the recurrent unit of the encoder. The BiGRU consists of a forward and backward GRU, which results to sequences of forward and backward hidden states $(\\overrightarrow{h}_1, \\overrightarrow{h}_2, ..., \\overrightarrow{h}_n)$ and $(\\overleftarrow{h}_1, \\overleftarrow{h}_2, ..., \\overleftarrow{h}_n)$ , respectively: $\n\\overrightarrow{h}_i &= GRU(x_i, \\overrightarrow{h}_{i-1}) \\\\\n\\overleftarrow{h}_i &= GRU(x_i, \\overleftarrow{h}_{i+1}) \\nonumber $ ",
"The forward and backward hidden states are concatenated to get the hidden state vectors of the tokens (i.e. $h_i = [\\overrightarrow{h}_i; \\overleftarrow{h}_i]$ ). The final states of the forward and backward GRU are also concatenated to create the final text representation vector of the encoder $s = [\\overrightarrow{h}_n; \\overleftarrow{h}_1]$ . These values are calculated per layer, where $x_t$ of the second layer is $h_t$ of the first layer. The final text representation vectors are projected by a fully connected layer and are passed to the decoder as the initial hidden states $s_0 = s$ .",
"For the decoder, we use a two-layer uni-directional GRU with attention. At each time step $t$ , the previous token $y_{t-1}$ , the previous hidden state $s_{t-1}$ , and the previous context vector $c_{t-1}$ are passed to a GRU to calculate the new hidden state $s_t$ , as shown in the equation below. ",
"$$s_t = GRU(w_{t-1}, s_{t-1}, c_{t-1}) \\nonumber $$ (Eq. 9) ",
"The context vector $c_t$ is computed using the additive attention mechanism BIBREF2 , which matches the current decoder state $s_t$ and each encoder state $h_i$ to get an importance score. The scores are then passed to a softmax and are used to pool the encoder states using weighted sum. The final pooled vector is the context vector, as shown in the equations below. $\ng_{t,i} &= v_a^\\top tanh(W_a s_{t-1} + U_a h_i) \\\\\na_{t,i} &= \\frac{exp(g_{t,i})}{\\sum _i exp(g_{t,i})} \\\\\nc_t &= \\sum _i a_{t,i} h_i \\nonumber $ ",
"Finally, the previous token $y_{t-1}$ , the current context vector $c_t$ , and the current decoder state $s_t$ are used to generate the current word $y_t$ with a softmax layer over the decoder vocabulary, as shown below. $\no_t &= W_w w_{t-1} + W_c c_t + W_s s_t \\\\\np(y_t | y_{<t}) &= softmax(W_o o_t) \\nonumber $ "
],
[
"After performing entity linking to the input text using the ELS, we receive a sequential list of linked entities, arranged based on their location in the text. We embed these entities to $d$ -dimensional vectors $E = \\lbrace e_1, e_2, ..., e_m\\rbrace $ where $e_i \\in \\mathbb {R}^d$ . Since these entities may still contain ambiguity, it is necessary to resolve them before applying them to the base model. Based on the idea that an ambiguous entity can be disambiguated using its neighboring entities, we introduce two kinds of disambiguating encoders below.",
"One way to disambiguate an entity is by using all the other entities, putting more importance to entities that are nearer. For this purpose, we employ an RNN-based model to globally disambiguate the entities. Specifically, we use BiGRU and concatenate the forward and backward hidden state vectors as the new entity vector: $\n\\overrightarrow{h}_i &= GRU(e_i, \\overrightarrow{h}_{i-1}) \\\\\n\\overleftarrow{h}_i &= GRU(e_i, \\overleftarrow{h}_{i+1}) \\\\\ne^{\\prime }_i &= [\\overrightarrow{h}_i; \\overleftarrow{h}_i] \\nonumber $ ",
"Another way to disambiguate an entity is by using only the direct neighbors of the entity, putting no importance value to entities that are far. To do this, we employ a CNN-based model to locally disambiguate the entities. Specifically, we do the convolution operation using filter matrices $W_f \\in \\mathbb {R}^{h \\times d}$ with filter size $h$ to a window of $h$ words. We do this for different sizes of $h$ . This produces new feature vectors $c_{i,h}$ as shown below, where $f(.)$ is a non-linear function: $\nc_{i,h} = f([e_{i-(h-1)/2}; ...; e_{i+h(+1)/2}]^\\top W_f + b_f) \\nonumber $ ",
"The convolution operation reduces the number of entities differently depending on the filter size $h$ . To prevent loss of information and to produce the same amount of feature vectors $c_{i,h}$ , we pad the entity list dynamically such that when the filter size is $h$ , the number of paddings on each side is $(h-1)/2$ . The filter size $h$ therefore refers to the number of entities used to disambiguate a middle entity. Finally, we concatenate all feature vectors of different $h$ 's for each $i$ as the new entity vector: $\ne^{\\prime }_i = [c_{i,h_1}; c_{i, h_2}; ...] \\nonumber $ ",
"The question on which disambiguating encoder is better has been a debate; some argued that using only the local context is appropriate BIBREF16 while some claimed that additionally using global context also helps BIBREF17 . The RNN-based encoder is good as it smartly makes use of all entities, however it may perform bad when there are many entities as it introduces noise when using a far entity during disambiguation. The CNN-based encoder is good as it minimizes the noise by totally ignoring far entities when disambiguating, however determining the appropriate filter sizes $h$ needs engineering. Overall, we argue that when the input text is short (e.g. a sentence), both encoders perform comparably, otherwise when the input text is long (e.g. a document), the CNN-based encoder performs better.",
"It is obvious that not all entities need to be disambiguated. When a correctly linked and already adequately disambiguated entity is disambiguated again, it would make the entity very context-specific and might not be suitable for the summarization task. Our entity encoding submodule therefore uses a selective mechanism that decides whether to use the disambiguating encoder or not. This is done by introducing a selective disambiguation gate $d$ . The final entity vector $\\tilde{e}_i$ is calculated as the linear transformation of $e_i$ and $e^{\\prime }_i$ : $\ne^{\\prime }_i &= encoder(e_i) \\\\\nd &= \\sigma (W_d e^{\\prime }_i + b_d) \\\\\n\\tilde{e}_i &= d \\times f(W_x e_i + b_x) + \\\\ & \\quad (1-d) \\times f(W_y e^{\\prime }_i + b_y) \\nonumber $ ",
"The full entity encoding submodule is illustrated in Figure 3 . Ultimately, the submodule outputs the disambiguated entity vectors $\\tilde{E} = \\lbrace \\tilde{e}_1, \\tilde{e}_2, ..., \\tilde{e}_m\\rbrace $ ."
],
[
"The entity vectors $\\tilde{E}$ are pooled to create a single topic vector $t$ that represents the topic of the summary. One possible pooling technique is to use soft attention BIBREF18 on the vectors to determine the importance value of each vector, which can be done by matching each entity vector with the text vector $s$ from the text encoder as the context vector. The entity vectors are then pooled using weighted sum. One problem with soft attention is that it considers all entity vectors when constructing the topic vector. However, not all entities are important and necessary when generating summaries. Moreover, a number of these entities may be erroneous and irrelevant, as reported in Section \"Related work\" . Soft attention gives non-negligible important scores to these entities, thus adds unnecessary noise to the construction of the topic vector.",
"Our pooling submodule instead uses firm attention mechanism to consider only top $k$ entities when constructing the topic vector. This is done in a differentiable way as follows: $\nG &= v_a^\\top tanh(W_a \\tilde{E} + U_a s) \\\\\nK &= top\\_k(G) \\\\\nP &= sparse\\_vector(K, 0, -\\infty ) \\\\\ng^{\\prime }_i &= g_i + p_i \\\\\na_i &= \\frac{exp(g^{\\prime }_i)}{\\sum _i exp(g^{\\prime }_i)} \\\\\nt &= \\sum _i a_i \\tilde{e}_i \\nonumber $ ",
" where the functions $K = top\\_k(G)$ gets the indices of the top $k$ vectors in $G$ and $P = sparse\\_vector(K,0,-\\infty )$ creates a sparse vector where the values of $K$ is 0 and $-\\infty $ otherwise. The sparse vector $P$ is added to the original importance score vector $G$ to create a new importance score vector. In this new vector, important scores of non-top $k$0 entities are $k$1 . When softmax is applied, this gives very small, negligible, and close-to-zero values to non-top $k$2 entities. The value $k$3 depends on the lengths of the input text and summary. Moreover, when $k$4 increases towards infinity, firm attention becomes soft attention. We decide $k$5 empirically (see Section \"Experimental settings\" )."
],
[
"Entity2Topic module extends the base model as follows. The final text representation vector $s$ is used as a context vector when constructing the topic vector $t$ in the pooling submodule. The topic vector $t$ is then concatenated to the decoder hidden state vectors $s_i$ , i.e. $s^{\\prime }_i = [s_i; t]$ . The concatenated vector is finally used to create the output vector: ",
"$$o_i = W_w w_{i-1} + W_c c_i + W_s s^{\\prime }_i \\nonumber $$ (Eq. 18) "
],
[
"Due to its recent success, neural network models have been used with competitive results on abstractive summarization. A neural attention model was first applied to the task, easily achieving state-of-the-art performance on multiple datasets BIBREF5 . The model has been extended to instead use recurrent neural network as decoder BIBREF3 . The model was further extended to use a full RNN encoder-decoder framework and further enhancements through lexical and statistical features BIBREF15 . The current state-of-the-art performance is achieved by selectively encoding words as a process of distilling salient information BIBREF13 .",
"Neural abstractive summarization models have also been explored to summarize longer documents. Word extraction models have been previously explored, performing worse than sentence extraction models BIBREF19 . Hierarchical attention-based recurrent neural networks have also been applied to the task, owing to the idea that there are multiple sentences in a document BIBREF15 . Finally, distraction-based models were proposed to enable models to traverse the text content and grasp the overall meaning BIBREF4 . The current state-of-the-art performance is achieved by a graph-based attentional neural model, considering the key factors of document summarization such as saliency, fluency and novelty BIBREF14 .",
"Previous studies on the summarization tasks have only used entities in the preprocessing stage to anonymize the dataset BIBREF15 and to mitigate out-of-vocabulary problems BIBREF14 . Linked entities for summarization are still not properly explored and we are the first to use linked entities to improve the performance of the summarizer."
],
[
"We report the ROUGE F1 scores for both datasets of all the competing models using ROUGE F1 scores BIBREF27 . We report the results on the Gigaword and the CNN dataset in Table 2 and Table 3 , respectively. In Gigaword dataset where the texts are short, our best model achieves a comparable performance with the current state-of-the-art. In CNN dataset where the texts are longer, our best model outperforms all the previous models. We emphasize that E2T module is easily attachable to better models, and we expect E2T to improve their performance as well. Overall, E2T achieves a significant improvement over the baseline model base, with at least 2 ROUGE-1 points increase in the Gigaword dataset and 6 ROUGE-1 points increase in the CNN dataset. In fact, all variants of E2T gain improvements over the baseline, implying that leveraging on linked entities improves the performance of the summarizer. Among the model variants, the CNN-based encoder with selective disambiguation and firm attention performs the best.",
"Automatic evaluation on the Gigaword dataset shows that the CNN and RNN variants of base+E2T have similar performance. To break the tie between both models, we also conduct human evaluation on the Gigaword dataset. We instruct two annotators to read the input sentence and rank the competing summaries from first to last according to their relevance and fluency: (a) the original summary gold, and from models (b) base, (c) base+E2Tcnn, and (d) base+E2Trnn. We then compute (i) the proportion of every ranking of each model and (ii) the mean rank of each model. The results are reported in Table 4 . The model with the best mean rank is base+E2Tcnn, followed by gold, then by base+E2Trnn and base, respectively. We also perform ANOVA and post-hoc Tukey tests to show that the CNN variant is significantly ( $p<0.01$ ) better than the RNN variant and the base model. The RNN variant does not perform as well as the CNN variant, contrary to the automatic ROUGE evaluation above. Interestingly, the CNN variant produces better (but with no significant difference) summaries than the gold summaries. We posit that this is due to the fact that the article title does not correspond to the summary of the first sentence."
],
[
"We proposed to leverage on linked entities to improve the performance of sequence-to-sequence models on neural abstractive summarization task. Linked entities are used to guide the decoding process based on the summary topic and commonsense learned from a knowledge base. We introduced Entity2Topic (E2T), a module that is easily attachable to any model using an encoder-decoder framework. E2T applies linked entities into the summarizer by encoding the entities with selective disambiguation and pooling them into one summary topic vector with firm attention mechanism. We showed that by applying E2T to a basic sequence-to-sequence model, we achieve significant improvements over the base model and consequently achieve a comparable performance with more complex summarization models."
],
[
"We would like to thank the three anonymous reviewers for their valuable feedback. This work was supported by Microsoft Research, and Institute for Information communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (No.2017-0-01778 , Development of Explainable Humanlevel Deep Machine Learning Inference Framework). S. Hwang is a corresponding author."
]
],
"section_name": [
"Introduction",
"Usefulness of linked entities in summarization",
"Observations",
"Possible issues",
"Our model",
"Base model",
"Entity encoding submodule",
"Pooling submodule",
"Extending from the base model",
"Related work",
"Results",
"Conclusion",
"Acknowledgement"
]
} | {
"answers": [
{
"annotation_id": [
"b3249402b6556aa515315e1c9a81ef5fc6678668",
"bd8e841bea9f58d4933fe40cf955f9fda27bf9f4"
],
"answer": [
{
"evidence": [
"Despite its usefulness, linked entities extracted from ELS's have issues because of low precision rates BIBREF11 and design challenges in training datasets BIBREF12 . These issues can be summarized into two parts: ambiguity and coarseness.",
"First, the extracted entities may be ambiguous. In the example, the entity “South Korean” is ambiguous because it can refer to both the South Korean person and the South Korean language, among others. In our experimental data, we extract (1) the top 100 entities based on frequency, and (2) the entities extracted from 100 randomly selected texts, and check whether they have disambiguation pages in Wikipedia or not. We discover that $71.0\\%$ of the top 100 entities and $53.6\\%$ of the entities picked at random have disambiguation pages, which shows that most entities are prone to ambiguity problems.",
"Second, the linked entities may also be too common to be considered an entity. This may introduce errors and irrelevance to the summary. In the example, “Wednesday” is erroneous because it is wrongly linked to the entity “Wednesday Night Baseball”. Also, “swap” is irrelevant because although it is linked correctly to the entity “Trade (Sports)”, it is too common and irrelevant when generating the summaries. In our experimental data, we randomly select 100 data instances and tag the correctness and relevance of extracted entities into one of four labels: A: correct and relevant, B: correct and somewhat relevant, C: correct but irrelevant, and D: incorrect. Results show that $29.4\\%$ , $13.7\\%$ , $30.0\\%$ , and $26.9\\%$ are tagged with A, B, C, and D, respectively, which shows that there is a large amount of incorrect and irrelevant entities."
],
"extractive_spans": [],
"free_form_answer": "Linked entities may be ambiguous or too common",
"highlighted_evidence": [
"Despite its usefulness, linked entities extracted from ELS's have issues because of low precision rates BIBREF11 and design challenges in training datasets BIBREF12 .",
"First, the extracted entities may be ambiguous.",
"Second, the linked entities may also be too common to be considered an entity. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Despite its usefulness, linked entities extracted from ELS's have issues because of low precision rates BIBREF11 and design challenges in training datasets BIBREF12 . These issues can be summarized into two parts: ambiguity and coarseness.",
"Second, the linked entities may also be too common to be considered an entity. This may introduce errors and irrelevance to the summary. In the example, “Wednesday” is erroneous because it is wrongly linked to the entity “Wednesday Night Baseball”. Also, “swap” is irrelevant because although it is linked correctly to the entity “Trade (Sports)”, it is too common and irrelevant when generating the summaries. In our experimental data, we randomly select 100 data instances and tag the correctness and relevance of extracted entities into one of four labels: A: correct and relevant, B: correct and somewhat relevant, C: correct but irrelevant, and D: incorrect. Results show that $29.4\\%$ , $13.7\\%$ , $30.0\\%$ , and $26.9\\%$ are tagged with A, B, C, and D, respectively, which shows that there is a large amount of incorrect and irrelevant entities."
],
"extractive_spans": [
"linked entities extracted from ELS's have issues because of low precision rates BIBREF11 and design challenges in training datasets BIBREF12 . These issues can be summarized into two parts: ambiguity and coarseness.",
"the linked entities may also be too common to be considered an entity."
],
"free_form_answer": "",
"highlighted_evidence": [
"linked entities extracted from ELS's have issues because of low precision rates BIBREF11 and design challenges in training datasets BIBREF12 . These issues can be summarized into two parts: ambiguity and coarseness.",
"the linked entities may also be too common to be considered an entity. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b",
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
}
],
"nlp_background": [
"infinity"
],
"paper_read": [
"no"
],
"question": [
"Why are current ELS's not sufficiently effective?"
],
"question_id": [
"ef7212075e80bf35b7889dc8dd52fcbae0d1400a"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"commonsense"
],
"topic_background": [
"familiar"
]
} | {
"caption": [
"Figure 1: Observations on linked entities in summaries. O1: Summaries are mainly composed of entities. O2: Entities can be used to represent the topic of the summary. O3: Entity commonsense learned from a large corpus can be used.",
"Figure 2: Full architecture of our proposed sequence-to-sequence model with Entity2Topic (E2T) module.",
"Figure 3: Entity encoding submodule with selective disambiguation applied to the entity 3©. The left figure represents the full submodule while the right figure represents the two choices of disambiguating encoders.",
"Table 1: Dataset statistics.",
"Table 2: Results on the Gigaword dataset using the fulllength F1 variants of ROUGE.",
"Table 3: Results on the CNN dataset using the fulllength F1 ROUGE metric.",
"Table 4: Human evaluations on the Gigaword dataset. Bold-faced values are the best while red-colored values are the worst among the values in the evaluation metric.",
"Table 5: Examples from Gigaword and CNN datasets and corresponding summaries generated by competing models. The tagged part of text is marked bold and preceded with at sign (@). The red color fill represents the attention scores given to each entity. We only report the attention scores of entities in the Gigaword example for conciseness since there are 80 linked entities in the CNN example.",
"Table 6: Examples with highest/lowest disambiguation gate d values of two example entities (United States and gold). The tagged part of text is marked bold and preceded with at sign (@)."
],
"file": [
"1-Figure1-1.png",
"4-Figure2-1.png",
"5-Figure3-1.png",
"6-Table1-1.png",
"7-Table2-1.png",
"7-Table3-1.png",
"8-Table4-1.png",
"9-Table5-1.png",
"9-Table6-1.png"
]
} | [
"Why are current ELS's not sufficiently effective?"
] | [
[
"1806.05504-Possible issues-1",
"1806.05504-Possible issues-0",
"1806.05504-Possible issues-2"
]
] | [
"Linked entities may be ambiguous or too common"
] | 111 |
1908.05828 | Named Entity Recognition for Nepali Language | Named Entity Recognition (NER) has been studied for many languages like English, German, Spanish, and others but virtually no studies have focused on the Nepali language. One key reason is the lack of an appropriate, annotated dataset. In this paper, we describe a Nepali NER dataset that we created. We discuss and compare the performance of various machine learning models on this dataset. We also propose a novel NER scheme for Nepali and show that this scheme, based on grapheme-level representations, outperforms character-level representations when combined with BiLSTM models. Our best models obtain an overall F1 score of 86.89, which is a significant improvement on previously reported performance in literature. | {
"paragraphs": [
[
"Named Entity Recognition (NER) is a foremost NLP task to label each atomic elements of a sentence into specific categories like \"PERSON\", \"LOCATION\", \"ORGANIZATION\" and othersBIBREF0. There has been an extensive NER research on English, German, Dutch and Spanish language BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, and notable research on low resource South Asian languages like HindiBIBREF6, IndonesianBIBREF7 and other Indian languages (Kannada, Malayalam, Tamil and Telugu)BIBREF8. However, there has been no study on developing neural NER for Nepali language. In this paper, we propose a neural based Nepali NER using latest state-of-the-art architecture based on grapheme-level which doesn't require any hand-crafted features and no data pre-processing.",
"Recent neural architecture like BIBREF1 is used to relax the need to hand-craft the features and need to use part-of-speech tag to determine the category of the entity. However, this architecture have been studied for languages like English, and German and not been applied to languages like Nepali which is a low resource language i.e limited data set to train the model. Traditional methods like Hidden Markov Model (HMM) with rule based approachesBIBREF9,BIBREF10, and Support Vector Machine (SVM) with manual feature-engineeringBIBREF11 have been applied but they perform poor compared to neural. However, there has been no research in Nepali NER using neural network. Therefore, we created the named entity annotated dataset partly with the help of Dataturk to train a neural model. The texts used for this dataset are collected from various daily news sources from Nepal around the year 2015-2016.",
"Following are our contributions:",
"We present a novel Named Entity Recognizer (NER) for Nepali language. To best of our knowledge we are the first to propose neural based Nepali NER.",
"As there are not good quality dataset to train NER we release a dataset to support future research",
"We perform empirical evaluation of our model with state-of-the-art models with relative improvement of upto 10%",
"In this paper, we present works similar to ours in Section SECREF2. We describe our approach and dataset statistics in Section SECREF3 and SECREF4, followed by our experiments, evaluation and discussion in Section SECREF5, SECREF6, and SECREF7. We conclude with our observations in Section SECREF8.",
"To facilitate further research our code and dataset will be made available at github.com/link-yet-to-be-updated"
],
[
"There has been a handful of research on Nepali NER task based on approaches like Support Vector Machine and gazetteer listBIBREF11 and Hidden Markov Model and gazetteer listBIBREF9,BIBREF10.",
"BIBREF11 uses SVM along with features like first word, word length, digit features and gazetteer (person, organization, location, middle name, verb, designation and others). It uses one vs rest classification model to classify each word into different entity classes. However, it does not the take context word into account while training the model. Similarly, BIBREF9 and BIBREF10 uses Hidden Markov Model with n-gram technique for extracting POS-tags. POS-tags with common noun, proper noun or combination of both are combined together, then uses gazetteer list as look-up table to identify the named entities.",
"Researchers have shown that the neural networks like CNNBIBREF12, RNNBIBREF13, LSTMBIBREF14, GRUBIBREF15 can capture the semantic knowledge of language better with the help of pre-trained embbeddings like word2vecBIBREF16, gloveBIBREF17 or fasttextBIBREF18.",
"Similar approaches has been applied to many South Asian languages like HindiBIBREF6, IndonesianBIBREF7, BengaliBIBREF19 and In this paper, we present the neural network architecture for NER task in Nepali language, which doesn't require any manual feature engineering nor any data pre-processing during training. First we are comparing BiLSTMBIBREF14, BiLSTM+CNNBIBREF20, BiLSTM+CRFBIBREF1, BiLSTM+CNN+CRFBIBREF2 models with CNN modelBIBREF0 and Stanford CRF modelBIBREF21. Secondly, we show the comparison between models trained on general word embeddings, word embedding + character-level embedding, word embedding + part-of-speech(POS) one-hot encoding and word embedding + grapheme clustered or sub-word embeddingBIBREF22. The experiments were performed on the dataset that we created and on the dataset received from ILPRL lab. Our extensive study shows that augmenting word embedding with character or grapheme-level representation and POS one-hot encoding vector yields better results compared to using general word embedding alone."
],
[
"In this section, we describe our approach in building our model. This model is partly inspired from multiple models BIBREF20,BIBREF1, andBIBREF2"
],
[
"We used Bi-directional LSTM to capture the word representation in forward as well as reverse direction of a sentence. Generally, LSTMs take inputs from left (past) of the sentence and computes the hidden state. However, it is proven beneficialBIBREF23 to use bi-directional LSTM, where, hidden states are computed based from right (future) of sentence and both of these hidden states are concatenated to produce the final output as $h_t$=[$\\overrightarrow{h_t}$;$\\overleftarrow{h_t}$], where $\\overrightarrow{h_t}$, $\\overleftarrow{h_t}$ = hidden state computed in forward and backward direction respectively."
],
[
"We have used Word2Vec BIBREF16, GloVe BIBREF17 and FastText BIBREF18 word vectors of 300 dimensions. These vectors were trained on the corpus obtained from Nepali National Corpus. This pre-lemmatized corpus consists of 14 million words from books, web-texts and news papers. This corpus was mixed with the texts from the dataset before training CBOW and skip-gram version of word2vec using gensim libraryBIBREF24. This trained model consists of vectors for 72782 unique words.",
"Light pre-processing was performed on the corpus before training it. For example, invalid characters or characters other than Devanagari were removed but punctuation and numbers were not removed. We set the window context at 10 and the rare words whose count is below 5 are dropped. These word embeddings were not frozen during the training session because fine-tuning word embedding help achieve better performance compared to frozen oneBIBREF20.",
"We have used fasttext embeddings in particular because of its sub-word representation ability, which is very useful in highly inflectional language as shown in Table TABREF25. We have trained the word embedding in such a way that the sub-word size remains between 1 and 4. We particularly chose this size because in Nepali language a single letter can also be a word, for example e, t, C, r, l, n, u and a single character (grapheme) or sub-word can be formed after mixture of dependent vowel signs with consonant letters for example, C + O + = CO, here three different consonant letters form a single sub-word.",
"The two-dimensional visualization of an example word npAl is shown in FIGREF14. Principal Component Analysis (PCA) technique was used to generate this visualization which helps use to analyze the nearest neighbor words of a given sample word. 84 and 104 nearest neighbors were observed using word2vec and fasttext embedding respectively on the same corpus."
],
[
"BIBREF20 and BIBREF2 successfully presented that the character-level embeddings, extracted using CNN, when combined with word embeddings enhances the NER model performance significantly, as it is able to capture morphological features of a word. Figure FIGREF7 shows the grapheme-level CNN used in our model, where inputs to CNN are graphemes. Character-level CNN is also built in similar fashion, except the inputs are characters. Grapheme or Character -level embeddings are randomly initialized from [0,1] with real values with uniform distribution of dimension 30."
],
[
"Grapheme is atomic meaningful unit in writing system of any languages. Since, Nepali language is highly morphologically inflectional, we compared grapheme-level representation with character-level representation to evaluate its effect. For example, in character-level embedding, each character of a word npAl results into n + + p + A + l has its own embedding. However, in grapheme level, a word npAl is clustered into graphemes, resulting into n + pA + l. Here, each grapheme has its own embedding. This grapheme-level embedding results good scores on par with character-level embedding in highly inflectional languages like Nepali, because graphemes also capture syntactic information similar to characters. We created grapheme clusters using uniseg package which is helpful in unicode text segmentations."
],
[
"We created one-hot encoded vector of POS tags and then concatenated with pre-trained word embeddings before passing it to BiLSTM network. A sample of data is shown in figure FIGREF13."
],
[
"Since, we there was no publicly available standard Nepali NER dataset and did not receive any dataset from the previous researchers, we had to create our own dataset. This dataset contains the sentences collected from daily newspaper of the year 2015-2016. This dataset has three major classes Person (PER), Location (LOC) and Organization (ORG). Pre-processing was performed on the text before creation of the dataset, for example all punctuations and numbers besides ',', '-', '|' and '.' were removed. Currently, the dataset is in standard CoNLL-2003 IO formatBIBREF25.",
"Since, this dataset is not lemmatized originally, we lemmatized only the post-positions like Ek, kO, l, mA, m, my, jF, sg, aEG which are just the few examples among 299 post positions in Nepali language. We obtained these post-positions from sanjaalcorps and added few more to match our dataset. We will be releasing this list in our github repository. We found out that lemmatizing the post-positions boosted the F1 score by almost 10%.",
"In order to label our dataset with POS-tags, we first created POS annotated dataset of 6946 sentences and 16225 unique words extracted from POS-tagged Nepali National Corpus and trained a BiLSTM model with 95.14% accuracy which was used to create POS-tags for our dataset.",
"The dataset released in our github repository contains each word in newline with space separated POS-tags and Entity-tags. The sentences are separated by empty newline. A sample sentence from the dataset is presented in table FIGREF13."
],
[
"After much time, we received the dataset from Bal Krishna Bal, ILPRL, KU. This dataset follows standard CoNLL-2003 IOB formatBIBREF25 with POS tags. This dataset is prepared by ILPRL Lab, KU and KEIV Technologies. Few corrections like correcting the NER tags had to be made on the dataset. The statistics of both the dataset is presented in table TABREF23.",
"Table TABREF24 presents the total entities (PER, LOC, ORG and MISC) from both of the dataset used in our experiments. The dataset is divided into three parts with 64%, 16% and 20% of the total dataset into training set, development set and test set respectively."
],
[
"In this section, we present the details about training our neural network. The neural network architecture are implemented using PyTorch framework BIBREF26. The training is performed on a single Nvidia Tesla P100 SXM2. We first run our experiment on BiLSTM, BiLSTM-CNN, BiLSTM-CRF BiLSTM-CNN-CRF using the hyper-parameters mentioned in Table TABREF30. The training and evaluation was done on sentence-level. The RNN variants are initialized randomly from $(-\\sqrt{k},\\sqrt{k})$ where $k=\\frac{1}{hidden\\_size}$.",
"First we loaded our dataset and built vocabulary using torchtext library. This eased our process of data loading using its SequenceTaggingDataset class. We trained our model with shuffled training set using Adam optimizer with hyper-parameters mentioned in table TABREF30. All our models were trained on single layer of LSTM network. We found out Adam was giving better performance and faster convergence compared to Stochastic Gradient Descent (SGD). We chose those hyper-parameters after many ablation studies. The dropout of 0.5 is applied after LSTM layer.",
"For CNN, we used 30 different filters of sizes 3, 4 and 5. The embeddings of each character or grapheme involved in a given word, were passed through the pipeline of Convolution, Rectified Linear Unit and Max-Pooling. The resulting vectors were concatenated and applied dropout of 0.5 before passing into linear layer to obtain the embedding size of 30 for the given word. This resulting embedding is concatenated with word embeddings, which is again concatenated with one-hot POS vector."
],
[
"Currently, for our experiments we trained our model on IO (Inside, Outside) format for both the dataset, hence the dataset does not contain any B-type annotation unlike in BIO (Beginning, Inside, Outside) scheme."
],
[
"We used simple early stopping technique where if the validation loss does not decrease after 10 epochs, the training was stopped, else the training will run upto 100 epochs. In our experience, training usually stops around 30-50 epochs."
],
[
"We ran our experiment looking for the best hyper-parameters by changing learning rate from (0,1, 0.01, 0.001, 0.0001), weight decay from [$10^{-1}$, $10^{-2}$, $10^{-3}$, $10^{-4}$, $10^{-5}$, $10^{-6}$, $10^{-7}$], batch size from [1, 2, 4, 8, 16, 32, 64, 128], hidden size from [8, 16, 32, 64, 128, 256, 512 1024]. Table TABREF30 shows all other hyper-parameter used in our experiment for both of the dataset."
],
[
"Figure FIGREF31 shows how we end up choosing 0.5 as dropout rate. When the dropout layer was not used, the F1 score are at the lowest. As, we slowly increase the dropout rate, the F1 score also gradually increases, however after dropout rate = 0.5, the F1 score starts falling down. Therefore, we have chosen 0.5 as dropout rate for all other experiments performed."
],
[
"In this section, we present the details regarding evaluation and comparison of our models with other baselines.",
"Table TABREF25 shows the study of various embeddings and comparison among each other in OurNepali dataset. Here, raw dataset represents such dataset where post-positions are not lemmatized. We can observe that pre-trained embeddings significantly improves the score compared to randomly initialized embedding. We can deduce that Skip Gram models perform better compared CBOW models for word2vec and fasttext. Here, fastText_Pretrained represents the embedding readily available in fastText website, while other embeddings are trained on the Nepali National Corpus as mentioned in sub-section SECREF11. From this table TABREF25, we can clearly observe that model using fastText_Skip Gram embeddings outperforms all other models.",
"Table TABREF35 shows the model architecture comparison between all the models experimented. The features used for Stanford CRF classifier are words, letter n-grams of upto length 6, previous word and next word. This model is trained till the current function value is less than $1\\mathrm {e}{-2}$. The hyper-parameters of neural network experiments are set as shown in table TABREF30. Since, word embedding of character-level and grapheme-level is random, their scores are near.",
"All models are evaluated using CoNLL-2003 evaluation scriptBIBREF25 to calculate entity-wise precision, recall and f1 score."
],
[
"In this paper we present that we can exploit the power of neural network to train the model to perform downstream NLP tasks like Named Entity Recognition even in Nepali language. We showed that the word vectors learned through fasttext skip gram model performs better than other word embedding because of its capability to represent sub-word and this is particularly important to capture morphological structure of words and sentences in highly inflectional like Nepali. This concept can come handy in other Devanagari languages as well because the written scripts have similar syntactical structure.",
"We also found out that stemming post-positions can help a lot in improving model performance because of inflectional characteristics of Nepali language. So when we separate out its inflections or morphemes, we can minimize the variations of same word which gives its root word a stronger word vector representations compared to its inflected versions.",
"We can clearly imply from tables TABREF23, TABREF24, and TABREF35 that we need more data to get better results because OurNepali dataset volume is almost ten times bigger compared to ILPRL dataset in terms of entities."
],
[
"In this paper, we proposed a novel NER for Nepali language and achieved relative improvement of upto 10% and studies different factors effecting the performance of the NER for Nepali language.",
"We also present a neural architecture BiLSTM+CNN(grapheme-level) which turns out to be performing on par with BiLSTM+CNN(character-level) under the same configuration. We believe this will not only help Nepali language but also other languages falling under the umbrellas of Devanagari languages. Our model BiLSTM+CNN(grapheme-level) and BiLSTM+CNN(G)+POS outperforms all other model experimented in OurNepali and ILPRL dataset respectively.",
"Since this is the first named entity recognition research in Nepal language using neural network, there are many rooms for improvement. We believe initializing the grapheme-level embedding with fasttext embeddings might help boosting the performance, rather than randomly initializing it. In future, we plan to apply other latest techniques like BERT, ELMo and FLAIR to study its effect on low-resource language like Nepali. We also plan to improve the model using cross-lingual or multi-lingual parameter sharing techniques by jointly training with other Devanagari languages like Hindi and Bengali.",
"Finally, we would like to contribute our dataset to Nepali NLP community to move forward the research going on in language understanding domain. We believe there should be special committee to create and maintain such dataset for Nepali NLP and organize various competitions which would elevate the NLP research in Nepal.",
"Some of the future works are listed below:",
"Proper initialization of grapheme level embedding from fasttext embeddings.",
"Apply robust POS-tagger for Nepali dataset",
"Lemmatize the OurNepali dataset with robust and efficient lemmatizer",
"Improve Nepali language score with cross-lingual learning techniques",
"Create more dataset using Wikipedia/Wikidata framework"
],
[
"The authors of this paper would like to express sincere thanks to Bal Krishna Bal, Kathmandu University Professor for providing us the POS-tagged Nepali NER data."
]
],
"section_name": [
"Introduction",
"Related Work",
"Approach",
"Approach ::: Bidirectional LSTM",
"Approach ::: Features ::: Word embeddings",
"Approach ::: Features ::: Character-level embeddings",
"Approach ::: Features ::: Grapheme-level embeddings",
"Approach ::: Features ::: Part-of-speech (POS) one hot encoding",
"Dataset Statistics ::: OurNepali dataset",
"Dataset Statistics ::: ILPRL dataset",
"Experiments",
"Experiments ::: Tagging Scheme",
"Experiments ::: Early Stopping",
"Experiments ::: Hyper-parameters Tuning",
"Experiments ::: Effect of Dropout",
"Evaluation",
"Discussion",
"Conclusion and Future work",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"d9162ca0b7abcbb531f92d0e4d2bf096091098a0"
],
"answer": [
{
"evidence": [
"We also present a neural architecture BiLSTM+CNN(grapheme-level) which turns out to be performing on par with BiLSTM+CNN(character-level) under the same configuration. We believe this will not only help Nepali language but also other languages falling under the umbrellas of Devanagari languages. Our model BiLSTM+CNN(grapheme-level) and BiLSTM+CNN(G)+POS outperforms all other model experimented in OurNepali and ILPRL dataset respectively."
],
"extractive_spans": [
"BiLSTM+CNN(grapheme-level) and BiLSTM+CNN(G)+POS "
],
"free_form_answer": "",
"highlighted_evidence": [
"Our model BiLSTM+CNN(grapheme-level) and BiLSTM+CNN(G)+POS outperforms all other model experimented in OurNepali and ILPRL dataset respectively."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
},
{
"annotation_id": [
"68ea8331d1a22735cca49b307b6b131a9d504574",
"e64987455032c51186bf784c43932e95010a9b50"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Dataset statistics"
],
"extractive_spans": [],
"free_form_answer": "3606",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Dataset statistics"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In order to label our dataset with POS-tags, we first created POS annotated dataset of 6946 sentences and 16225 unique words extracted from POS-tagged Nepali National Corpus and trained a BiLSTM model with 95.14% accuracy which was used to create POS-tags for our dataset."
],
"extractive_spans": [
"6946"
],
"free_form_answer": "",
"highlighted_evidence": [
"In order to label our dataset with POS-tags, we first created POS annotated dataset of 6946 sentences and 16225 unique words extracted from POS-tagged Nepali National Corpus and trained a BiLSTM model with 95.14% accuracy which was used to create POS-tags for our dataset."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
},
{
"annotation_id": [
"025cbaabb3dd477dc8ea0fb65ed47dc1c5a41fb7",
"ac4b8cc68e088e067c1855c722fe1934598ccd35"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
},
{
"annotation_id": [
"28fc80e637b470e32426d47e12d1262c751a4d7c",
"7c632091c62bc8f0413933fa656b365cf50dfe17"
],
"answer": [
{
"evidence": [
"Similar approaches has been applied to many South Asian languages like HindiBIBREF6, IndonesianBIBREF7, BengaliBIBREF19 and In this paper, we present the neural network architecture for NER task in Nepali language, which doesn't require any manual feature engineering nor any data pre-processing during training. First we are comparing BiLSTMBIBREF14, BiLSTM+CNNBIBREF20, BiLSTM+CRFBIBREF1, BiLSTM+CNN+CRFBIBREF2 models with CNN modelBIBREF0 and Stanford CRF modelBIBREF21. Secondly, we show the comparison between models trained on general word embeddings, word embedding + character-level embedding, word embedding + part-of-speech(POS) one-hot encoding and word embedding + grapheme clustered or sub-word embeddingBIBREF22. The experiments were performed on the dataset that we created and on the dataset received from ILPRL lab. Our extensive study shows that augmenting word embedding with character or grapheme-level representation and POS one-hot encoding vector yields better results compared to using general word embedding alone."
],
"extractive_spans": [
"CNN modelBIBREF0",
"Stanford CRF modelBIBREF21"
],
"free_form_answer": "",
"highlighted_evidence": [
"First we are comparing BiLSTMBIBREF14, BiLSTM+CNNBIBREF20, BiLSTM+CRFBIBREF1, BiLSTM+CNN+CRFBIBREF2 models with CNN modelBIBREF0 and Stanford CRF modelBIBREF21."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 6: Comparison with previous models based on Test F1 score"
],
"extractive_spans": [],
"free_form_answer": "Bam et al. SVM, Ma and Hovy w/glove, Lample et al. w/fastText, Lample et al. w/word2vec",
"highlighted_evidence": [
"FLOAT SELECTED: Table 6: Comparison with previous models based on Test F1 score"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"104492fc61e7972c776833b3ccae33327c8b8fe5",
"2ad2fcdae10b551381d297c0b2289d81d23875bb"
],
"answer": [
{
"evidence": [
"In this section, we present the details about training our neural network. The neural network architecture are implemented using PyTorch framework BIBREF26. The training is performed on a single Nvidia Tesla P100 SXM2. We first run our experiment on BiLSTM, BiLSTM-CNN, BiLSTM-CRF BiLSTM-CNN-CRF using the hyper-parameters mentioned in Table TABREF30. The training and evaluation was done on sentence-level. The RNN variants are initialized randomly from $(-\\sqrt{k},\\sqrt{k})$ where $k=\\frac{1}{hidden\\_size}$."
],
"extractive_spans": [
"BiLSTM",
"BiLSTM-CNN",
"BiLSTM-CRF",
"BiLSTM-CNN-CRF"
],
"free_form_answer": "",
"highlighted_evidence": [
"We first run our experiment on BiLSTM, BiLSTM-CNN, BiLSTM-CRF BiLSTM-CNN-CRF using the hyper-parameters mentioned in Table TABREF30. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Similar approaches has been applied to many South Asian languages like HindiBIBREF6, IndonesianBIBREF7, BengaliBIBREF19 and In this paper, we present the neural network architecture for NER task in Nepali language, which doesn't require any manual feature engineering nor any data pre-processing during training. First we are comparing BiLSTMBIBREF14, BiLSTM+CNNBIBREF20, BiLSTM+CRFBIBREF1, BiLSTM+CNN+CRFBIBREF2 models with CNN modelBIBREF0 and Stanford CRF modelBIBREF21. Secondly, we show the comparison between models trained on general word embeddings, word embedding + character-level embedding, word embedding + part-of-speech(POS) one-hot encoding and word embedding + grapheme clustered or sub-word embeddingBIBREF22. The experiments were performed on the dataset that we created and on the dataset received from ILPRL lab. Our extensive study shows that augmenting word embedding with character or grapheme-level representation and POS one-hot encoding vector yields better results compared to using general word embedding alone."
],
"extractive_spans": [
"BiLSTMBIBREF14, BiLSTM+CNNBIBREF20, BiLSTM+CRFBIBREF1, BiLSTM+CNN+CRFBIBREF2",
"CNN modelBIBREF0 and Stanford CRF modelBIBREF21"
],
"free_form_answer": "",
"highlighted_evidence": [
"First we are comparing BiLSTMBIBREF14, BiLSTM+CNNBIBREF20, BiLSTM+CRFBIBREF1, BiLSTM+CNN+CRFBIBREF2 models with CNN modelBIBREF0 and Stanford CRF modelBIBREF21."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"1c7d2b4e7a7288363900695e3f734b474ac94407",
"dac08766c09fd6a4a1ed8d8c83e5ae76f0f4d548"
],
"answer": [
{
"evidence": [
"After much time, we received the dataset from Bal Krishna Bal, ILPRL, KU. This dataset follows standard CoNLL-2003 IOB formatBIBREF25 with POS tags. This dataset is prepared by ILPRL Lab, KU and KEIV Technologies. Few corrections like correcting the NER tags had to be made on the dataset. The statistics of both the dataset is presented in table TABREF23.",
"FLOAT SELECTED: Table 1: Dataset statistics"
],
"extractive_spans": [],
"free_form_answer": "Dataset contains 3606 total sentences and 79087 total entities.",
"highlighted_evidence": [
"The statistics of both the dataset is presented in table TABREF23.",
"FLOAT SELECTED: Table 1: Dataset statistics"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 1: Dataset statistics",
"Dataset Statistics ::: OurNepali dataset",
"Since, we there was no publicly available standard Nepali NER dataset and did not receive any dataset from the previous researchers, we had to create our own dataset. This dataset contains the sentences collected from daily newspaper of the year 2015-2016. This dataset has three major classes Person (PER), Location (LOC) and Organization (ORG). Pre-processing was performed on the text before creation of the dataset, for example all punctuations and numbers besides ',', '-', '|' and '.' were removed. Currently, the dataset is in standard CoNLL-2003 IO formatBIBREF25.",
"Dataset Statistics ::: ILPRL dataset",
"After much time, we received the dataset from Bal Krishna Bal, ILPRL, KU. This dataset follows standard CoNLL-2003 IOB formatBIBREF25 with POS tags. This dataset is prepared by ILPRL Lab, KU and KEIV Technologies. Few corrections like correcting the NER tags had to be made on the dataset. The statistics of both the dataset is presented in table TABREF23."
],
"extractive_spans": [],
"free_form_answer": "ILPRL contains 548 sentences, OurNepali contains 3606 sentences",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Dataset statistics",
"Dataset Statistics ::: OurNepali dataset\nSince, we there was no publicly available standard Nepali NER dataset and did not receive any dataset from the previous researchers, we had to create our own dataset.",
"Dataset Statistics ::: ILPRL dataset\nAfter much time, we received the dataset from Bal Krishna Bal, ILPRL, KU. ",
" The statistics of both the dataset is presented in table TABREF23."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"09db9163498cad58368f5f37bdc9a8346508ec31",
"cb9a2097a210d56a924fc8cb8d3608c28bce12a8"
],
"answer": [
{
"evidence": [
"Since, we there was no publicly available standard Nepali NER dataset and did not receive any dataset from the previous researchers, we had to create our own dataset. This dataset contains the sentences collected from daily newspaper of the year 2015-2016. This dataset has three major classes Person (PER), Location (LOC) and Organization (ORG). Pre-processing was performed on the text before creation of the dataset, for example all punctuations and numbers besides ',', '-', '|' and '.' were removed. Currently, the dataset is in standard CoNLL-2003 IO formatBIBREF25."
],
"extractive_spans": [
"daily newspaper of the year 2015-2016"
],
"free_form_answer": "",
"highlighted_evidence": [
"Since, we there was no publicly available standard Nepali NER dataset and did not receive any dataset from the previous researchers, we had to create our own dataset. This dataset contains the sentences collected from daily newspaper of the year 2015-2016. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Since, we there was no publicly available standard Nepali NER dataset and did not receive any dataset from the previous researchers, we had to create our own dataset. This dataset contains the sentences collected from daily newspaper of the year 2015-2016. This dataset has three major classes Person (PER), Location (LOC) and Organization (ORG). Pre-processing was performed on the text before creation of the dataset, for example all punctuations and numbers besides ',', '-', '|' and '.' were removed. Currently, the dataset is in standard CoNLL-2003 IO formatBIBREF25."
],
"extractive_spans": [
"daily newspaper of the year 2015-2016"
],
"free_form_answer": "",
"highlighted_evidence": [
"This dataset contains the sentences collected from daily newspaper of the year 2015-2016."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"9099a2921c41dbc0fcf63ebb69db66835f38a63f",
"b59fa84272048d77bcf339f2ab8fe30206206b5f"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"0ab22973fee7967eba952361618cefeba1f1b9bf",
"a8696acdb8f68288772ce9d245e240fb4aeb52b1"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Dataset statistics",
"Table TABREF24 presents the total entities (PER, LOC, ORG and MISC) from both of the dataset used in our experiments. The dataset is divided into three parts with 64%, 16% and 20% of the total dataset into training set, development set and test set respectively."
],
"extractive_spans": [],
"free_form_answer": "OurNepali contains 3 different types of entities, ILPRL contains 4 different types of entities",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Dataset statistics",
"Table TABREF24 presents the total entities (PER, LOC, ORG and MISC) from both of the dataset used in our experiments."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Since, we there was no publicly available standard Nepali NER dataset and did not receive any dataset from the previous researchers, we had to create our own dataset. This dataset contains the sentences collected from daily newspaper of the year 2015-2016. This dataset has three major classes Person (PER), Location (LOC) and Organization (ORG). Pre-processing was performed on the text before creation of the dataset, for example all punctuations and numbers besides ',', '-', '|' and '.' were removed. Currently, the dataset is in standard CoNLL-2003 IO formatBIBREF25."
],
"extractive_spans": [
"three"
],
"free_form_answer": "",
"highlighted_evidence": [
"This dataset has three major classes Person (PER), Location (LOC) and Organization (ORG)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"b706f66b3799b95228124546b6c533715478b77d",
"f3f3377c3b31eee7af5548080b96d765d615106e"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Dataset statistics",
"After much time, we received the dataset from Bal Krishna Bal, ILPRL, KU. This dataset follows standard CoNLL-2003 IOB formatBIBREF25 with POS tags. This dataset is prepared by ILPRL Lab, KU and KEIV Technologies. Few corrections like correcting the NER tags had to be made on the dataset. The statistics of both the dataset is presented in table TABREF23."
],
"extractive_spans": [],
"free_form_answer": "3606 sentences",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Dataset statistics",
"The statistics of both the dataset is presented in table TABREF23.\n\n"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"After much time, we received the dataset from Bal Krishna Bal, ILPRL, KU. This dataset follows standard CoNLL-2003 IOB formatBIBREF25 with POS tags. This dataset is prepared by ILPRL Lab, KU and KEIV Technologies. Few corrections like correcting the NER tags had to be made on the dataset. The statistics of both the dataset is presented in table TABREF23.",
"FLOAT SELECTED: Table 1: Dataset statistics"
],
"extractive_spans": [],
"free_form_answer": "Dataset contains 3606 total sentences and 79087 total entities.",
"highlighted_evidence": [
"The statistics of both the dataset is presented in table TABREF23.",
"FLOAT SELECTED: Table 1: Dataset statistics"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"640a33539487a8a1da720376794106f45f21b183",
"f68d1c1e18165a41eb367c3d676212d07a2863e3"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 5: Comparison of different variation of our models"
],
"extractive_spans": [],
"free_form_answer": "On OurNepali test dataset Grapheme-level representation model achieves average 0.16% improvement, on ILPRL test dataset it achieves maximum 1.62% improvement",
"highlighted_evidence": [
"FLOAT SELECTED: Table 5: Comparison of different variation of our models"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We also present a neural architecture BiLSTM+CNN(grapheme-level) which turns out to be performing on par with BiLSTM+CNN(character-level) under the same configuration. We believe this will not only help Nepali language but also other languages falling under the umbrellas of Devanagari languages. Our model BiLSTM+CNN(grapheme-level) and BiLSTM+CNN(G)+POS outperforms all other model experimented in OurNepali and ILPRL dataset respectively."
],
"extractive_spans": [
"BiLSTM+CNN(grapheme-level) which turns out to be performing on par with BiLSTM+CNN(character-level) under the same configuration"
],
"free_form_answer": "",
"highlighted_evidence": [
"We also present a neural architecture BiLSTM+CNN(grapheme-level) which turns out to be performing on par with BiLSTM+CNN(character-level) under the same configuration."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"95b461da825fc2564cc58725fdab3a826048a5b2",
"caada640f1078dd87d2a6e86126e774d032379e6"
],
"answer": [
{
"evidence": [
"Similar approaches has been applied to many South Asian languages like HindiBIBREF6, IndonesianBIBREF7, BengaliBIBREF19 and In this paper, we present the neural network architecture for NER task in Nepali language, which doesn't require any manual feature engineering nor any data pre-processing during training. First we are comparing BiLSTMBIBREF14, BiLSTM+CNNBIBREF20, BiLSTM+CRFBIBREF1, BiLSTM+CNN+CRFBIBREF2 models with CNN modelBIBREF0 and Stanford CRF modelBIBREF21. Secondly, we show the comparison between models trained on general word embeddings, word embedding + character-level embedding, word embedding + part-of-speech(POS) one-hot encoding and word embedding + grapheme clustered or sub-word embeddingBIBREF22. The experiments were performed on the dataset that we created and on the dataset received from ILPRL lab. Our extensive study shows that augmenting word embedding with character or grapheme-level representation and POS one-hot encoding vector yields better results compared to using general word embedding alone."
],
"extractive_spans": [
"BiLSTMBIBREF14, BiLSTM+CNNBIBREF20, BiLSTM+CRFBIBREF1, BiLSTM+CNN+CRFBIBREF2",
"CNN modelBIBREF0 and Stanford CRF modelBIBREF21"
],
"free_form_answer": "",
"highlighted_evidence": [
"First we are comparing BiLSTMBIBREF14, BiLSTM+CNNBIBREF20, BiLSTM+CRFBIBREF1, BiLSTM+CNN+CRFBIBREF2 models with CNN modelBIBREF0 and Stanford CRF modelBIBREF21."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Similar approaches has been applied to many South Asian languages like HindiBIBREF6, IndonesianBIBREF7, BengaliBIBREF19 and In this paper, we present the neural network architecture for NER task in Nepali language, which doesn't require any manual feature engineering nor any data pre-processing during training. First we are comparing BiLSTMBIBREF14, BiLSTM+CNNBIBREF20, BiLSTM+CRFBIBREF1, BiLSTM+CNN+CRFBIBREF2 models with CNN modelBIBREF0 and Stanford CRF modelBIBREF21. Secondly, we show the comparison between models trained on general word embeddings, word embedding + character-level embedding, word embedding + part-of-speech(POS) one-hot encoding and word embedding + grapheme clustered or sub-word embeddingBIBREF22. The experiments were performed on the dataset that we created and on the dataset received from ILPRL lab. Our extensive study shows that augmenting word embedding with character or grapheme-level representation and POS one-hot encoding vector yields better results compared to using general word embedding alone."
],
"extractive_spans": [
"BiLSTM",
"BiLSTM+CNN",
"BiLSTM+CRF",
"BiLSTM+CNN+CRF",
"CNN",
"Stanford CRF"
],
"free_form_answer": "",
"highlighted_evidence": [
"Similar approaches has been applied to many South Asian languages like HindiBIBREF6, IndonesianBIBREF7, BengaliBIBREF19 and In this paper, we present the neural network architecture for NER task in Nepali language, which doesn't require any manual feature engineering nor any data pre-processing during training. First we are comparing BiLSTMBIBREF14, BiLSTM+CNNBIBREF20, BiLSTM+CRFBIBREF1, BiLSTM+CNN+CRFBIBREF2 models with CNN modelBIBREF0 and Stanford CRF modelBIBREF21. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"two",
"two",
"two",
"two",
"two",
"two",
"two",
"infinity",
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"What is the best model?",
"How many sentences does the dataset contain?",
"Do the authors train a Naive Bayes classifier on their dataset?",
"What is the baseline?",
"Which machine learning models do they explore?",
"What is the size of the dataset?",
"What is the source of their dataset?",
"Do they try to use byte-pair encoding representations?",
"How many different types of entities exist in the dataset?",
"How big is the new Nepali NER dataset?",
"What is the performance improvement of the grapheme-level representation model over the character-level model?",
"Which models are used to solve NER for Nepali?"
],
"question_id": [
"567dc9bad8428ea9a2658c88203a0ed0f8da0dc3",
"d51dc36fbf6518226b8e45d4c817e07e8f642003",
"d8627ba08b7342e473b8a2b560baa8cdbae3c7fd",
"cb77d6a74065cb05318faf57e7ceca05e126a80d",
"8a7615fc6ff1de287d36ab21bf2c6a3b2914f73d",
"a1b3e2107302c5a993baafbe177684ae88d6f505",
"bb2de20ee5937da7e3e6230e942bec7b6e8f61ee",
"1170e4ee76fa202cabac9f621e8fbeb4a6c5f094",
"1462eb312944926469e7cee067dfc7f1267a2a8c",
"f59f1f5b528a2eec5cfb1e49c87699e0c536cc45",
"9bd080bb2a089410fd7ace82e91711136116af6c",
"6d1217b3d9cfb04be7fcd2238666fa02855ce9c5"
],
"question_writer": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"language recognition",
"language recognition",
"language recognition",
"",
"",
"",
"",
"Entity",
"Entity",
"Entity",
"Entity",
"Entity"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: The grapheme level convolution neural network to extract grapheme representation. The dropout layer is applied after maxpooling layer.",
"Figure 2: End-to-end model architecture of our neural network. W-EMB, GRAPH, POS represents pre-trained word embeddings, grapheme representations and POS one-hot encoding vectors. GRAPH is obtained from CNN as shown in figure 1. The dashed line implies the application of dropout.",
"Figure 3: Format of a sample sentence in our dataset.",
"Figure 4: 2D Visualization of nearest neighbor word using PCA for a sample word n pAl",
"Table 1: Dataset statistics",
"Table 2: Dataset division statistics. The number presented are total count of entities token in each set.",
"Table 3: Effect of different embedding with Bi-LSTM.",
"Figure 5: F1 score based on different dropout values using fastText embeddings (Skip Gram). All other hyper-parameter used for this evaluation are presented in table 4.",
"Table 4: Hyper-parameters of our experiments",
"Figure 6: Sample output of the best model from ILPRL test dataset. First, second and third column indicates word to be predicted, ground truth and predicted truth respectively. We can see that not all the tags are classified correctly.",
"Table 5: Comparison of different variation of our models",
"Table 6: Comparison with previous models based on Test F1 score",
"Table 7: Entity-wise comparison using best model for respective dataset. MISC-entity is not available for OurNepali dataset."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"4-Figure3-1.png",
"5-Figure4-1.png",
"5-Table1-1.png",
"5-Table2-1.png",
"6-Table3-1.png",
"6-Figure5-1.png",
"6-Table4-1.png",
"7-Figure6-1.png",
"8-Table5-1.png",
"8-Table6-1.png",
"8-Table7-1.png"
]
} | [
"How many sentences does the dataset contain?",
"What is the baseline?",
"What is the size of the dataset?",
"How many different types of entities exist in the dataset?",
"How big is the new Nepali NER dataset?",
"What is the performance improvement of the grapheme-level representation model over the character-level model?"
] | [
[
"1908.05828-5-Table1-1.png",
"1908.05828-Dataset Statistics ::: OurNepali dataset-2"
],
[
"1908.05828-8-Table6-1.png",
"1908.05828-Related Work-3"
],
[
"1908.05828-Dataset Statistics ::: OurNepali dataset-0",
"1908.05828-5-Table1-1.png",
"1908.05828-Dataset Statistics ::: ILPRL dataset-0"
],
[
"1908.05828-Dataset Statistics ::: ILPRL dataset-1",
"1908.05828-5-Table1-1.png",
"1908.05828-Dataset Statistics ::: OurNepali dataset-0"
],
[
"1908.05828-5-Table1-1.png",
"1908.05828-Dataset Statistics ::: ILPRL dataset-0"
],
[
"1908.05828-Conclusion and Future work-1",
"1908.05828-8-Table5-1.png"
]
] | [
"3606",
"Bam et al. SVM, Ma and Hovy w/glove, Lample et al. w/fastText, Lample et al. w/word2vec",
"ILPRL contains 548 sentences, OurNepali contains 3606 sentences",
"OurNepali contains 3 different types of entities, ILPRL contains 4 different types of entities",
"Dataset contains 3606 total sentences and 79087 total entities.",
"On OurNepali test dataset Grapheme-level representation model achieves average 0.16% improvement, on ILPRL test dataset it achieves maximum 1.62% improvement"
] | 112 |
1909.13104 | Attention-based method for categorizing different types of online harassment language | In the era of social media and networking platforms, Twitter has been doomed for abuse and harassment toward users specifically women. Monitoring the contents including sexism and sexual harassment in traditional media is easier than monitoring on the online social media platforms like Twitter, because of the large amount of user generated content in these media. So, the research about the automated detection of content containing sexual or racist harassment is an important issue and could be the basis for removing that content or flagging it for human evaluation. Previous studies have been focused on collecting data about sexism and racism in very broad terms. However, there is not much study focusing on different types of online harassment alone attracting natural language processing techniques. In this work, we present an attention-based approach for the detection of harassment in tweets and the detection of different types of harassment as well. Our approach is based on the Recurrent Neural Networks and particularly we are using a deep, classification specific attention mechanism. Moreover, we present a comparison between different variations of this attention-based approach. | {
"paragraphs": [
[
"In the era of social media and networking platforms, Twitter has been doomed for abuse and harassment toward users specifically women. In fact, online harassment becomes very common in Twitter and there have been a lot of critics that Twitter has become the platform for many racists, misogynists and hate groups which can express themselves openly. Online harassment is usually in the form of verbal or graphical formats and is considered harassment, because it is neither invited nor has the consent of the receipt. Monitoring the contents including sexism and sexual harassment in traditional media is easier than monitoring on the online social media platforms like Twitter. The main reason is because of the large amount of user generated content in these media. So, the research about the automated detection of content containing sexual harassment is an important issue and could be the basis for removing that content or flagging it for human evaluation. The basic goal of this automatic classification is that it will significantly improve the process of detecting these types of hate speech on social media by reducing the time and effort required by human beings.",
"Previous studies have been focused on collecting data about sexism and racism in very broad terms or have proposed two categories of sexism as benevolent or hostile sexism BIBREF0, which undermines other types of online harassment. However, there is no much study focusing on different types online harassment alone attracting natural language processing techniques.",
"In this paper we present our work, which is a part of the SociaL Media And Harassment Competition of the ECML PKDD 2019 Conference. The topic of the competition is the classification of different types of harassment and it is divided in two tasks. The first one is the classification of the tweets in harassment and non-harassment categories, while the second one is the classification in specific harassment categories like indirect harassment, physical and sexual harassment as well. We are using the dataset of the competition, which includes text from tweets having the aforementioned categories. Our approach is based on the Recurrent Neural Networks and particularly we are using a deep, classification specific attention mechanism. Moreover, we present a comparison between different variations of this attention-based approach like multi-attention and single attention models. The next Section includes a short description of the related work, while the third Section includes a description of the dataset. After that, we describe our methodology. Finally, we describe the experiments and we present the results and our conclusion."
],
[
"Waseem et al. BIBREF1 were the first who collected hateful tweets and categorized them into being sexist, racist or neither. However, they did not provide specific definitions for each category. Jha and Mamidi BIBREF0 focused on just sexist tweets and proposed two categories of hostile and benevolent sexism. However, these categories were general as they ignored other types of sexism happening in social media. Sharifirad S. and Matwin S. BIBREF2 proposed complimentary categories of sexist language inspired from social science work. They categorized the sexist tweets into the categories of indirect harassment, information threat, sexual harassment and physical harassment. In the next year the same authors proposed BIBREF3 a more comprehensive categorization of online harassment in social media e.g. twitter into the following categories, indirect harassment, information threat, sexual harassment, physical harassment and not sexist.",
"For the detection of hate speech in social media like twitter, many approaches have been proposed. Jha and Mamidi BIBREF0 tested support vector machine, bi-directional RNN encoder-decoder and FastText on hostile and benevolent sexist tweets. They also used SentiWordNet and subjectivity lexicon on the extracted phrases to show the polarity of the tweets. Sharifirad et al. BIBREF4 trained, tested and evaluated different classification methods on the SemEval2018 dataset and chose the classifier with the highest accuracy for testing on each category of sexist tweets to know the mental state and the affectual state of the user who tweets in each category. To overcome the limitations of small data sets on sexist speech detection, Sharifirad S. et al. BIBREF5 have applied text augmentation and text generation with certain success. They have generated new tweets by replacing words in order to increase the size of our training set. Moreover, in the presented text augmentation approach, the number of tweets in each class remains the same, but their words are augmented with words extracted from their ConceptNet relations and their description extracted from Wikidata. Zhang et al. BIBREF6 combined convolutional and gated recurrent networks to detect hate speech in tweets. Others have proposed different methods, which are not based on deep learning. Burnap and Williams BIBREF7 used Support Vector Machines, Random Forests and a meta-classifier to distinguish between hateful and non-hateful messages. A survey of recent research in the field is presented in BIBREF8. For the problem of the hate speech detection a few approaches have been proposed that are based on the Attention mechanism. Pavlopoulos et al. BIBREF9 have proposed a novel, classification-specific attention mechanism that improves the performance of the RNN further for the detection of abusive content in the web. Xie et al. BIBREF10 for emotion intensity prediction, which is a similar problem to ours, have proposed a novel attention mechanism for CNN model that associates attention-based weights for every convolution window. Park and Fung BIBREF11 transformed the classification into a 2-step problem, where abusive text first is distinguished from the non-abusive, and then the class of abuse (Sexism or Racism) is determined. However, while the first part of the two step classification performs quite well, it falls short in detecting the particular class the abusive text belongs to. Pitsilis et al. BIBREF12 have proposed a detection scheme that is an ensemble of RNN classifiers, which incorporates various features associated with user related information, such as the users’ tendency towards racism or sexism"
],
[
"The dataset from Twitter that we are using in our work, consists of a train set, a validation set and a test set. It was published for the \"First workshop on categorizing different types of online harassment languages in social media\". The whole dataset is divided into two categories, which are harassment and non-harassment tweets. Moreover, considering the type of the harassment, the tweets are divided into three sub-categories which are indirect harassment, sexual and physical harassment. We can see in Table TABREF1 the class distribution of our dataset. One important issue here is that the categories of indirect and physical harassment seem to be more rare in the train set than in the validation and test sets. To tackle this issue, as we describe in the next section, we are performing data augmentation techniques. However, the dataset is imbalanced and this has a significant impact in our results."
],
[
"As described before one crucial issue that we are trying to tackle in this work is that the given dataset is imbalanced. Particularly, there are only a few instances from indirect and physical harassment categories respectively in the train set, while there are much more in the validation and test sets for these categories. To tackle this issue we applying a back-translation method BIBREF13, where we translate indirect and physical harassment tweets of the train set from english to german, french and greek. After that, we translate them back to english in order to achieve data augmentation. These \"noisy\" data that have been translated back, increase the number of indirect and physical harassment tweets and boost significantly the performance of our models.",
"Another way to enrich our models is the use of pre-trained word embeddings from 2B Twitter data BIBREF14 having 27B tokens, for the initialization of the embedding layer."
],
[
"Before training our models we are processing the given tweets using a tweet pre-processor. The scope here is the cleaning and tokenization of the dataset."
],
[
"We are presenting an attention-based approach for the problem of the harassment detection in tweets. In this section, we describe the basic approach of our work. We are using RNN models because of their ability to deal with sequence information. The RNN model is a chain of GRU cells BIBREF15 that transforms the tokens $w_{1}, w_{2},..., w_{k}$ of each tweet to the hidden states $h_{1}, h_{2},..., h_{k}$, followed by an LR Layer that uses $h_{k}$ to classify the tweet as harassment or non-harassment (similarly for the other categories). Given the vocabulary V and a matrix E $\\in $ $R^{d \\times \\vert V \\vert }$ containing d-dimensional word embeddings, an initial $h_{0}$ and a tweet $w = <w_{1},.., w_{k}>$, the RNN computes $h_{1}, h_{2},..., h_{k}$, with $h_{t} \\in R^{m}$, as follows:",
"where $h^{^{\\prime }}_{t} \\in R^{m}$ is the proposed hidden state at position t, obtained using the word embedding $x_{t}$ of token $w_{t}$ and the previous hidden state $h_{t-1}$, $\\odot $ represents the element-wise multiplication, $r_{t} \\in R^{m}$ is the reset gate, $z_{t} \\in R^{m}$ is the update gate, $\\sigma $ is the sigmoid function. Also $W_{h}, W_{z}, W_{r} \\in R^{m \\times d}$ and $U_{h}, U_{z}, U_{r} \\in R^{m \\times m}$, $b_{h}, b_{z}, b_{r} \\in R^{m}$. After the computation of state $h_{k}$ the LR Layer estimates the probability that tweet w should be considered as harassment, with $W_{p} \\in R^{1 \\times m}, b_{p} \\in R$:",
"We would like to add an attention mechanism similar to the one presented in BIBREF9, so that the LR Layer will consider the weighted sum $h_{sum}$ of all the hidden states instead of $h_{k}$:",
"$h_{sum} = \\sum _{t=1}^{k} \\alpha _{t}h_{t}$",
"$P_{attentionRNN} = \\sigma (W_{p}h_{sum} + b_{p})$",
"Alternatively, we could pass $h_{sum}$ through an MLP with k layers and then the LR layer will estimate the corresponding probability. More formally,",
"$P_{attentionRNN} = \\sigma (W_{p}h_{*} + b_{p})$",
"where $h_{*}$ is the state that comes out from the MLP. The weights $\\alpha _{t}$ are produced by an attention mechanism presented in BIBREF9 (see Fig. FIGREF7), which is an MLP with l layers. This attention mechanism differs from most previous ones BIBREF16, BIBREF17, because it is used in a classification setting, where there is no previously generated output sub-sequence to drive the attention. It assigns larger weights $\\alpha _{t}$ to hidden states $h_{t}$ corresponding to positions, where there is more evidence that the tweet should be harassment (or any other specific type of harassment) or not. In our work we are using four attention mechanisms instead of one that is presented in BIBREF9. Particularly, we are using one attention mechanism per category. Another element that differentiates our approach from Pavlopoulos et al. BIBREF9 is that we are using a projection layer for the word embeddings (see Fig. FIGREF2). In the next subsection we describe the Model Architecture of our approach."
],
[
"The Embedding Layer is initialized using pre-trained word embeddings of dimension 200 from Twitter data that have been described in a previous sub-section. After the Embedding Layer, we are applying a Spatial Dropout Layer, which drops a certain percentage of dimensions from each word vector in the training sample. The role of Dropout is to improve generalization performance by preventing activations from becoming strongly correlated BIBREF18. Spatial Dropout, which has been proposed in BIBREF19, is an alternative way to use dropout with convolutional neural networks as it is able to dropout entire feature maps from the convolutional layer which are then not used during pooling. After that, the word embeddings are passing through a one-layer MLP, which has tanh as activation function and 128 hidden units, in order to project them in the vector space of our problem considering that they have been pre-trained using text that has a different subject. In the next step the embeddings are fed in a unidirectional GRU having 1 Stacked Layer and size 128. We prefer GRU than LSTM, because it is more efficient computationally. Also the basic advantage of LSTM which is the ability to keep in memory large text documents, does not hold here, because tweets supposed to be not too large text documents. The output states of the GRU are passing through four self-attentions like the one described above BIBREF9, because we are using one attention per category (see Fig. FIGREF7). Finally, a one-layer MLP having 128 nodes and ReLU as activation function computes the final score for each category. At this final stage we have avoided using a softmax function to decide the harassment type considering that the tweet is a harassment, otherwise we had to train our models taking into account only the harassment tweets and this might have been a problem as the dataset is not large enough."
],
[
"In this subsection we are giving the details of the training process of our models. Moreover, we are describing the different models that we compare in our experiments.",
"Batch size which pertains to the amount of training samples to consider at a time for updating our network weights, is set to 32, because our dataset is not large and small batches might help to generalize better. Also, we set other hyperparameters as: epochs = 20, patience = 10. As early stopping criterion we choose the average AUC, because our dataset is imbalanced.",
"The training process is based on the optimization of the loss function mentioned below and it is carried out with the Adam optimizer BIBREF20, which is known for yielding quicker convergence. We set the learning rate equal to 0.001:",
"$L = \\frac{1}{2}BCE(harassment) + \\frac{1}{2}(\\frac{1}{5}BCE(sexualH) + \\frac{2}{5}BCE(indirectH)+\\frac{2}{5}BCE(physicalH))$",
"where BCE is the binary cross-entropy loss function,",
"$BCE = -\\frac{1}{n}\\sum _{i=1}^{n}[y_{i}log(y^{^{\\prime }}_{i}) + (1 - y_{i})log(1 - y^{^{\\prime }}_{i}))]$",
"$i$ denotes the $i$th training sample, $y$ is the binary representation of true harassment label, and $y^{^{\\prime }}$ is the predicted probability. In the loss function we have applied equal weight to both tasks. However, in the second task (type of harassment classification) we have applied higher weight in the categories that it is harder to predict due to the problem of the class imbalance between the training, validation and test sets respectively."
],
[
"Each model produces four scores and each score is the probability that a tweet includes harassment language, indirect, physical and sexual harassment language respectively. For any tweet, we first check the score of the harassment language and if it is less than a specified threshold, then the harassment label is zero, so the other three labels are zero as well. If it is greater than or equal to that threshold, then the harassment label is one and the type of harassment is the one among these three having that has the greatest score (highest probability). We set this threshold equal to 0.33.",
"We compare eight different models in our experiments. Four of them have a Projected Layer (see Fig. FIGREF2), while the others do not have, and this is the only difference between these two groups of our models. So, we actually include four models in our experiments (having a projected layer or not). Firstly, LastStateRNN is the classic RNN model, where the last state passes through an MLP and then the LR Layer estimates the corresponding probability. In contrast, in the AvgRNN model we consider the average vector of all states that come out of the cells. The AttentionRNN model is the one that it has been presented in BIBREF9. Moreover, we introduce the MultiAttentionRNN model for the harassment language detection, which instead of one attention, it includes four attentions, one for each category.",
"We have evaluated our models considering the F1 Score, which is the harmonic mean of precision and recall. We have run ten times the experiment for each model and considered the average F1 Score. The results are mentioned in Table TABREF11. Considering F1 Macro the models that include the multi-attention mechanism outperform the others and particularly the one with the Projected Layer has the highest performance. In three out of four pairs of models, the ones with the Projected Layer achieved better performance, so in most cases the addition of the Projected Layer had a significant enhancement."
],
[
"We present an attention-based approach for the detection of harassment language in tweets and the detection of different types of harassment as well. Our approach is based on the Recurrent Neural Networks and particularly we are using a deep, classification specific attention mechanism. Moreover, we present a comparison between different variations of this attention-based approach and a few baseline methods. According to the results of our experiments and considering the F1 Score, the multi-attention method having a projected layer, achieved the highest performance. Also, we tackled the problem of the imbalance between the training, validation and test sets performing the technique of back-translation.",
"In the future, we would like to perform more experiments with this dataset applying different models using BERT BIBREF21. Also, we would like to apply the models presented in this work, in other datasets about hate speech in social media."
]
],
"section_name": [
"Introduction",
"Related Work",
"Dataset description",
"Proposed methodology ::: Data augmentation",
"Proposed methodology ::: Text processing",
"Proposed methodology ::: RNN Model and Attention Mechanism",
"Proposed methodology ::: Model Architecture",
"Experiments ::: Training Models",
"Experiments ::: Evaluation and Results",
"Conclusion - Future work"
]
} | {
"answers": [
{
"annotation_id": [
"40534fea999957767a5c8ffa3574ab9110bd9e1c",
"7545dea5428d8d283a98b378f852d3532d399e2e"
],
"answer": [
{
"evidence": [
"As described before one crucial issue that we are trying to tackle in this work is that the given dataset is imbalanced. Particularly, there are only a few instances from indirect and physical harassment categories respectively in the train set, while there are much more in the validation and test sets for these categories. To tackle this issue we applying a back-translation method BIBREF13, where we translate indirect and physical harassment tweets of the train set from english to german, french and greek. After that, we translate them back to english in order to achieve data augmentation. These \"noisy\" data that have been translated back, increase the number of indirect and physical harassment tweets and boost significantly the performance of our models."
],
"extractive_spans": [
"english"
],
"free_form_answer": "",
"highlighted_evidence": [
"To tackle this issue we applying a back-translation method BIBREF13, where we translate indirect and physical harassment tweets of the train set from english to german, french and greek. After that, we translate them back to english in order to achieve data augmentation."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"As described before one crucial issue that we are trying to tackle in this work is that the given dataset is imbalanced. Particularly, there are only a few instances from indirect and physical harassment categories respectively in the train set, while there are much more in the validation and test sets for these categories. To tackle this issue we applying a back-translation method BIBREF13, where we translate indirect and physical harassment tweets of the train set from english to german, french and greek. After that, we translate them back to english in order to achieve data augmentation. These \"noisy\" data that have been translated back, increase the number of indirect and physical harassment tweets and boost significantly the performance of our models."
],
"extractive_spans": [
"english"
],
"free_form_answer": "",
"highlighted_evidence": [
"As described before one crucial issue that we are trying to tackle in this work is that the given dataset is imbalanced. Particularly, there are only a few instances from indirect and physical harassment categories respectively in the train set, while there are much more in the validation and test sets for these categories. To tackle this issue we applying a back-translation method BIBREF13, where we translate indirect and physical harassment tweets of the train set from english to german, french and greek. After that, we translate them back to english in order to achieve data augmentation. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"128df2a3c1979dbaa71009bf848375bfdc4bac03",
"ca26a0e40cb46399429e0da78bff285798b80dc4"
],
"answer": [
{
"evidence": [
"We compare eight different models in our experiments. Four of them have a Projected Layer (see Fig. FIGREF2), while the others do not have, and this is the only difference between these two groups of our models. So, we actually include four models in our experiments (having a projected layer or not). Firstly, LastStateRNN is the classic RNN model, where the last state passes through an MLP and then the LR Layer estimates the corresponding probability. In contrast, in the AvgRNN model we consider the average vector of all states that come out of the cells. The AttentionRNN model is the one that it has been presented in BIBREF9. Moreover, we introduce the MultiAttentionRNN model for the harassment language detection, which instead of one attention, it includes four attentions, one for each category."
],
"extractive_spans": [
" LastStateRNN",
"AvgRNN",
"AttentionRNN"
],
"free_form_answer": "",
"highlighted_evidence": [
"Firstly, LastStateRNN is the classic RNN model, where the last state passes through an MLP and then the LR Layer estimates the corresponding probability. In contrast, in the AvgRNN model we consider the average vector of all states that come out of the cells. The AttentionRNN model is the one that it has been presented in BIBREF9. Moreover, we introduce the MultiAttentionRNN model for the harassment language detection, which instead of one attention, it includes four attentions, one for each category."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We compare eight different models in our experiments. Four of them have a Projected Layer (see Fig. FIGREF2), while the others do not have, and this is the only difference between these two groups of our models. So, we actually include four models in our experiments (having a projected layer or not). Firstly, LastStateRNN is the classic RNN model, where the last state passes through an MLP and then the LR Layer estimates the corresponding probability. In contrast, in the AvgRNN model we consider the average vector of all states that come out of the cells. The AttentionRNN model is the one that it has been presented in BIBREF9. Moreover, we introduce the MultiAttentionRNN model for the harassment language detection, which instead of one attention, it includes four attentions, one for each category."
],
"extractive_spans": [
"LastStateRNN",
"AvgRNN",
"AttentionRNN "
],
"free_form_answer": "",
"highlighted_evidence": [
"We compare eight different models in our experiments. Four of them have a Projected Layer (see Fig. FIGREF2), while the others do not have, and this is the only difference between these two groups of our models. So, we actually include four models in our experiments (having a projected layer or not). Firstly, LastStateRNN is the classic RNN model, where the last state passes through an MLP and then the LR Layer estimates the corresponding probability. In contrast, in the AvgRNN model we consider the average vector of all states that come out of the cells. The AttentionRNN model is the one that it has been presented in BIBREF9. Moreover, we introduce the MultiAttentionRNN model for the harassment language detection, which instead of one attention, it includes four attentions, one for each category.\n\n"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"5862eac5c0b6fb4cb378833aa6864c56d22e72c9",
"911d57a8be975769c492bc4aee171184d2e06bf1"
],
"answer": [
{
"evidence": [
"We have evaluated our models considering the F1 Score, which is the harmonic mean of precision and recall. We have run ten times the experiment for each model and considered the average F1 Score. The results are mentioned in Table TABREF11. Considering F1 Macro the models that include the multi-attention mechanism outperform the others and particularly the one with the Projected Layer has the highest performance. In three out of four pairs of models, the ones with the Projected Layer achieved better performance, so in most cases the addition of the Projected Layer had a significant enhancement."
],
"extractive_spans": [],
"free_form_answer": "the model with multi-attention mechanism and a projected layer",
"highlighted_evidence": [
"We have evaluated our models considering the F1 Score, which is the harmonic mean of precision and recall. We have run ten times the experiment for each model and considered the average F1 Score. The results are mentioned in Table TABREF11. Considering F1 Macro the models that include the multi-attention mechanism outperform the others and particularly the one with the Projected Layer has the highest performance. In three out of four pairs of models, the ones with the Projected Layer achieved better performance, so in most cases the addition of the Projected Layer had a significant enhancement.\n\n"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We have evaluated our models considering the F1 Score, which is the harmonic mean of precision and recall. We have run ten times the experiment for each model and considered the average F1 Score. The results are mentioned in Table TABREF11. Considering F1 Macro the models that include the multi-attention mechanism outperform the others and particularly the one with the Projected Layer has the highest performance. In three out of four pairs of models, the ones with the Projected Layer achieved better performance, so in most cases the addition of the Projected Layer had a significant enhancement."
],
"extractive_spans": [
"Projected Layer"
],
"free_form_answer": "",
"highlighted_evidence": [
"In three out of four pairs of models, the ones with the Projected Layer achieved better performance, so in most cases the addition of the Projected Layer had a significant enhancement."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"7c2e8dc0aaaf2a652d7b729f546f19831fb7af71",
"fefb909937d960954b2498a30a87eb7a40d5210b"
],
"answer": [
{
"evidence": [
"We compare eight different models in our experiments. Four of them have a Projected Layer (see Fig. FIGREF2), while the others do not have, and this is the only difference between these two groups of our models. So, we actually include four models in our experiments (having a projected layer or not). Firstly, LastStateRNN is the classic RNN model, where the last state passes through an MLP and then the LR Layer estimates the corresponding probability. In contrast, in the AvgRNN model we consider the average vector of all states that come out of the cells. The AttentionRNN model is the one that it has been presented in BIBREF9. Moreover, we introduce the MultiAttentionRNN model for the harassment language detection, which instead of one attention, it includes four attentions, one for each category."
],
"extractive_spans": [],
"free_form_answer": "classic RNN model, avgRNN model, attentionRNN model and multiattention RNN model with and without a projected layer",
"highlighted_evidence": [
"We compare eight different models in our experiments. Four of them have a Projected Layer (see Fig. FIGREF2), while the others do not have, and this is the only difference between these two groups of our models. So, we actually include four models in our experiments (having a projected layer or not). Firstly, LastStateRNN is the classic RNN model, where the last state passes through an MLP and then the LR Layer estimates the corresponding probability. In contrast, in the AvgRNN model we consider the average vector of all states that come out of the cells. The AttentionRNN model is the one that it has been presented in BIBREF9. Moreover, we introduce the MultiAttentionRNN model for the harassment language detection, which instead of one attention, it includes four attentions, one for each category."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"where $h_{*}$ is the state that comes out from the MLP. The weights $\\alpha _{t}$ are produced by an attention mechanism presented in BIBREF9 (see Fig. FIGREF7), which is an MLP with l layers. This attention mechanism differs from most previous ones BIBREF16, BIBREF17, because it is used in a classification setting, where there is no previously generated output sub-sequence to drive the attention. It assigns larger weights $\\alpha _{t}$ to hidden states $h_{t}$ corresponding to positions, where there is more evidence that the tweet should be harassment (or any other specific type of harassment) or not. In our work we are using four attention mechanisms instead of one that is presented in BIBREF9. Particularly, we are using one attention mechanism per category. Another element that differentiates our approach from Pavlopoulos et al. BIBREF9 is that we are using a projection layer for the word embeddings (see Fig. FIGREF2). In the next subsection we describe the Model Architecture of our approach."
],
"extractive_spans": [
" four attention mechanisms instead of one",
"a projection layer for the word embeddings"
],
"free_form_answer": "",
"highlighted_evidence": [
" In our work we are using four attention mechanisms instead of one that is presented in BIBREF9. ",
"Another element that differentiates our approach from Pavlopoulos et al. BIBREF9 is that we are using a projection layer for the word embeddings (see Fig. FIGREF2)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"5a1e23edaec4a05a0280cd3f8e98f7add72dae56",
"c96694cc2fda022731c2f6df5a1aef2a837273af"
],
"answer": [
{
"evidence": [
"In this paper we present our work, which is a part of the SociaL Media And Harassment Competition of the ECML PKDD 2019 Conference. The topic of the competition is the classification of different types of harassment and it is divided in two tasks. The first one is the classification of the tweets in harassment and non-harassment categories, while the second one is the classification in specific harassment categories like indirect harassment, physical and sexual harassment as well. We are using the dataset of the competition, which includes text from tweets having the aforementioned categories. Our approach is based on the Recurrent Neural Networks and particularly we are using a deep, classification specific attention mechanism. Moreover, we present a comparison between different variations of this attention-based approach like multi-attention and single attention models. The next Section includes a short description of the related work, while the third Section includes a description of the dataset. After that, we describe our methodology. Finally, we describe the experiments and we present the results and our conclusion."
],
"extractive_spans": [],
"free_form_answer": "Twitter dataset provided by the organizers",
"highlighted_evidence": [
"We are using the dataset of the competition, which includes text from tweets having the aforementioned categories. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this paper we present our work, which is a part of the SociaL Media And Harassment Competition of the ECML PKDD 2019 Conference. The topic of the competition is the classification of different types of harassment and it is divided in two tasks. The first one is the classification of the tweets in harassment and non-harassment categories, while the second one is the classification in specific harassment categories like indirect harassment, physical and sexual harassment as well. We are using the dataset of the competition, which includes text from tweets having the aforementioned categories. Our approach is based on the Recurrent Neural Networks and particularly we are using a deep, classification specific attention mechanism. Moreover, we present a comparison between different variations of this attention-based approach like multi-attention and single attention models. The next Section includes a short description of the related work, while the third Section includes a description of the dataset. After that, we describe our methodology. Finally, we describe the experiments and we present the results and our conclusion."
],
"extractive_spans": [],
"free_form_answer": "The dataset from the SociaL Media And Harassment Competition of the ECML PKDD 2019 Conference.",
"highlighted_evidence": [
"In this paper we present our work, which is a part of the SociaL Media And Harassment Competition of the ECML PKDD 2019 Conference.",
"We are using the dataset of the competition, which includes text from tweets having the aforementioned categories."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"5c5902417510124fa422eaff3484a9394af53465",
"be1808fff34055b4d5653ff25c10606e00c94eae"
],
"answer": [
{
"evidence": [
"The dataset from Twitter that we are using in our work, consists of a train set, a validation set and a test set. It was published for the \"First workshop on categorizing different types of online harassment languages in social media\". The whole dataset is divided into two categories, which are harassment and non-harassment tweets. Moreover, considering the type of the harassment, the tweets are divided into three sub-categories which are indirect harassment, sexual and physical harassment. We can see in Table TABREF1 the class distribution of our dataset. One important issue here is that the categories of indirect and physical harassment seem to be more rare in the train set than in the validation and test sets. To tackle this issue, as we describe in the next section, we are performing data augmentation techniques. However, the dataset is imbalanced and this has a significant impact in our results."
],
"extractive_spans": [
"indirect harassment, sexual and physical harassment"
],
"free_form_answer": "",
"highlighted_evidence": [
"The dataset from Twitter that we are using in our work, consists of a train set, a validation set and a test set. It was published for the \"First workshop on categorizing different types of online harassment languages in social media\". The whole dataset is divided into two categories, which are harassment and non-harassment tweets. Moreover, considering the type of the harassment, the tweets are divided into three sub-categories which are indirect harassment, sexual and physical harassment."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this paper we present our work, which is a part of the SociaL Media And Harassment Competition of the ECML PKDD 2019 Conference. The topic of the competition is the classification of different types of harassment and it is divided in two tasks. The first one is the classification of the tweets in harassment and non-harassment categories, while the second one is the classification in specific harassment categories like indirect harassment, physical and sexual harassment as well. We are using the dataset of the competition, which includes text from tweets having the aforementioned categories. Our approach is based on the Recurrent Neural Networks and particularly we are using a deep, classification specific attention mechanism. Moreover, we present a comparison between different variations of this attention-based approach like multi-attention and single attention models. The next Section includes a short description of the related work, while the third Section includes a description of the dataset. After that, we describe our methodology. Finally, we describe the experiments and we present the results and our conclusion."
],
"extractive_spans": [
"indirect",
"physical",
"sexual"
],
"free_form_answer": "",
"highlighted_evidence": [
"The first one is the classification of the tweets in harassment and non-harassment categories, while the second one is the classification in specific harassment categories like indirect harassment, physical and sexual harassment as well."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"987f87b2e0fe20168279bc796bb866ffd70e03e8"
],
"answer": [
{
"evidence": [
"We compare eight different models in our experiments. Four of them have a Projected Layer (see Fig. FIGREF2), while the others do not have, and this is the only difference between these two groups of our models. So, we actually include four models in our experiments (having a projected layer or not). Firstly, LastStateRNN is the classic RNN model, where the last state passes through an MLP and then the LR Layer estimates the corresponding probability. In contrast, in the AvgRNN model we consider the average vector of all states that come out of the cells. The AttentionRNN model is the one that it has been presented in BIBREF9. Moreover, we introduce the MultiAttentionRNN model for the harassment language detection, which instead of one attention, it includes four attentions, one for each category."
],
"extractive_spans": [
"LastStateRNN",
"AvgRNN",
"AttentionRNN"
],
"free_form_answer": "",
"highlighted_evidence": [
"Firstly, LastStateRNN is the classic RNN model, where the last state passes through an MLP and then the LR Layer estimates the corresponding probability. In contrast, in the AvgRNN model we consider the average vector of all states that come out of the cells. The AttentionRNN model is the one that it has been presented in BIBREF9."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"0c8a16074f94429635f5568cc0f443a79e88a09d",
"5fbb7b3e42a18d9a102fa1b4bfe52ca54ef7e3de"
],
"answer": [
{
"evidence": [
"In this paper we present our work, which is a part of the SociaL Media And Harassment Competition of the ECML PKDD 2019 Conference. The topic of the competition is the classification of different types of harassment and it is divided in two tasks. The first one is the classification of the tweets in harassment and non-harassment categories, while the second one is the classification in specific harassment categories like indirect harassment, physical and sexual harassment as well. We are using the dataset of the competition, which includes text from tweets having the aforementioned categories. Our approach is based on the Recurrent Neural Networks and particularly we are using a deep, classification specific attention mechanism. Moreover, we present a comparison between different variations of this attention-based approach like multi-attention and single attention models. The next Section includes a short description of the related work, while the third Section includes a description of the dataset. After that, we describe our methodology. Finally, we describe the experiments and we present the results and our conclusion."
],
"extractive_spans": [],
"free_form_answer": "The dataset from the SociaL Media And Harassment Competition of the ECML PKDD 2019 Conference. ",
"highlighted_evidence": [
"In this paper we present our work, which is a part of the SociaL Media And Harassment Competition of the ECML PKDD 2019 Conference. ",
" We are using the dataset of the competition, which includes text from tweets having the aforementioned categories."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this paper we present our work, which is a part of the SociaL Media And Harassment Competition of the ECML PKDD 2019 Conference. The topic of the competition is the classification of different types of harassment and it is divided in two tasks. The first one is the classification of the tweets in harassment and non-harassment categories, while the second one is the classification in specific harassment categories like indirect harassment, physical and sexual harassment as well. We are using the dataset of the competition, which includes text from tweets having the aforementioned categories. Our approach is based on the Recurrent Neural Networks and particularly we are using a deep, classification specific attention mechanism. Moreover, we present a comparison between different variations of this attention-based approach like multi-attention and single attention models. The next Section includes a short description of the related work, while the third Section includes a description of the dataset. After that, we describe our methodology. Finally, we describe the experiments and we present the results and our conclusion.",
"The dataset from Twitter that we are using in our work, consists of a train set, a validation set and a test set. It was published for the \"First workshop on categorizing different types of online harassment languages in social media\". The whole dataset is divided into two categories, which are harassment and non-harassment tweets. Moreover, considering the type of the harassment, the tweets are divided into three sub-categories which are indirect harassment, sexual and physical harassment. We can see in Table TABREF1 the class distribution of our dataset. One important issue here is that the categories of indirect and physical harassment seem to be more rare in the train set than in the validation and test sets. To tackle this issue, as we describe in the next section, we are performing data augmentation techniques. However, the dataset is imbalanced and this has a significant impact in our results."
],
"extractive_spans": [],
"free_form_answer": "Twitter dataset provided by organizers containing harassment and non-harassment tweets",
"highlighted_evidence": [
"We are using the dataset of the competition, which includes text from tweets having the aforementioned categories.",
"The dataset from Twitter that we are using in our work, consists of a train set, a validation set and a test set. It was published for the \"First workshop on categorizing different types of online harassment languages in social media\". The whole dataset is divided into two categories, which are harassment and non-harassment tweets. Moreover, considering the type of the harassment, the tweets are divided into three sub-categories which are indirect harassment, sexual and physical harassment. We can see in Table TABREF1 the class distribution of our dataset. One important issue here is that the categories of indirect and physical harassment seem to be more rare in the train set than in the validation and test sets. To tackle this issue, as we describe in the next section, we are performing data augmentation techniques. However, the dataset is imbalanced and this has a significant impact in our results."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five",
"",
"",
""
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"",
"",
""
],
"question": [
"What language(s) is/are represented in the dataset?",
"What baseline model is used?",
"Which variation provides the best results on this dataset?",
"What are the different variations of the attention-based approach which are examined?",
"What dataset is used for this work?",
"What types of online harassment are studied?",
"What was the baseline?",
"What were the datasets used in this paper?"
],
"question_id": [
"1e775cf30784e6b1c2b573294a82e145a3f959bb",
"392fb87564c4f45d0d8d491a9bb217c4fce87f03",
"203337c15bd1ee05763c748391d295a1f6415b9b",
"d004ca2e999940ac5c1576046e30efa3059832fa",
"21548433abd21346659505296fb0576e78287a74",
"f0b2289cb887740f9255909018f400f028b1ef26",
"51b1142c1d23420dbf6d49446730b0e82b32137c",
"58355e2a782bf145b61ee2a3e0e426119985c179"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"twitter",
"twitter",
"twitter",
"twitter",
"twitter",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"",
"",
""
]
} | {
"caption": [
"Table 1. Class distribution of the dataset.",
"Fig. 1. Projection Layer",
"Fig. 2. Attention mechanism, MLP with l Layers",
"Table 2. The results considering F1 Score."
],
"file": [
"3-Table1-1.png",
"4-Figure1-1.png",
"5-Figure2-1.png",
"7-Table2-1.png"
]
} | [
"Which variation provides the best results on this dataset?",
"What are the different variations of the attention-based approach which are examined?",
"What dataset is used for this work?",
"What were the datasets used in this paper?"
] | [
[
"1909.13104-Experiments ::: Evaluation and Results-2"
],
[
"1909.13104-Experiments ::: Evaluation and Results-1",
"1909.13104-Proposed methodology ::: RNN Model and Attention Mechanism-7"
],
[
"1909.13104-Introduction-2"
],
[
"1909.13104-Dataset description-0",
"1909.13104-Introduction-2"
]
] | [
"the model with multi-attention mechanism and a projected layer",
"classic RNN model, avgRNN model, attentionRNN model and multiattention RNN model with and without a projected layer",
"The dataset from the SociaL Media And Harassment Competition of the ECML PKDD 2019 Conference.",
"Twitter dataset provided by organizers containing harassment and non-harassment tweets"
] | 113 |
2002.02070 | Understanding Car-Speak: Replacing Humans in Dealerships | A large portion of the car-buying experience in the United States involves interactions at a car dealership. At the dealership, the car-buyer relays their needs to a sales representative. However, most car-buyers only have an abstract description of the vehicle they need. Therefore, they are only able to describe their ideal car in "car-speak". Car-speak is abstract language that pertains to a car's physical attributes. In this paper, we define car-speak. We also aim to curate a reasonable data set of car-speak language. Finally, we train several classifiers in order to classify car-speak. | {
"paragraphs": [
[
"A large portion of the car-buying experience in the United States involves interactions at a car dealership BIBREF0, BIBREF1, BIBREF2. Traditionally, a car dealer listens and understands the needs of the client and helps them find what car is right based on their needs.",
"With the advent of the internet, many potential car buyers take to the web to research cars before going to a dealership in person BIBREF0, BIBREF2. However, nearly 50% of customers bought a car at the dealership based on the sales representative's advice, not their own research BIBREF1, BIBREF2.",
"Throughout this interaction the dealer is acting as a type of translator or classifier. The dealer takes a natural language input (e.g. “I need a fast, family friendly, reliable car under $20k”) and returns a list of suggestions. The dealer understands the ideas of “fast”, “family friendly”, and “reliable” and is able to come up with a reasonable recommendation based on this knowledge.",
"In this paper we aim to create a system that can understand car-speak based on some natural language input (we want to recreate the dealer from above). But how do we prepare a proper training set for a Natural Language model? What model is best suited to this problem? Can this model take a human out of the car-buying process? To answer these questions, the remainder of this paper makes the following contributions:",
"Defining “car-speak” and its role in the car-buying process.",
"Appropriate training data for a Natural Language model.",
"A model that is able to properly classify car-speak and return a car.",
"We aim to accomplish these goals in a scientific manner, using real data and modern methods."
],
[
"There has been some work done in the field of car-sales and dealer interactions. However, this is the first work that specifically focuses on the",
"Deloitte has published a report on the entire car-buying process BIBREF0. The report goes into great depth about the methods potential buyers use to find new cars to buy, and how they go about buying them. The report tells us that there are several unique phases that a potential buyer goes through before buying a car.",
"Verhoef et al. looked at the specifics of dealer interaction and how dealers retain customers BIBREF3. Verhoef tells us how important dealers are to the car-buying process. He also explains how influential a dealer can be on what car the buyer purchases.",
"Jeff Kershner compiled a series of statistics about Dealership Sales BIBREF1. These statistics focus on small social interactions BIBREF4 between the dealer and the buyer.",
"Barley explains the increasing role of technology in the car-buying process BIBREF2. Barley tells us that users prefer to use technology/robots to find the cars they want to buy instead of going to a dealer, due the distrust towards sales representatives."
],
[
"When a potential buyer begins to identify their next car-purchase they begin with identifying their needs. These needs often come in the form of an abstract situation, for instance, “I need a car that goes really fast”. This could mean that they need a car with a V8 engine type or a car that has 500 horsepower, but the buyer does not know that, all they know is that they need a “fast” car.",
"The term “fast” is car-speak. Car-speak is abstract language that pertains to a car's physical attribute(s). In this instance the physical attributes that the term “fast” pertains to could be the horsepower, or it could be the car's form factor (how the car looks). However, we do not know exactly which attributes the term “fast” refers to.",
"The use of car-speak is present throughout the car-buying process. It begins in the Research Phase where buyers identify their needs BIBREF0. When the buyer goes to a dealership to buy a car, they communicate with the dealer in similar car-speak BIBREF2 and convey their needs to the sales representative. Finally, the sales representative uses their internal classifier to translate this car-speak into actual physical attributes (e.g. `fast' $ \\longrightarrow $ `700 horsepower & a sleek form factor') and offers a car to the buyer.",
"Understanding car-speak is not a trivial task. Figure FIGREF4 shows two cars that have high top speeds, however both cars may not be considered “fast”. We need to mine the ideas that people have about cars in order to determine which cars are “fast” and which cars are not."
],
[
"We aim to curate a data set of car-speak in order to train a model properly. However, there are a few challenges that present themselves: What is a good source of car-speak? How can we acquire the data? How can we be sure the data set is relevant?",
"What is a good source of car-speak? We find plenty of car-speak in car reviews. Table TABREF5 provides excerpts from reviews with the car-speak terms bolded. Car reviews often describe cars in an abstract manner, which makes the review more useful for car-buyers. The reviews are often also about specific use-cases for each car (e.g. using the car to tow a trailer), so they capture all possible aspects of a car. The reviews are each written about a specific car, so we are able to map car-speak to a specific car model.",
"We choose the reviews from the U.S. News & World Report because they have easily accessible full-length reviews about every car that has been sold in the United States since 2006 BIBREF5.",
"How can we acquire the data? We can acquire this data using modern web-scraping tools such as beautiful-soup. The data is publicly available on https://cars.usnews.com/cars-trucks BIBREF5. These reviews also include a scorecard and justification of their reviews.",
"How can we be sure the data set is relevant? On average vehicles on United States roads are 11.6 years old, making the average manufacturing year 2006-2007 BIBREF6, BIBREF7. In order to have a relevant data set we gather all of the available reviews for car models made between the years 2000 and 2018."
],
[
"Our data set contains $3,209$ reviews about 553 different cars from 49 different car manufacturers. In order to accomplish our goal of translating and classifying car-speak we need to filter our data set so that we only have the most relevant terms. We then need to be able to weight each word in each review, so that we can determine the most relevant ideas in each document for the purpose of classification. Finally, we need to train various classification models and evaluate them."
],
[
"We would like to be able to represent each car with the most relevant car-speak terms. We can do this by filtering each review using the NLTK library BIBREF8, only retaining the most relevant words. First we token-ize each review and then keep only the nouns and adjectives from each review since they are the most salient parts of speech BIBREF9. This leaves us with $10,867$ words across all reviews. Figure FIGREF6 shows the frequency of the top 20 words that remain.",
"Words such as “saftey” and “luxury” are among the top words used in reviews. These words are very good examples of car-speak. Both words are abstract descriptions of cars, but both have physical characteristics that are associated with them as we discussed in Section SECREF3."
],
[
"So far we have compiled the most relevant terms in from the reviews. We now need to weight these terms for each review, so that we know the car-speak terms are most associated with a car. Using TF-IDF (Term Frequency-Inverse Document Frequency) has been used as a reliable metric for finding the relevant terms in a document BIBREF10.",
"We represent each review as a vector of TF-IDF scores for each word in the review. The length of this vector is $10,867$. We label each review vector with the car it reviews. We ignore the year of the car being reviewed and focus specifically on the model (i.e Acura ILX, not 2013 Acura ILX). This is because there a single model of car generally retains the same characteristics over time BIBREF11, BIBREF12."
],
[
"We train a series of classifiers in order to classify car-speak. We train three classifiers on the review vectors that we prepared in Section SECREF8. The classifiers we use are K Nearest Neighbors (KNN), Random Forest (RF), Support Vector Machine (SVM), and Multi-layer Perceptron (MLP) BIBREF13.",
"In order to evaluate our classifiers, we perform 4-fold cross validation on a shuffled data set. Table TABREF10 shows the F1 micro and F1 macro scores for all the classifiers. The KNN classifier seem to perform the best across all four metrics. This is probably due to the multi-class nature of the data set."
],
[
"In this paper we aim to provide an introductory understanding of car-speak and a way to automate car dealers at dealerships. We first provide a definition of “car-speak” in Section SECREF3. We explore what constitutes car-speak and how to identify car-speak.",
"We also gather a data set of car-speak to use for exploration and training purposes. This data set id full of vehicle reviews from U.S. News BIBREF5. These reviews provide a reasonable set of car-speak data that we can study.",
"Finally, we create and test several classifiers that are trained on the data we gathered. While these classifiers did not perform particularly well, they provide a good starting point for future work on this subject.",
"In the future we plan to use more complex models to attempt to understand car-speak. We also would like to test our classifiers on user-provided natural language queries. This would be a more practical evaluation of our classification. It would also satisfy the need for a computer system that understands car-speak."
]
],
"section_name": [
"Introduction",
"Related Work",
"What is Car-speak?",
"Gathering Car-speak Data",
"Translating Car-Speak",
"Translating Car-Speak ::: Filtering the Data",
"Translating Car-Speak ::: TF-IDF",
"Translating Car-Speak ::: Classification Experiments",
"Conclusion & Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"57c005faa3e141d0520a5c5e39a7ae074005b87d",
"e501cac4a83ea9656f60fe19999bad570c02506d"
],
"answer": [
{
"evidence": [
"The term “fast” is car-speak. Car-speak is abstract language that pertains to a car's physical attribute(s). In this instance the physical attributes that the term “fast” pertains to could be the horsepower, or it could be the car's form factor (how the car looks). However, we do not know exactly which attributes the term “fast” refers to.",
"We train a series of classifiers in order to classify car-speak. We train three classifiers on the review vectors that we prepared in Section SECREF8. The classifiers we use are K Nearest Neighbors (KNN), Random Forest (RF), Support Vector Machine (SVM), and Multi-layer Perceptron (MLP) BIBREF13."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Car-speak is abstract language that pertains to a car's physical attribute(s). In this instance the physical attributes that the term “fast” pertains to could be the horsepower, or it could be the car's form factor (how the car looks). However, we do not know exactly which attributes the term “fast” refers to.",
"We train a series of classifiers in order to classify car-speak. We train three classifiers on the review vectors that we prepared in Section SECREF8. The classifiers we use are K Nearest Neighbors (KNN), Random Forest (RF), Support Vector Machine (SVM), and Multi-layer Perceptron (MLP) BIBREF13."
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"The term “fast” is car-speak. Car-speak is abstract language that pertains to a car's physical attribute(s). In this instance the physical attributes that the term “fast” pertains to could be the horsepower, or it could be the car's form factor (how the car looks). However, we do not know exactly which attributes the term “fast” refers to.",
"Our data set contains $3,209$ reviews about 553 different cars from 49 different car manufacturers. In order to accomplish our goal of translating and classifying car-speak we need to filter our data set so that we only have the most relevant terms. We then need to be able to weight each word in each review, so that we can determine the most relevant ideas in each document for the purpose of classification. Finally, we need to train various classification models and evaluate them."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The term “fast” is car-speak. Car-speak is abstract language that pertains to a car's physical attribute(s). In this instance the physical attributes that the term “fast” pertains to could be the horsepower, or it could be the car's form factor (how the car looks). However, we do not know exactly which attributes the term “fast” refers to.",
"In order to accomplish our goal of translating and classifying car-speak we need to filter our data set so that we only have the most relevant terms. We then need to be able to weight each word in each review, so that we can determine the most relevant ideas in each document for the purpose of classification. Finally, we need to train various classification models and evaluate them."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"531f13e3a65795200fc5ab0bb8645dbf0df280ee",
"ea4e243641c657322b71a2a4fb70a527fa7f87e7"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"We would like to be able to represent each car with the most relevant car-speak terms. We can do this by filtering each review using the NLTK library BIBREF8, only retaining the most relevant words. First we token-ize each review and then keep only the nouns and adjectives from each review since they are the most salient parts of speech BIBREF9. This leaves us with $10,867$ words across all reviews. Figure FIGREF6 shows the frequency of the top 20 words that remain.",
"So far we have compiled the most relevant terms in from the reviews. We now need to weight these terms for each review, so that we know the car-speak terms are most associated with a car. Using TF-IDF (Term Frequency-Inverse Document Frequency) has been used as a reliable metric for finding the relevant terms in a document BIBREF10."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
" We can do this by filtering each review using the NLTK library BIBREF8, only retaining the most relevant words. First we token-ize each review and then keep only the nouns and adjectives from each review since they are the most salient parts of speech BIBREF9. This leaves us with $10,867$ words across all reviews.",
"Using TF-IDF (Term Frequency-Inverse Document Frequency) has been used as a reliable metric for finding the relevant terms in a document BIBREF10."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"820cf6fc2076beb25c8bb7c6838415265dc8f40b",
"dc1a8b8cb6334eef875752b078f005fa3826a25b"
],
"answer": [
{
"evidence": [
"We represent each review as a vector of TF-IDF scores for each word in the review. The length of this vector is $10,867$. We label each review vector with the car it reviews. We ignore the year of the car being reviewed and focus specifically on the model (i.e Acura ILX, not 2013 Acura ILX). This is because there a single model of car generally retains the same characteristics over time BIBREF11, BIBREF12."
],
"extractive_spans": [
"car "
],
"free_form_answer": "",
"highlighted_evidence": [
"We label each review vector with the car it reviews. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Our data set contains $3,209$ reviews about 553 different cars from 49 different car manufacturers. In order to accomplish our goal of translating and classifying car-speak we need to filter our data set so that we only have the most relevant terms. We then need to be able to weight each word in each review, so that we can determine the most relevant ideas in each document for the purpose of classification. Finally, we need to train various classification models and evaluate them.",
"We represent each review as a vector of TF-IDF scores for each word in the review. The length of this vector is $10,867$. We label each review vector with the car it reviews. We ignore the year of the car being reviewed and focus specifically on the model (i.e Acura ILX, not 2013 Acura ILX). This is because there a single model of car generally retains the same characteristics over time BIBREF11, BIBREF12."
],
"extractive_spans": [
"the car"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our data set contains $3,209$ reviews about 553 different cars from 49 different car manufacturers. In order to accomplish our goal of translating and classifying car-speak we need to filter our data set so that we only have the most relevant terms. We then need to be able to weight each word in each review, so that we can determine the most relevant ideas in each document for the purpose of classification.",
"We represent each review as a vector of TF-IDF scores for each word in the review. The length of this vector is $10,867$. We label each review vector with the car it reviews. We ignore the year of the car being reviewed and focus specifically on the model (i.e Acura ILX, not 2013 Acura ILX). "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"6b9b900a596cfe81f85eb4b016a78dcdcb661990",
"a7e44116c3d6871a537078734084e5b00bbe6ea8"
],
"answer": [
{
"evidence": [
"Our data set contains $3,209$ reviews about 553 different cars from 49 different car manufacturers. In order to accomplish our goal of translating and classifying car-speak we need to filter our data set so that we only have the most relevant terms. We then need to be able to weight each word in each review, so that we can determine the most relevant ideas in each document for the purpose of classification. Finally, we need to train various classification models and evaluate them."
],
"extractive_spans": [
"$3,209$ reviews "
],
"free_form_answer": "",
"highlighted_evidence": [
"Our data set contains $3,209$ reviews about 553 different cars from 49 different car manufacturers."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Our data set contains $3,209$ reviews about 553 different cars from 49 different car manufacturers. In order to accomplish our goal of translating and classifying car-speak we need to filter our data set so that we only have the most relevant terms. We then need to be able to weight each word in each review, so that we can determine the most relevant ideas in each document for the purpose of classification. Finally, we need to train various classification models and evaluate them."
],
"extractive_spans": [
"$3,209$ reviews about 553 different cars from 49 different car manufacturers"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our data set contains $3,209$ reviews about 553 different cars from 49 different car manufacturers. In order to accomplish our goal of translating and classifying car-speak we need to filter our data set so that we only have the most relevant terms."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"e7afe3d129c67b64dfa3c7c742caab4b969b3ccb",
"f5607a22d620c9a0858a29d2444d1981950eb6fe"
],
"answer": [
{
"evidence": [
"In order to evaluate our classifiers, we perform 4-fold cross validation on a shuffled data set. Table TABREF10 shows the F1 micro and F1 macro scores for all the classifiers. The KNN classifier seem to perform the best across all four metrics. This is probably due to the multi-class nature of the data set.",
"FLOAT SELECTED: Table 2: Evaluation metrics for all classifiers."
],
"extractive_spans": [
"Table TABREF10",
" The KNN classifier seem to perform the best across all four metrics. This is probably due to the multi-class nature of the data set",
" While these classifiers did not perform particularly well, they provide a good starting point for future work on this subject"
],
"free_form_answer": "",
"highlighted_evidence": [
"In order to evaluate our classifiers, we perform 4-fold cross validation on a shuffled data set. Table TABREF10 shows the F1 micro and F1 macro scores for all the classifiers. The KNN classifier seem to perform the best across all four metrics. This is probably due to the multi-class nature of the data set.",
"FLOAT SELECTED: Table 2: Evaluation metrics for all classifiers."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 2: Evaluation metrics for all classifiers."
],
"extractive_spans": [],
"free_form_answer": "Using F1 Micro measure, the KNN classifier perform 0.6762, the RF 0.6687, SVM 0.6712 and MLP 0.6778.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Evaluation metrics for all classifiers."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"23a17e180c21e2038c7158191379c6a4b6cb48b7",
"94475d71ec9057a0abd22e684e725bf2e28e42c1"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Evaluation metrics for all classifiers.",
"In order to evaluate our classifiers, we perform 4-fold cross validation on a shuffled data set. Table TABREF10 shows the F1 micro and F1 macro scores for all the classifiers. The KNN classifier seem to perform the best across all four metrics. This is probably due to the multi-class nature of the data set."
],
"extractive_spans": [],
"free_form_answer": "KNN\nRF\nSVM\nMLP",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Evaluation metrics for all classifiers.",
"In order to evaluate our classifiers, we perform 4-fold cross validation on a shuffled data set. Table TABREF10 shows the F1 micro and F1 macro scores for all the classifiers."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We train a series of classifiers in order to classify car-speak. We train three classifiers on the review vectors that we prepared in Section SECREF8. The classifiers we use are K Nearest Neighbors (KNN), Random Forest (RF), Support Vector Machine (SVM), and Multi-layer Perceptron (MLP) BIBREF13."
],
"extractive_spans": [
" K Nearest Neighbors (KNN)",
"Random Forest (RF)",
"Support Vector Machine (SVM)",
"Multi-layer Perceptron (MLP)"
],
"free_form_answer": "",
"highlighted_evidence": [
" The classifiers we use are K Nearest Neighbors (KNN), Random Forest (RF), Support Vector Machine (SVM), and Multi-layer Perceptron (MLP) BIBREF13."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"b4858ea7b21f6705974b7e02189349ea82f9d719"
],
"answer": [
{
"evidence": [
"The term “fast” is car-speak. Car-speak is abstract language that pertains to a car's physical attribute(s). In this instance the physical attributes that the term “fast” pertains to could be the horsepower, or it could be the car's form factor (how the car looks). However, we do not know exactly which attributes the term “fast” refers to."
],
"extractive_spans": [
"we do not know exactly"
],
"free_form_answer": "",
"highlighted_evidence": [
"Car-speak is abstract language that pertains to a car's physical attribute(s). In this instance the physical attributes that the term “fast” pertains to could be the horsepower, or it could be the car's form factor (how the car looks). However, we do not know exactly which attributes the term “fast” refers to."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"zero",
"zero",
"zero",
"zero",
"zero",
"zero",
"zero"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"Is car-speak language collection of abstract features that classifier is later trained on?",
"Is order of \"words\" important in car speak language?",
"What are labels in car speak language dataset?",
"How big is dataset of car-speak language?",
"What is the performance of classifiers?",
"What classifiers have been trained?",
"How does car speak pertains to a car's physical attributes?"
],
"question_id": [
"25c1c4a91f5dedd4e06d14121af3b5921db125e9",
"f88036174b4a0dbf4fe70ddad884d16082c5748d",
"a267d620af319b48e56c191aa4c433ea3870f6fb",
"899ed05c460bf2aa0aa65101cad1986d4f622652",
"d53299fac8c94bd0179968eb868506124af407d1",
"29f2954098f055fb19d9502572f085862d75bf61",
"6bf93968110c6e3e3640360440607744007a5228"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
"",
"",
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Both of these cars can achieve high speeds. Which is “fast”?",
"Table 1: Excerpts from car reviews.",
"Figure 2: The frequencies of the top 20 words in reviews.",
"Table 2: Evaluation metrics for all classifiers."
],
"file": [
"1-Figure1-1.png",
"2-Table1-1.png",
"2-Figure2-1.png",
"3-Table2-1.png"
]
} | [
"What is the performance of classifiers?",
"What classifiers have been trained?"
] | [
[
"2002.02070-Translating Car-Speak ::: Classification Experiments-1",
"2002.02070-3-Table2-1.png"
],
[
"2002.02070-Translating Car-Speak ::: Classification Experiments-1",
"2002.02070-3-Table2-1.png",
"2002.02070-Translating Car-Speak ::: Classification Experiments-0"
]
] | [
"Using F1 Micro measure, the KNN classifier perform 0.6762, the RF 0.6687, SVM 0.6712 and MLP 0.6778.",
"KNN\nRF\nSVM\nMLP"
] | 114 |
1611.03599 | UTCNN: a Deep Learning Model of Stance Classification on Social Media Text | Most neural network models for document classification on social media focus on text information to the neglect of other information on these platforms. In this paper, we classify post stance on social media channels and develop UTCNN, a neural network model that incorporates user tastes, topic tastes, and user comments on posts. UTCNN not only works on social media texts, but also analyzes texts in forums and message boards. Experiments performed on Chinese Facebook data and English online debate forum data show that UTCNN achieves a 0.755 macro-average f-score for supportive, neutral, and unsupportive stance classes on Facebook data, which is significantly better than models in which either user, topic, or comment information is withheld. This model design greatly mitigates the lack of data for the minor class without the use of oversampling. In addition, UTCNN yields a 0.842 accuracy on English online debate forum data, which also significantly outperforms results from previous work as well as other deep learning models, showing that UTCNN performs well regardless of language or platform. | {
"paragraphs": [
[
" This work is licenced under a Creative Commons Attribution 4.0 International License. License details: http://creativecommons.org/licenses/by/4.0/ Deep neural networks have been widely used in text classification and have achieved promising results BIBREF0 , BIBREF1 , BIBREF2 . Most focus on content information and use models such as convolutional neural networks (CNN) BIBREF3 or recursive neural networks BIBREF4 . However, for user-generated posts on social media like Facebook or Twitter, there is more information that should not be ignored. On social media platforms, a user can act either as the author of a post or as a reader who expresses his or her comments about the post.",
"In this paper, we classify posts taking into account post authorship, likes, topics, and comments. In particular, users and their “likes” hold strong potential for text mining. For example, given a set of posts that are related to a specific topic, a user's likes and dislikes provide clues for stance labeling. From a user point of view, users with positive attitudes toward the issue leave positive comments on the posts with praise or even just the post's content; from a post point of view, positive posts attract users who hold positive stances. We also investigate the influence of topics: different topics are associated with different stance labeling tendencies and word usage. For example we discuss women's rights and unwanted babies on the topic of abortion, but we criticize medicine usage or crime when on the topic of marijuana BIBREF5 . Even for posts on a specific topic like nuclear power, a variety of arguments are raised: green energy, radiation, air pollution, and so on. As for comments, we treat them as additional text information. The arguments in the comments and the commenters (the users who leave the comments) provide hints on the post's content and further facilitate stance classification.",
"In this paper, we propose the user-topic-comment neural network (UTCNN), a deep learning model that utilizes user, topic, and comment information. We attempt to learn user and topic representations which encode user interactions and topic influences to further enhance text classification, and we also incorporate comment information. We evaluate this model on a post stance classification task on forum-style social media platforms. The contributions of this paper are as follows: 1. We propose UTCNN, a neural network for text in modern social media channels as well as legacy social media, forums, and message boards — anywhere that reveals users, their tastes, as well as their replies to posts. 2. When classifying social media post stances, we leverage users, including authors and likers. User embeddings can be generated even for users who have never posted anything. 3. We incorporate a topic model to automatically assign topics to each post in a single topic dataset. 4. We show that overall, the proposed method achieves the highest performance in all instances, and that all of the information extracted, whether users, topics, or comments, still has its contributions."
],
[
"In this paper we aim to use text as well as other features to see how they complement each other in a deep learning model. In the stance classification domain, previous work has showed that text features are limited, suggesting that adding extra-linguistic constraints could improve performance BIBREF6 , BIBREF7 , BIBREF8 . For example, Hasan and Ng as well as Thomas et al. require that posts written by the same author have the same stance BIBREF9 , BIBREF10 . The addition of this constraint yields accuracy improvements of 1–7% for some models and datasets. Hasan and Ng later added user-interaction constraints and ideology constraints BIBREF7 : the former models the relationship among posts in a sequence of replies and the latter models inter-topic relationships, e.g., users who oppose abortion could be conservative and thus are likely to oppose gay rights.",
"For work focusing on online forum text, since posts are linked through user replies, sequential labeling methods have been used to model relationships between posts. For example, Hasan and Ng use hidden Markov models (HMMs) to model dependent relationships to the preceding post BIBREF9 ; Burfoot et al. use iterative classification to repeatedly generate new estimates based on the current state of knowledge BIBREF11 ; Sridhar et al. use probabilistic soft logic (PSL) to model reply links via collaborative filtering BIBREF12 . In the Facebook dataset we study, we use comments instead of reply links. However, as the ultimate goal in this paper is predicting not comment stance but post stance, we treat comments as extra information for use in predicting post stance."
],
[
"In recent years neural network models have been applied to document sentiment classification BIBREF13 , BIBREF4 , BIBREF14 , BIBREF15 , BIBREF2 . Text features can be used in deep networks to capture text semantics or sentiment. For example, Dong et al. use an adaptive layer in a recursive neural network for target-dependent Twitter sentiment analysis, where targets are topics such as windows 7 or taylor swift BIBREF16 , BIBREF17 ; recursive neural tensor networks (RNTNs) utilize sentence parse trees to capture sentence-level sentiment for movie reviews BIBREF4 ; Le and Mikolov predict sentiment by using paragraph vectors to model each paragraph as a continuous representation BIBREF18 . They show that performance can thus be improved by more delicate text models.",
"Others have suggested using extra-linguistic features to improve the deep learning model. The user-word composition vector model (UWCVM) BIBREF19 is inspired by the possibility that the strength of sentiment words is user-specific; to capture this they add user embeddings in their model. In UPNN, a later extension, they further add a product-word composition as product embeddings, arguing that products can also show different tendencies of being rated or reviewed BIBREF20 . Their addition of user information yielded 2–10% improvements in accuracy as compared to the above-mentioned RNTN and paragraph vector methods. We also seek to inject user information into the neural network model. In comparison to the research of Tang et al. on sentiment classification for product reviews, the difference is two-fold. First, we take into account multiple users (one author and potentially many likers) for one post, whereas only one user (the reviewer) is involved in a review. Second, we add comment information to provide more features for post stance classification. None of these two factors have been considered previously in a deep learning model for text stance classification. Therefore, we propose UTCNN, which generates and utilizes user embeddings for all users — even for those who have not authored any posts — and incorporates comments to further improve performance."
],
[
"In this section, we first describe CNN-based document composition, which captures user- and topic-dependent document-level semantic representation from word representations. Then we show how to add comment information to construct the user-topic-comment neural network (UTCNN)."
],
[
"As shown in Figure FIGREF4 , we use a general CNN BIBREF3 and two semantic transformations for document composition . We are given a document with an engaged user INLINEFORM0 , a topic INLINEFORM1 , and its composite INLINEFORM2 words, each word INLINEFORM3 of which is associated with a word embedding INLINEFORM4 where INLINEFORM5 is the vector dimension. For each word embedding INLINEFORM6 , we apply two dot operations as shown in Equation EQREF6 : DISPLAYFORM0 ",
"where INLINEFORM0 models the user reading preference for certain semantics, and INLINEFORM1 models the topic semantics; INLINEFORM2 and INLINEFORM3 are the dimensions of transformed user and topic embeddings respectively. We use INLINEFORM4 to model semantically what each user prefers to read and/or write, and use INLINEFORM5 to model the semantics of each topic. The dot operation of INLINEFORM6 and INLINEFORM7 transforms the global representation INLINEFORM8 to a user-dependent representation. Likewise, the dot operation of INLINEFORM9 and INLINEFORM10 transforms INLINEFORM11 to a topic-dependent representation.",
"After the two dot operations on INLINEFORM0 , we have user-dependent and topic-dependent word vectors INLINEFORM1 and INLINEFORM2 , which are concatenated to form a user- and topic-dependent word vector INLINEFORM3 . Then the transformed word embeddings INLINEFORM4 are used as the CNN input. Here we apply three convolutional layers on the concatenated transformed word embeddings INLINEFORM5 : DISPLAYFORM0 ",
"where INLINEFORM0 is the index of words; INLINEFORM1 is a non-linear activation function (we use INLINEFORM2 ); INLINEFORM5 is the convolutional filter with input length INLINEFORM6 and output length INLINEFORM7 , where INLINEFORM8 is the window size of the convolutional operation; and INLINEFORM9 and INLINEFORM10 are the output and bias of the convolution layer INLINEFORM11 , respectively. In our experiments, the three window sizes INLINEFORM12 in the three convolution layers are one, two, and three, encoding unigram, bigram, and trigram semantics accordingly.",
"After the convolutional layer, we add a maximum pooling layer among convolutional outputs to obtain the unigram, bigram, and trigram n-gram representations. This is succeeded by an average pooling layer for an element-wise average of the three maximized convolution outputs."
],
[
"Figure FIGREF10 illustrates the UTCNN model. As more than one user may interact with a given post, we first add a maximum pooling layer after the user matrix embedding layer and user vector embedding layer to form a moderator matrix embedding INLINEFORM0 and a moderator vector embedding INLINEFORM1 for moderator INLINEFORM2 respectively, where INLINEFORM3 is used for the semantic transformation in the document composition process, as mentioned in the previous section. The term moderator here is to denote the pseudo user who provides the overall semantic/sentiment of all the engaged users for one document. The embedding INLINEFORM4 models the moderator stance preference, that is, the pattern of the revealed user stance: whether a user is willing to show his preference, whether a user likes to show impartiality with neutral statements and reasonable arguments, or just wants to show strong support for one stance. Ideally, the latent user stance is modeled by INLINEFORM5 for each user. Likewise, for topic information, a maximum pooling layer is added after the topic matrix embedding layer and topic vector embedding layer to form a joint topic matrix embedding INLINEFORM6 and a joint topic vector embedding INLINEFORM7 for topic INLINEFORM8 respectively, where INLINEFORM9 models the semantic transformation of topic INLINEFORM10 as in users and INLINEFORM11 models the topic stance tendency. The latent topic stance is also modeled by INLINEFORM12 for each topic.",
"As for comments, we view them as short documents with authors only but without likers nor their own comments. Therefore we apply document composition on comments although here users are commenters (users who comment). It is noticed that the word embeddings INLINEFORM0 for the same word in the posts and comments are the same, but after being transformed to INLINEFORM1 in the document composition process shown in Figure FIGREF4 , they might become different because of their different engaged users. The output comment representation together with the commenter vector embedding INLINEFORM2 and topic vector embedding INLINEFORM3 are concatenated and a maximum pooling layer is added to select the most important feature for comments. Instead of requiring that the comment stance agree with the post, UTCNN simply extracts the most important features of the comment contents; they could be helpful, whether they show obvious agreement or disagreement. Therefore when combining comment information here, the maximum pooling layer is more appropriate than other pooling or merging layers. Indeed, we believe this is one reason for UTCNN's performance gains.",
"Finally, the pooled comment representation, together with user vector embedding INLINEFORM0 , topic vector embedding INLINEFORM1 , and document representation are fed to a fully connected network, and softmax is applied to yield the final stance label prediction for the post."
],
[
"We start with the experimental dataset and then describe the training process as well as the implementation of the baselines. We also implement several variations to reveal the effects of features: authors, likers, comment, and commenters. In the results section we compare our model with related work."
],
[
"We tested the proposed UTCNN on two different datasets: FBFans and CreateDebate. FBFans is a privately-owned, single-topic, Chinese, unbalanced, social media dataset, and CreateDebate is a public, multiple-topic, English, balanced, forum dataset. Results using these two datasets show the applicability and superiority for different topics, languages, data distributions, and platforms.",
"The FBFans dataset contains data from anti-nuclear-power Chinese Facebook fan groups from September 2013 to August 2014, including posts and their author and liker IDs. There are a total of 2,496 authors, 505,137 likers, 33,686 commenters, and 505,412 unique users. Two annotators were asked to take into account only the post content to label the stance of the posts in the whole dataset as supportive, neutral, or unsupportive (hereafter denoted as Sup, Neu, and Uns). Sup/Uns posts were those in support of or against anti-reconstruction; Neu posts were those evincing a neutral standpoint on the topic, or were irrelevant. Raw agreement between annotators is 0.91, indicating high agreement. Specifically, Cohen’s Kappa for Neu and not Neu labeling is 0.58 (moderate), and for Sup or Uns labeling is 0.84 (almost perfect). Posts with inconsistent labels were filtered out, and the development and testing sets were randomly selected from what was left. Posts in the development and testing sets involved at least one user who appeared in the training set. The number of posts for each stance is shown on the left-hand side of Table TABREF12 . About twenty percent of the posts were labeled with a stance, and the number of supportive (Sup) posts was much larger than that of the unsupportive (Uns) ones: this is thus highly skewed data, which complicates stance classification. On average, 161.1 users were involved in one post. The maximum was 23,297 and the minimum was one (the author). For comments, on average there were 3 comments per post. The maximum was 1,092 and the minimum was zero.",
"To test whether the assumption of this paper – posts attract users who hold the same stance to like them – is reliable, we examine the likes from authors of different stances. Posts in FBFans dataset are used for this analysis. We calculate the like statistics of each distinct author from these 32,595 posts. As the numbers of authors in the Sup, Neu and Uns stances are largely imbalanced, these numbers are normalized by the number of users of each stance. Table TABREF13 shows the results. Posts with stances (i.e., not neutral) attract users of the same stance. Neutral posts also attract both supportive and neutral users, like what we observe in supportive posts, but just the neutral posts can attract even more neutral likers. These results do suggest that users prefer posts of the same stance, or at least posts of no obvious stance which might cause annoyance when reading, and hence support the user modeling in our approach.",
"The CreateDebate dataset was collected from an English online debate forum discussing four topics: abortion (ABO), gay rights (GAY), Obama (OBA), and marijuana (MAR). The posts are annotated as for (F) and against (A). Replies to posts in this dataset are also labeled with stance and hence use the same data format as posts. The labeling results are shown in the right-hand side of Table TABREF12 . We observe that the dataset is more balanced than the FBFans dataset. In addition, there are 977 unique users in the dataset. To compare with Hasan and Ng's work, we conducted five-fold cross-validation and present the annotation results as the average number of all folds BIBREF9 , BIBREF5 .",
"The FBFans dataset has more integrated functions than the CreateDebate dataset; thus our model can utilize all linguistic and extra-linguistic features. For the CreateDebate dataset, on the other hand, the like and comment features are not available (as there is a stance label for each reply, replies are evaluated as posts as other previous work) but we still implemented our model using the content, author, and topic information."
],
[
"In the UTCNN training process, cross-entropy was used as the loss function and AdaGrad as the optimizer. For FBFans dataset, we learned the 50-dimensional word embeddings on the whole dataset using GloVe BIBREF21 to capture the word semantics; for CreateDebate dataset we used the publicly available English 50-dimensional word embeddings, pre-trained also using GloVe. These word embeddings were fixed in the training process. The learning rate was set to 0.03. All user and topic embeddings were randomly initialized in the range of [-0.1 0.1]. Matrix embeddings for users and topics were sized at 250 ( INLINEFORM0 ); vector embeddings for users and topics were set to length 10.",
"We applied the LDA topic model BIBREF22 on the FBFans dataset to determine the latent topics with which to build topic embeddings, as there is only one general known topic: nuclear power plants. We learned 100 latent topics and assigned the top three topics for each post. For the CreateDebate dataset, which itself constitutes four topics, the topic labels for posts were used directly without additionally applying LDA.",
"For the FBFans data we report class-based f-scores as well as the macro-average f-score ( INLINEFORM0 ) shown in equation EQREF19 . DISPLAYFORM0 ",
"where INLINEFORM0 and INLINEFORM1 are the average precision and recall of the three class. We adopted the macro-average f-score as the evaluation metric for the overall performance because (1) the experimental dataset is severely imbalanced, which is common for contentious issues; and (2) for stance classification, content in minor-class posts is usually more important for further applications. For the CreateDebate dataset, accuracy was adopted as the evaluation metric to compare the results with related work BIBREF7 , BIBREF9 , BIBREF12 ."
],
[
"We pit our model against the following baselines: 1) SVM with unigram, bigram, and trigram features, which is a standard yet rather strong classifier for text features; 2) SVM with average word embedding, where a document is represented as a continuous representation by averaging the embeddings of the composite words; 3) SVM with average transformed word embeddings (the INLINEFORM0 in equation EQREF6 ), where a document is represented as a continuous representation by averaging the transformed embeddings of the composite words; 4) two mature deep learning models on text classification, CNN BIBREF3 and Recurrent Convolutional Neural Networks (RCNN) BIBREF0 , where the hyperparameters are based on their work; 5) the above SVM and deep learning models with comment information; 6) UTCNN without user information, representing a pure-text CNN model where we use the same user matrix and user embeddings INLINEFORM1 and INLINEFORM2 for each user; 7) UTCNN without the LDA model, representing how UTCNN works with a single-topic dataset; 8) UTCNN without comments, in which the model predicts the stance label given only user and topic information. All these models were trained on the training set, and parameters as well as the SVM kernel selections (linear or RBF) were fine-tuned on the development set. Also, we adopt oversampling on SVMs, CNN and RCNN because the FBFans dataset is highly imbalanced."
],
[
"In Table TABREF22 we show the results of UTCNN and the baselines on the FBFans dataset. Here Majority yields good performance on Neu since FBFans is highly biased to the neutral class. The SVM models perform well on Sup and Neu but perform poorly for Uns, showing that content information in itself is insufficient to predict stance labels, especially for the minor class. With the transformed word embedding feature, SVM can achieve comparable performance as SVM with n-gram feature. However, the much fewer feature dimension of the transformed word embedding makes SVM with word embeddings a more efficient choice for modeling the large scale social media dataset. For the CNN and RCNN models, they perform slightly better than most of the SVM models but still, the content information is insufficient to achieve a good performance on the Uns posts. As to adding comment information to these models, since the commenters do not always hold the same stance as the author, simply adding comments and post contents together merely adds noise to the model.",
"Among all UTCNN variations, we find that user information is most important, followed by topic and comment information. UTCNN without user information shows results similar to SVMs — it does well for Sup and Neu but detects no Uns. Its best f-scores on both Sup and Neu among all methods show that with enough training data, content-based models can perform well; at the same time, the lack of user information results in too few clues for minor-class posts to either predict their stance directly or link them to other users and posts for improved performance. The 17.5% improvement when adding user information suggests that user information is especially useful when the dataset is highly imbalanced. All models that consider user information predict the minority class successfully. UCTNN without topic information works well but achieves lower performance than the full UTCNN model. The 4.9% performance gain brought by LDA shows that although it is satisfactory for single topic datasets, adding that latent topics still benefits performance: even when we are discussing the same topic, we use different arguments and supporting evidence. Lastly, we get 4.8% improvement when adding comment information and it achieves comparable performance to UTCNN without topic information, which shows that comments also benefit performance. For platforms where user IDs are pixelated or otherwise hidden, adding comments to a text model still improves performance. In its integration of user, content, and comment information, the full UTCNN produces the highest f-scores on all Sup, Neu, and Uns stances among models that predict the Uns class, and the highest macro-average f-score overall. This shows its ability to balance a biased dataset and supports our claim that UTCNN successfully bridges content and user, topic, and comment information for stance classification on social media text. Another merit of UTCNN is that it does not require a balanced training data. This is supported by its outperforming other models though no oversampling technique is applied to the UTCNN related experiments as shown in this paper. Thus we can conclude that the user information provides strong clues and it is still rich even in the minority class.",
"We also investigate the semantic difference when a user acts as an author/liker or a commenter. We evaluated a variation in which all embeddings from the same user were forced to be identical (this is the UTCNN shared user embedding setting in Table TABREF22 ). This setting yielded only a 2.5% improvement over the model without comments, which is not statistically significant. However, when separating authors/likers and commenters embeddings (i.e., the UTCNN full model), we achieved much greater improvements (4.8%). We attribute this result to the tendency of users to use different wording for different roles (for instance author vs commenter). This is observed when the user, acting as an author, attempts to support her argument against nuclear power by using improvements in solar power; when acting as a commenter, though, she interacts with post contents by criticizing past politicians who supported nuclear power or by arguing that the proposed evacuation plan in case of a nuclear accident is ridiculous. Based on this finding, in the final UTCNN setting we train two user matrix embeddings for one user: one for the author/liker role and the other for the commenter role."
],
[
"Table TABREF24 shows the results of UTCNN, baselines as we implemented on the FBFans datset and related work on the CreateDebate dataset. We do not adopt oversampling on these models because the CreateDebate dataset is almost balanced. In previous work, integer linear programming (ILP) or linear-chain conditional random fields (CRFs) were proposed to integrate text features, author, ideology, and user-interaction constraints, where text features are unigram, bigram, and POS-dependencies; the author constraint tends to require that posts from the same author for the same topic hold the same stance; the ideology constraint aims to capture inferences between topics for the same author; the user-interaction constraint models relationships among posts via user interactions such as replies BIBREF7 , BIBREF9 .",
"The SVM with n-gram or average word embedding feature performs just similar to the majority. However, with the transformed word embedding, it achieves superior results. It shows that the learned user and topic embeddings really capture the user and topic semantics. This finding is not so obvious in the FBFans dataset and it might be due to the unfavorable data skewness for SVM. As for CNN and RCNN, they perform slightly better than most SVMs as we found in Table TABREF22 for FBFans.",
"Compared to the ILP BIBREF7 and CRF BIBREF9 methods, the UTCNN user embeddings encode author and user-interaction constraints, where the ideology constraint is modeled by the topic embeddings and text features are modeled by the CNN. The significant improvement achieved by UTCNN suggests the latent representations are more effective than overt model constraints.",
"The PSL model BIBREF12 jointly labels both author and post stance using probabilistic soft logic (PSL) BIBREF23 by considering text features and reply links between authors and posts as in Hasan and Ng's work. Table TABREF24 reports the result of their best AD setting, which represents the full joint stance/disagreement collective model on posts and is hence more relevant to UTCNN. In contrast to their model, the UTCNN user embeddings represent relationships between authors, but UTCNN models do not utilize link information between posts. Though the PSL model has the advantage of being able to jointly label the stances of authors and posts, its performance on posts is lower than the that for the ILP or CRF models. UTCNN significantly outperforms these models on posts and has the potential to predict user stances through the generated user embeddings.",
"For the CreateDebate dataset, we also evaluated performance when not using topic embeddings or user embeddings; as replies in this dataset are viewed as posts, the setting without comment embeddings is not available. Table TABREF24 shows the same findings as Table TABREF22 : the 21% improvement in accuracy demonstrates that user information is the most vital. This finding also supports the results in the related work: user constraints are useful and can yield 11.2% improvement in accuracy BIBREF7 . Further considering topic information yields 3.4% improvement, suggesting that knowing the subject of debates provides useful information. In sum, Table TABREF22 together with Table TABREF24 show that UTCNN achieves promising performance regardless of topic, language, data distribution, and platform."
],
[
"We have proposed UTCNN, a neural network model that incorporates user, topic, content and comment information for stance classification on social media texts. UTCNN learns user embeddings for all users with minimum active degree, i.e., one post or one like. Topic information obtained from the topic model or the pre-defined labels further improves the UTCNN model. In addition, comment information provides additional clues for stance classification. We have shown that UTCNN achieves promising and balanced results. In the future we plan to explore the effectiveness of the UTCNN user embeddings for author stance classification."
],
[
"Research of this paper was partially supported by Ministry of Science and Technology, Taiwan, under the contract MOST 104-2221-E-001-024-MY2."
]
],
"section_name": [
"Introduction",
"Extra-Linguistic Features for Stance Classification",
"Deep Learning on Extra-Linguistic Features",
"Method",
"User- and Topic-dependent Document Composition",
"UTCNN Model Description",
"Experiment",
"Dataset",
"Settings",
"Baselines",
"Results on FBFans Dataset",
"Results on CreateDebate Dataset",
"Conclusion",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"08c093860f115e2b178c70128098ea69a1430ab8",
"78a382b17b1c41f970eb19f3cdf32c8750f2c46e"
],
"answer": [
{
"evidence": [
"The FBFans dataset contains data from anti-nuclear-power Chinese Facebook fan groups from September 2013 to August 2014, including posts and their author and liker IDs. There are a total of 2,496 authors, 505,137 likers, 33,686 commenters, and 505,412 unique users. Two annotators were asked to take into account only the post content to label the stance of the posts in the whole dataset as supportive, neutral, or unsupportive (hereafter denoted as Sup, Neu, and Uns). Sup/Uns posts were those in support of or against anti-reconstruction; Neu posts were those evincing a neutral standpoint on the topic, or were irrelevant. Raw agreement between annotators is 0.91, indicating high agreement. Specifically, Cohen’s Kappa for Neu and not Neu labeling is 0.58 (moderate), and for Sup or Uns labeling is 0.84 (almost perfect). Posts with inconsistent labels were filtered out, and the development and testing sets were randomly selected from what was left. Posts in the development and testing sets involved at least one user who appeared in the training set. The number of posts for each stance is shown on the left-hand side of Table TABREF12 . About twenty percent of the posts were labeled with a stance, and the number of supportive (Sup) posts was much larger than that of the unsupportive (Uns) ones: this is thus highly skewed data, which complicates stance classification. On average, 161.1 users were involved in one post. The maximum was 23,297 and the minimum was one (the author). For comments, on average there were 3 comments per post. The maximum was 1,092 and the minimum was zero."
],
"extractive_spans": [
"anti-nuclear-power"
],
"free_form_answer": "",
"highlighted_evidence": [
"The FBFans dataset contains data from anti-nuclear-power Chinese Facebook fan groups from September 2013 to August 2014, including posts and their author and liker IDs."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The FBFans dataset contains data from anti-nuclear-power Chinese Facebook fan groups from September 2013 to August 2014, including posts and their author and liker IDs. There are a total of 2,496 authors, 505,137 likers, 33,686 commenters, and 505,412 unique users. Two annotators were asked to take into account only the post content to label the stance of the posts in the whole dataset as supportive, neutral, or unsupportive (hereafter denoted as Sup, Neu, and Uns). Sup/Uns posts were those in support of or against anti-reconstruction; Neu posts were those evincing a neutral standpoint on the topic, or were irrelevant. Raw agreement between annotators is 0.91, indicating high agreement. Specifically, Cohen’s Kappa for Neu and not Neu labeling is 0.58 (moderate), and for Sup or Uns labeling is 0.84 (almost perfect). Posts with inconsistent labels were filtered out, and the development and testing sets were randomly selected from what was left. Posts in the development and testing sets involved at least one user who appeared in the training set. The number of posts for each stance is shown on the left-hand side of Table TABREF12 . About twenty percent of the posts were labeled with a stance, and the number of supportive (Sup) posts was much larger than that of the unsupportive (Uns) ones: this is thus highly skewed data, which complicates stance classification. On average, 161.1 users were involved in one post. The maximum was 23,297 and the minimum was one (the author). For comments, on average there were 3 comments per post. The maximum was 1,092 and the minimum was zero."
],
"extractive_spans": [
"anti-nuclear-power"
],
"free_form_answer": "",
"highlighted_evidence": [
"The FBFans dataset contains data from anti-nuclear-power Chinese Facebook fan groups from September 2013 to August 2014, including posts and their author and liker IDs. There are a total of 2,496 authors, 505,137 likers, 33,686 commenters, and 505,412 unique users. Two annotators were asked to take into account only the post content to label the stance of the posts in the whole dataset as supportive, neutral, or unsupportive (hereafter denoted as Sup, Neu, and Uns). Sup/Uns posts were those in support of or against anti-reconstruction; Neu posts were those evincing a neutral standpoint on the topic, or were irrelevant. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"87ce7f55c6ac51ba04c9a584ca7de029f0496d87"
],
"answer": [
{
"evidence": [
"Figure FIGREF10 illustrates the UTCNN model. As more than one user may interact with a given post, we first add a maximum pooling layer after the user matrix embedding layer and user vector embedding layer to form a moderator matrix embedding INLINEFORM0 and a moderator vector embedding INLINEFORM1 for moderator INLINEFORM2 respectively, where INLINEFORM3 is used for the semantic transformation in the document composition process, as mentioned in the previous section. The term moderator here is to denote the pseudo user who provides the overall semantic/sentiment of all the engaged users for one document. The embedding INLINEFORM4 models the moderator stance preference, that is, the pattern of the revealed user stance: whether a user is willing to show his preference, whether a user likes to show impartiality with neutral statements and reasonable arguments, or just wants to show strong support for one stance. Ideally, the latent user stance is modeled by INLINEFORM5 for each user. Likewise, for topic information, a maximum pooling layer is added after the topic matrix embedding layer and topic vector embedding layer to form a joint topic matrix embedding INLINEFORM6 and a joint topic vector embedding INLINEFORM7 for topic INLINEFORM8 respectively, where INLINEFORM9 models the semantic transformation of topic INLINEFORM10 as in users and INLINEFORM11 models the topic stance tendency. The latent topic stance is also modeled by INLINEFORM12 for each topic.",
"As for comments, we view them as short documents with authors only but without likers nor their own comments. Therefore we apply document composition on comments although here users are commenters (users who comment). It is noticed that the word embeddings INLINEFORM0 for the same word in the posts and comments are the same, but after being transformed to INLINEFORM1 in the document composition process shown in Figure FIGREF4 , they might become different because of their different engaged users. The output comment representation together with the commenter vector embedding INLINEFORM2 and topic vector embedding INLINEFORM3 are concatenated and a maximum pooling layer is added to select the most important feature for comments. Instead of requiring that the comment stance agree with the post, UTCNN simply extracts the most important features of the comment contents; they could be helpful, whether they show obvious agreement or disagreement. Therefore when combining comment information here, the maximum pooling layer is more appropriate than other pooling or merging layers. Indeed, we believe this is one reason for UTCNN's performance gains.",
"Finally, the pooled comment representation, together with user vector embedding INLINEFORM0 , topic vector embedding INLINEFORM1 , and document representation are fed to a fully connected network, and softmax is applied to yield the final stance label prediction for the post."
],
"extractive_spans": [],
"free_form_answer": "eight layers",
"highlighted_evidence": [
"Figure FIGREF10 illustrates the UTCNN model. As more than one user may interact with a given post, we first add a maximum pooling layer after the user matrix embedding layer and user vector embedding layer to form a moderator matrix embedding INLINEFORM0 and a moderator vector embedding INLINEFORM1 for moderator INLINEFORM2 respectively, where INLINEFORM3 is used for the semantic transformation in the document composition process, as mentioned in the previous section. The term moderator here is to denote the pseudo user who provides the overall semantic/sentiment of all the engaged users for one document. The embedding INLINEFORM4 models the moderator stance preference, that is, the pattern of the revealed user stance: whether a user is willing to show his preference, whether a user likes to show impartiality with neutral statements and reasonable arguments, or just wants to show strong support for one stance. Ideally, the latent user stance is modeled by INLINEFORM5 for each user. Likewise, for topic information, a maximum pooling layer is added after the topic matrix embedding layer and topic vector embedding layer to form a joint topic matrix embedding INLINEFORM6 and a joint topic vector embedding INLINEFORM7 for topic INLINEFORM8 respectively, where INLINEFORM9 models the semantic transformation of topic INLINEFORM10 as in users and INLINEFORM11 models the topic stance tendency. The latent topic stance is also modeled by INLINEFORM12 for each topic.\n\nAs for comments, we view them as short documents with authors only but without likers nor their own comments. Therefore we apply document composition on comments although here users are commenters (users who comment). It is noticed that the word embeddings INLINEFORM0 for the same word in the posts and comments are the same, but after being transformed to INLINEFORM1 in the document composition process shown in Figure FIGREF4 , they might become different because of their different engaged users. The output comment representation together with the commenter vector embedding INLINEFORM2 and topic vector embedding INLINEFORM3 are concatenated and a maximum pooling layer is added to select the most important feature for comments. Instead of requiring that the comment stance agree with the post, UTCNN simply extracts the most important features of the comment contents; they could be helpful, whether they show obvious agreement or disagreement. Therefore when combining comment information here, the maximum pooling layer is more appropriate than other pooling or merging layers. Indeed, we believe this is one reason for UTCNN's performance gains.\n\nFinally, the pooled comment representation, together with user vector embedding INLINEFORM0 , topic vector embedding INLINEFORM1 , and document representation are fed to a fully connected network, and softmax is applied to yield the final stance label prediction for the post."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"31c82ccf7692b2fe2aa345c66a99dbe4ff88eaac",
"e713cf9d9a988c51b2bdad4b3173cad93d75b72c"
],
"answer": [
{
"evidence": [
"The CreateDebate dataset was collected from an English online debate forum discussing four topics: abortion (ABO), gay rights (GAY), Obama (OBA), and marijuana (MAR). The posts are annotated as for (F) and against (A). Replies to posts in this dataset are also labeled with stance and hence use the same data format as posts. The labeling results are shown in the right-hand side of Table TABREF12 . We observe that the dataset is more balanced than the FBFans dataset. In addition, there are 977 unique users in the dataset. To compare with Hasan and Ng's work, we conducted five-fold cross-validation and present the annotation results as the average number of all folds BIBREF9 , BIBREF5 ."
],
"extractive_spans": [
"abortion",
"gay rights",
"Obama",
"marijuana"
],
"free_form_answer": "",
"highlighted_evidence": [
"The CreateDebate dataset was collected from an English online debate forum discussing four topics: abortion (ABO), gay rights (GAY), Obama (OBA), and marijuana (MAR). "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The CreateDebate dataset was collected from an English online debate forum discussing four topics: abortion (ABO), gay rights (GAY), Obama (OBA), and marijuana (MAR). The posts are annotated as for (F) and against (A). Replies to posts in this dataset are also labeled with stance and hence use the same data format as posts. The labeling results are shown in the right-hand side of Table TABREF12 . We observe that the dataset is more balanced than the FBFans dataset. In addition, there are 977 unique users in the dataset. To compare with Hasan and Ng's work, we conducted five-fold cross-validation and present the annotation results as the average number of all folds BIBREF9 , BIBREF5 ."
],
"extractive_spans": [
"abortion (ABO), gay rights (GAY), Obama (OBA), and marijuana (MAR)"
],
"free_form_answer": "",
"highlighted_evidence": [
"The CreateDebate dataset was collected from an English online debate forum discussing four topics: abortion (ABO), gay rights (GAY), Obama (OBA), and marijuana (MAR). The posts are annotated as for (F) and against (A). Replies to posts in this dataset are also labeled with stance and hence use the same data format as posts. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"421b4a1cb5e655bf876e28a979df32f623188040",
"60930a2202aee84276d2f6d27ea3bb4a760b232b"
],
"answer": [
{
"evidence": [
"To test whether the assumption of this paper – posts attract users who hold the same stance to like them – is reliable, we examine the likes from authors of different stances. Posts in FBFans dataset are used for this analysis. We calculate the like statistics of each distinct author from these 32,595 posts. As the numbers of authors in the Sup, Neu and Uns stances are largely imbalanced, these numbers are normalized by the number of users of each stance. Table TABREF13 shows the results. Posts with stances (i.e., not neutral) attract users of the same stance. Neutral posts also attract both supportive and neutral users, like what we observe in supportive posts, but just the neutral posts can attract even more neutral likers. These results do suggest that users prefer posts of the same stance, or at least posts of no obvious stance which might cause annoyance when reading, and hence support the user modeling in our approach."
],
"extractive_spans": [
"32,595 posts"
],
"free_form_answer": "",
"highlighted_evidence": [
"Posts in FBFans dataset are used for this analysis. We calculate the like statistics of each distinct author from these 32,595 posts."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The FBFans dataset contains data from anti-nuclear-power Chinese Facebook fan groups from September 2013 to August 2014, including posts and their author and liker IDs. There are a total of 2,496 authors, 505,137 likers, 33,686 commenters, and 505,412 unique users. Two annotators were asked to take into account only the post content to label the stance of the posts in the whole dataset as supportive, neutral, or unsupportive (hereafter denoted as Sup, Neu, and Uns). Sup/Uns posts were those in support of or against anti-reconstruction; Neu posts were those evincing a neutral standpoint on the topic, or were irrelevant. Raw agreement between annotators is 0.91, indicating high agreement. Specifically, Cohen’s Kappa for Neu and not Neu labeling is 0.58 (moderate), and for Sup or Uns labeling is 0.84 (almost perfect). Posts with inconsistent labels were filtered out, and the development and testing sets were randomly selected from what was left. Posts in the development and testing sets involved at least one user who appeared in the training set. The number of posts for each stance is shown on the left-hand side of Table TABREF12 . About twenty percent of the posts were labeled with a stance, and the number of supportive (Sup) posts was much larger than that of the unsupportive (Uns) ones: this is thus highly skewed data, which complicates stance classification. On average, 161.1 users were involved in one post. The maximum was 23,297 and the minimum was one (the author). For comments, on average there were 3 comments per post. The maximum was 1,092 and the minimum was zero.",
"To test whether the assumption of this paper – posts attract users who hold the same stance to like them – is reliable, we examine the likes from authors of different stances. Posts in FBFans dataset are used for this analysis. We calculate the like statistics of each distinct author from these 32,595 posts. As the numbers of authors in the Sup, Neu and Uns stances are largely imbalanced, these numbers are normalized by the number of users of each stance. Table TABREF13 shows the results. Posts with stances (i.e., not neutral) attract users of the same stance. Neutral posts also attract both supportive and neutral users, like what we observe in supportive posts, but just the neutral posts can attract even more neutral likers. These results do suggest that users prefer posts of the same stance, or at least posts of no obvious stance which might cause annoyance when reading, and hence support the user modeling in our approach."
],
"extractive_spans": [
"32,595"
],
"free_form_answer": "",
"highlighted_evidence": [
"The FBFans dataset contains data from anti-nuclear-power Chinese Facebook fan groups from September 2013 to August 2014, including posts and their author and liker IDs. ",
"Posts in FBFans dataset are used for this analysis. We calculate the like statistics of each distinct author from these 32,595 posts."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"03dce6f4b0fbb16248d2ea38c7b852f41d761554",
"66bdb4452665b85b38877093b5a9388674bf7fb8"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"We tested the proposed UTCNN on two different datasets: FBFans and CreateDebate. FBFans is a privately-owned, single-topic, Chinese, unbalanced, social media dataset, and CreateDebate is a public, multiple-topic, English, balanced, forum dataset. Results using these two datasets show the applicability and superiority for different topics, languages, data distributions, and platforms.",
"The CreateDebate dataset was collected from an English online debate forum discussing four topics: abortion (ABO), gay rights (GAY), Obama (OBA), and marijuana (MAR). The posts are annotated as for (F) and against (A). Replies to posts in this dataset are also labeled with stance and hence use the same data format as posts. The labeling results are shown in the right-hand side of Table TABREF12 . We observe that the dataset is more balanced than the FBFans dataset. In addition, there are 977 unique users in the dataset. To compare with Hasan and Ng's work, we conducted five-fold cross-validation and present the annotation results as the average number of all folds BIBREF9 , BIBREF5 ."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We tested the proposed UTCNN on two different datasets: FBFans and CreateDebate. FBFans is a privately-owned, single-topic, Chinese, unbalanced, social media dataset, and CreateDebate is a public, multiple-topic, English, balanced, forum dataset. ",
"The CreateDebate dataset was collected from an English online debate forum discussing four topics: abortion (ABO), gay rights (GAY), Obama (OBA), and marijuana (MAR). The posts are annotated as for (F) and against (A). Replies to posts in this dataset are also labeled with stance and hence use the same data format as posts. The labeling results are shown in the right-hand side of Table TABREF12 . We observe that the dataset is more balanced than the FBFans dataset. In addition, there are 977 unique users in the dataset. To compare with Hasan and Ng's work, we conducted five-fold cross-validation and present the annotation results as the average number of all folds BIBREF9 , BIBREF5 ."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"64b8c2449eb3d1583d86783d28f7a4764075495f",
"d10bc18f3f2e713b53868da880c65e43da2fdd4a"
],
"answer": [
{
"evidence": [
"We pit our model against the following baselines: 1) SVM with unigram, bigram, and trigram features, which is a standard yet rather strong classifier for text features; 2) SVM with average word embedding, where a document is represented as a continuous representation by averaging the embeddings of the composite words; 3) SVM with average transformed word embeddings (the INLINEFORM0 in equation EQREF6 ), where a document is represented as a continuous representation by averaging the transformed embeddings of the composite words; 4) two mature deep learning models on text classification, CNN BIBREF3 and Recurrent Convolutional Neural Networks (RCNN) BIBREF0 , where the hyperparameters are based on their work; 5) the above SVM and deep learning models with comment information; 6) UTCNN without user information, representing a pure-text CNN model where we use the same user matrix and user embeddings INLINEFORM1 and INLINEFORM2 for each user; 7) UTCNN without the LDA model, representing how UTCNN works with a single-topic dataset; 8) UTCNN without comments, in which the model predicts the stance label given only user and topic information. All these models were trained on the training set, and parameters as well as the SVM kernel selections (linear or RBF) were fine-tuned on the development set. Also, we adopt oversampling on SVMs, CNN and RCNN because the FBFans dataset is highly imbalanced."
],
"extractive_spans": [
"SVM with unigram, bigram, and trigram features",
"SVM with average word embedding",
"SVM with average transformed word embeddings",
"CNN",
"ecurrent Convolutional Neural Networks",
"SVM and deep learning models with comment information"
],
"free_form_answer": "",
"highlighted_evidence": [
"We pit our model against the following baselines: 1) SVM with unigram, bigram, and trigram features, which is a standard yet rather strong classifier for text features; 2) SVM with average word embedding, where a document is represented as a continuous representation by averaging the embeddings of the composite words; 3) SVM with average transformed word embeddings (the INLINEFORM0 in equation EQREF6 ), where a document is represented as a continuous representation by averaging the transformed embeddings of the composite words; 4) two mature deep learning models on text classification, CNN BIBREF3 and Recurrent Convolutional Neural Networks (RCNN) BIBREF0 , where the hyperparameters are based on their work; 5) the above SVM and deep learning models with comment information; "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We pit our model against the following baselines: 1) SVM with unigram, bigram, and trigram features, which is a standard yet rather strong classifier for text features; 2) SVM with average word embedding, where a document is represented as a continuous representation by averaging the embeddings of the composite words; 3) SVM with average transformed word embeddings (the INLINEFORM0 in equation EQREF6 ), where a document is represented as a continuous representation by averaging the transformed embeddings of the composite words; 4) two mature deep learning models on text classification, CNN BIBREF3 and Recurrent Convolutional Neural Networks (RCNN) BIBREF0 , where the hyperparameters are based on their work; 5) the above SVM and deep learning models with comment information; 6) UTCNN without user information, representing a pure-text CNN model where we use the same user matrix and user embeddings INLINEFORM1 and INLINEFORM2 for each user; 7) UTCNN without the LDA model, representing how UTCNN works with a single-topic dataset; 8) UTCNN without comments, in which the model predicts the stance label given only user and topic information. All these models were trained on the training set, and parameters as well as the SVM kernel selections (linear or RBF) were fine-tuned on the development set. Also, we adopt oversampling on SVMs, CNN and RCNN because the FBFans dataset is highly imbalanced."
],
"extractive_spans": [],
"free_form_answer": "SVM with unigram, bigram, trigram features, with average word embedding, with average transformed word embeddings, CNN and RCNN, SVM, CNN, RCNN with comment information",
"highlighted_evidence": [
"We pit our model against the following baselines: 1) SVM with unigram, bigram, and trigram features, which is a standard yet rather strong classifier for text features; 2) SVM with average word embedding, where a document is represented as a continuous representation by averaging the embeddings of the composite words; 3) SVM with average transformed word embeddings (the INLINEFORM0 in equation EQREF6 ), where a document is represented as a continuous representation by averaging the transformed embeddings of the composite words; 4) two mature deep learning models on text classification, CNN BIBREF3 and Recurrent Convolutional Neural Networks (RCNN) BIBREF0 , where the hyperparameters are based on their work; 5) the above SVM and deep learning models with comment information; 6) UTCNN without user information, representing a pure-text CNN model where we use the same user matrix and user embeddings INLINEFORM1 and INLINEFORM2 for each user; 7) UTCNN without the LDA model, representing how UTCNN works with a single-topic dataset; 8) UTCNN without comments, in which the model predicts the stance label given only user and topic information. All these models were trained on the training set, and parameters as well as the SVM kernel selections (linear or RBF) were fine-tuned on the development set. Also, we adopt oversampling on SVMs, CNN and RCNN because the FBFans dataset is highly imbalanced."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
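The baselines quoted in the evidence above include an SVM over unigram/bigram/trigram features and an SVM over documents represented as averaged word embeddings. The scikit-learn sketch below illustrates how such baselines are commonly assembled; it is not the authors' code, and the TF-IDF weighting, whitespace tokenization, 300-dimensional vectors, and class weighting (the paper instead uses oversampling) are assumptions made here.

```python
# Illustrative sketch of the n-gram SVM and averaged-embedding SVM baselines.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC, LinearSVC

# Baseline 1: SVM over unigram/bigram/trigram features (TF-IDF weighting assumed).
ngram_svm = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 3), min_df=2),
    LinearSVC(class_weight="balanced"),  # stand-in for the paper's oversampling
)

# Baseline 2: SVM over the average of the document's word embeddings.
def average_embedding(tokens, vectors, dim=300):
    hits = [vectors[t] for t in tokens if t in vectors]
    return np.mean(hits, axis=0) if hits else np.zeros(dim)

def featurize(docs, vectors, dim=300):
    return np.vstack([average_embedding(d.split(), vectors, dim) for d in docs])

embedding_svm = SVC(class_weight="balanced")  # linear vs. RBF kernel tuned on the dev set

# Usage with hypothetical data:
#   ngram_svm.fit(train_texts, train_labels)
#   embedding_svm.fit(featurize(train_texts, word_vectors), train_labels)
```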
"nlp_background": [
"",
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
"",
""
],
"question": [
"What topic is covered in the Chinese Facebook data? ",
"How many layers does the UTCNN model have?",
"What topics are included in the debate data?",
"What is the size of the Chinese data?",
"Did they collected the two datasets?",
"What are the baselines?"
],
"question_id": [
"37a79be0148e1751ffb2daabe4c8ec6680036106",
"518dae6f936882152c162058895db4eca815e649",
"e44a6bf67ce3fde0c6608b150030e44d87eb25e3",
"6a31db1aca57a818f36bba9002561724655372a7",
"e330e162ec29722f5ec9f83853d129c9e0693d65",
"d3093062aebff475b4deab90815004051e802aa6"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"social media",
"social media",
"social media",
"social media",
"social media",
"social media"
],
"topic_background": [
"",
"",
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: Document composition in a convolutional neural network with three convolutional filters and user- and topic-dependent semantic transformations. Respectively, xw is the word embedding of word w, x′w is the word embedding of word w after transformation, Uk and Tj are user and topic matrix embeddings for user k and topic j.",
"Figure 2: The UTCNN model. Assuming one post author, l likers and p topics, xdw is the word embedding of word w in the document; xcw is the word embedding of word w in the comments; Uk and uk are the moderator matrix and vector embedding for moderator k; Tj and tj are the topic matrix and vector embedding for topic j; Ri and ri are the commenter matrix and vector embedding for commenter i. For simplicity we do not explicitly plot the topic vector embedding part for comments, but it does include a maximum pooling layer as with documents.",
"Table 1: Annotation results of FBFans and CreateDebate dataset.",
"Table 2: Distribution of like behavior.",
"Table 3: Performance of post stance classification on the FBFans dataset. *UTCNN (full) results are statistically significant (p-value < 0.005) with respect to all other methods except for UTCNN shared user embedding.",
"Table 4: Accuracies of post stance classification on CreateDebate dataset. *UTCNN results were statistically significant (p-value < 0.001) with respect to other UTCNN settings."
],
"file": [
"3-Figure1-1.png",
"4-Figure2-1.png",
"5-Table1-1.png",
"5-Table2-1.png",
"7-Table3-1.png",
"9-Table4-1.png"
]
} | [
"How many layers does the UTCNN model have?",
"What are the baselines?"
] | [
[
"1611.03599-UTCNN Model Description-1",
"1611.03599-UTCNN Model Description-0",
"1611.03599-UTCNN Model Description-2"
],
[
"1611.03599-Baselines-0"
]
] | [
"eight layers",
"SVM with unigram, bigram, trigram features, with average word embedding, with average transformed word embeddings, CNN and RCNN, SVM, CNN, RCNN with comment information"
] | 115 |
1908.10084 | Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks | BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019) has set a new state-of-the-art performance on sentence-pair regression tasks like semantic textual similarity (STS). However, it requires that both sentences are fed into the network, which causes a massive computational overhead: Finding the most similar pair in a collection of 10,000 sentences requires about 50 million inference computations (~65 hours) with BERT. The construction of BERT makes it unsuitable for semantic similarity search as well as for unsupervised tasks like clustering. ::: In this publication, we present Sentence-BERT (SBERT), a modification of the pretrained BERT network that use siamese and triplet network structures to derive semantically meaningful sentence embeddings that can be compared using cosine-similarity. This reduces the effort for finding the most similar pair from 65 hours with BERT / RoBERTa to about 5 seconds with SBERT, while maintaining the accuracy from BERT. ::: We evaluate SBERT and SRoBERTa on common STS tasks and transfer learning tasks, where it outperforms other state-of-the-art sentence embeddings methods. | {
"paragraphs": [
[
"In this publication, we present Sentence-BERT (SBERT), a modification of the BERT network using siamese and triplet networks that is able to derive semantically meaningful sentence embeddings. This enables BERT to be used for certain new tasks, which up-to-now were not applicable for BERT. These tasks include large-scale semantic similarity comparison, clustering, and information retrieval via semantic search.",
"BERT set new state-of-the-art performance on various sentence classification and sentence-pair regression tasks. BERT uses a cross-encoder: Two sentences are passed to the transformer network and the target value is predicted. However, this setup is unsuitable for various pair regression tasks due to too many possible combinations. Finding in a collection of $n=10\\,000$ sentences the pair with the highest similarity requires with BERT $n\\cdot (n-1)/2=49\\,995\\,000$ inference computations. On a modern V100 GPU, this requires about 65 hours. Similar, finding which of the over 40 million existent questions of Quora is the most similar for a new question could be modeled as a pair-wise comparison with BERT, however, answering a single query would require over 50 hours.",
"A common method to address clustering and semantic search is to map each sentence to a vector space such that semantically similar sentences are close. Researchers have started to input individual sentences into BERT and to derive fixed-size sentence embeddings. The most commonly used approach is to average the BERT output layer (known as BERT embeddings) or by using the output of the first token (the [CLS] token). As we will show, this common practice yields rather bad sentence embeddings, often worse than averaging GloVe embeddings BIBREF2.",
"To alleviate this issue, we developed SBERT. The siamese network architecture enables that fixed-sized vectors for input sentences can be derived. Using a similarity measure like cosine-similarity or Manhatten / Euclidean distance, semantically similar sentences can be found. These similarity measures can be performed extremely efficient on modern hardware, allowing SBERT to be used for semantic similarity search as well as for clustering. The complexity for finding the most similar sentence pair in a collection of 10,000 sentences is reduced from 65 hours with BERT to the computation of 10,000 sentence embeddings (5 seconds with SBERT) and computing cosine-similarity (0.01 seconds). By using optimized index structures, finding the most similar Quora question can be reduced from 50 hours to a few milliseconds BIBREF3.",
"We fine-tune SBERT on NLI data, which creates sentence embeddings that significantly outperform other state-of-the-art sentence embedding methods like InferSent BIBREF4 and Universal Sentence Encoder BIBREF5. On seven Semantic Textual Similarity (STS) tasks, SBERT achieves an improvement of 11.7 points compared to InferSent and 5.5 points compared to Universal Sentence Encoder. On SentEval BIBREF6, an evaluation toolkit for sentence embeddings, we achieve an improvement of 2.1 and 2.6 points, respectively.",
"SBERT can be adapted to a specific task. It sets new state-of-the-art performance on a challenging argument similarity dataset BIBREF7 and on a triplet dataset to distinguish sentences from different sections of a Wikipedia article BIBREF8.",
"The paper is structured in the following way: Section SECREF3 presents SBERT, section SECREF4 evaluates SBERT on common STS tasks and on the challenging Argument Facet Similarity (AFS) corpus BIBREF7. Section SECREF5 evaluates SBERT on SentEval. In section SECREF6, we perform an ablation study to test some design aspect of SBERT. In section SECREF7, we compare the computational efficiency of SBERT sentence embeddings in contrast to other state-of-the-art sentence embedding methods."
],
[
"We first introduce BERT, then, we discuss state-of-the-art sentence embedding methods.",
"BERT BIBREF0 is a pre-trained transformer network BIBREF9, which set for various NLP tasks new state-of-the-art results, including question answering, sentence classification, and sentence-pair regression. The input for BERT for sentence-pair regression consists of the two sentences, separated by a special [SEP] token. Multi-head attention over 12 (base-model) or 24 layers (large-model) is applied and the output is passed to a simple regression function to derive the final label. Using this setup, BERT set a new state-of-the-art performance on the Semantic Textual Semilarity (STS) benchmark BIBREF10. RoBERTa BIBREF1 showed, that the performance of BERT can further improved by small adaptations to the pre-training process. We also tested XLNet BIBREF11, but it led in general to worse results than BERT.",
"A large disadvantage of the BERT network structure is that no independent sentence embeddings are computed, which makes it difficult to derive sentence embeddings from BERT. To bypass this limitations, researchers passed single sentences through BERT and then derive a fixed sized vector by either averaging the outputs (similar to average word embeddings) or by using the output of the special CLS token (for example: bertsentenceembeddings1,bertsentenceembeddings2,bertsentenceembeddings3). These two options are also provided by the popular bert-as-a-service-repository. Up to our knowledge, there is so far no evaluation if these methods lead to useful sentence embeddings.",
"Sentence embeddings are a well studied area with dozens of proposed methods. Skip-Thought BIBREF12 trains an encoder-decoder architecture to predict the surrounding sentences. InferSent BIBREF4 uses labeled data of the Stanford Natural Language Inference dataset BIBREF13 and the Multi-Genre NLI dataset BIBREF14 to train a siamese BiLSTM network with max-pooling over the output. Conneau et al. showed, that InferSent consistently outperforms unsupervised methods like SkipThought. Universal Sentence Encoder BIBREF5 trains a transformer network and augments unsupervised learning with training on SNLI. hill-etal-2016-learning showed, that the task on which sentence embeddings are trained significantly impacts their quality. Previous work BIBREF4, BIBREF5 found that the SNLI datasets are suitable for training sentence embeddings. yang-2018-learning presented a method to train on conversations from Reddit using siamese DAN and siamese transformer networks, which yielded good results on the STS benchmark dataset.",
"polyencoders addresses the run-time overhead of the cross-encoder from BERT and present a method (poly-encoders) to compute a score between $m$ context vectors and pre-computed candidate embeddings using attention. This idea works for finding the highest scoring sentence in a larger collection. However, poly-encoders have the drawback that the score function is not symmetric and the computational overhead is too large for use-cases like clustering, which would require $O(n^2)$ score computations.",
"Previous neural sentence embedding methods started the training from a random initialization. In this publication, we use the pre-trained BERT and RoBERTa network and only fine-tune it to yield useful sentence embeddings. This reduces significantly the needed training time: SBERT can be tuned in less than 20 minutes, while yielding better results than comparable sentence embedding methods."
],
[
"SBERT adds a pooling operation to the output of BERT / RoBERTa to derive a fixed sized sentence embedding. We experiment with three pooling strategies: Using the output of the CLS-token, computing the mean of all output vectors (MEAN-strategy), and computing a max-over-time of the output vectors (MAX-strategy). The default configuration is MEAN.",
"In order to fine-tune BERT / RoBERTa, we create siamese and triplet networks BIBREF15 to update the weights such that the produced sentence embeddings are semantically meaningful and can be compared with cosine-similarity.",
"The network structure depends on the available training data. We experiment with the following structures and objective functions.",
"Classification Objective Function. We concatenate the sentence embeddings $u$ and $v$ with the element-wise difference $|u-v|$ and multiply it with the trainable weight $W_t \\in \\mathbb {R}^{3n \\times k}$:",
"where $n$ is the dimension of the sentence embeddings and $k$ the number of labels. We optimize cross-entropy loss. This structure is depicted in Figure FIGREF4.",
"Regression Objective Function. The cosine-similarity between the two sentence embeddings $u$ and $v$ is computed (Figure FIGREF5). We use mean-squared-error loss as the objective function.",
"Triplet Objective Function. Given an anchor sentence $a$, a positive sentence $p$, and a negative sentence $n$, triplet loss tunes the network such that the distance between $a$ and $p$ is smaller than the distance between $a$ and $n$. Mathematically, we minimize the following loss function:",
"with $s_x$ the sentence embedding for $a$/$n$/$p$, $||\\cdot ||$ a distance metric and margin $\\epsilon $. Margin $\\epsilon $ ensures that $s_p$ is at least $\\epsilon $ closer to $s_a$ than $s_n$. As metric we use Euclidean distance and we set $\\epsilon =1$ in our experiments."
],
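The model-description paragraphs above cover the siamese/triplet setup, the three pooling strategies, and the classification, regression, and triplet objectives. The following PyTorch sketch is only an illustration of those pieces under stated assumptions (mean pooling, `bert-base-uncased`, and an invented class name `SiameseSBERT`); it is not the authors' implementation.

```python
# Illustrative sketch (not the official SBERT code) of the objectives described above.
import torch
import torch.nn as nn
from transformers import AutoModel

class SiameseSBERT(nn.Module):  # hypothetical name
    def __init__(self, model_name="bert-base-uncased", num_labels=3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        dim = self.encoder.config.hidden_size
        # Softmax head over (u, v, |u - v|), i.e. the trainable W_t in R^{3n x k}.
        self.classifier = nn.Linear(3 * dim, num_labels)

    def embed(self, enc):
        """MEAN pooling over token outputs (the paper's default strategy)."""
        hidden = self.encoder(**enc).last_hidden_state           # (batch, seq, dim)
        mask = enc["attention_mask"].unsqueeze(-1).float()       # exclude padding tokens
        return (hidden * mask).sum(1) / mask.sum(1).clamp(min=1e-9)

    def classification_logits(self, u, v):
        """Classification objective: softmax over concat(u, v, |u - v|)."""
        return self.classifier(torch.cat([u, v, torch.abs(u - v)], dim=-1))

def regression_loss(u, v, gold_similarity):
    """Regression objective: cosine-similarity fit with mean-squared error."""
    return nn.functional.mse_loss(nn.functional.cosine_similarity(u, v), gold_similarity)

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet objective with Euclidean distance and margin epsilon = 1."""
    d_pos = torch.norm(anchor - positive, dim=-1)
    d_neg = torch.norm(anchor - negative, dim=-1)
    return torch.relu(d_pos - d_neg + margin).mean()
```

Both towers share this single encoder (tied weights, as in the Figure 1 caption); at inference only `embed` is used and sentence pairs are compared with cosine-similarity.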
[
"We train SBERT on the combination of the SNLI BIBREF13 and the Multi-Genre NLI BIBREF14 dataset. The SNLI is a collection of 570,000 sentence pairs annotated with the labels contradiction, eintailment, and neutral. MultiNLI contains 430,000 sentence pairs and covers a range of genres of spoken and written text. We fine-tune SBERT with a 3-way softmax-classifier objective function for one epoch. We used a batch-size of 16, Adam optimizer with learning rate $2\\mathrm {e}{-5}$, and a linear learning rate warm-up over 10% of the training data. Our default pooling strategy is MEAN."
],
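The training paragraph above fixes the main hyperparameters: one epoch over SNLI+MultiNLI with a 3-way softmax classifier, batch size 16, Adam at learning rate 2e-5, and a linear warm-up over 10% of the training data. A hedged configuration sketch follows; `SiameseSBERT` is the hypothetical class from the previous sketch, and AdamW is used here as a stand-in for the Adam optimizer mentioned in the text.

```python
# Optimizer/schedule sketch matching the stated hyperparameters (illustrative only).
import torch
from transformers import get_linear_schedule_with_warmup

def make_optimizer_and_schedule(model, batches_per_epoch, epochs=1):
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)   # Adam-style optimizer, lr 2e-5
    total_steps = batches_per_epoch * epochs                     # one epoch in the paper
    warmup_steps = int(0.1 * total_steps)                        # linear warm-up over 10% of the data
    scheduler = get_linear_schedule_with_warmup(optimizer, warmup_steps, total_steps)
    return optimizer, scheduler

# Per training step (batch size 16), the 3-way softmax classification objective is:
#   logits = model.classification_logits(model.embed(enc_a), model.embed(enc_b))
#   loss = torch.nn.functional.cross_entropy(logits, nli_labels)
```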
[
"We evaluate the performance of SBERT for common Semantic Textual Similarity (STS) tasks. State-of-the-art methods often learn a (complex) regression function that maps sentence embeddings to a similarity score. However, these regression functions work pair-wise and due to the combinatorial explosion those are often not scalable if the collection of sentences reaches a certain size. Instead, we always use cosine-similarity to compare the similarity between two sentence embeddings. We ran our experiments also with negative Manhatten and negative Euclidean distances as similarity measures, but the results for all approaches remained roughly the same."
],
[
"We evaluate the performance of SBERT for STS without using any STS specific training data. We use the STS tasks 2012 - 2016 BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, the STS benchmark BIBREF10, and the SICK-Relatedness dataset BIBREF21. These datasets provide labels between 0 and 5 on the semantic relatedness of sentence pairs. We showed in BIBREF22 that Pearson correlation is badly suited for STS. Instead, we compute the Spearman's rank correlation between the cosine-similarity of the sentence embeddings and the gold labels. The setup for the other sentence embedding methods is equivalent, the similarity is computed by cosine-similarity. The results are depicted in Table TABREF6.",
"The results shows that directly using the output of BERT leads to rather poor performances. Averaging the BERT embeddings achieves an average correlation of only 54.81, and using the CLS-token output only achieves an average correlation of 29.19. Both are worse than computing average GloVe embeddings.",
"Using the described siamese network structure and fine-tuning mechanism substantially improves the correlation, outperforming both InferSent and Universal Sentence Encoder substantially. The only dataset where SBERT performs worse than Universal Sentence Encoder is SICK-R. Universal Sentence Encoder was trained on various datasets, including news, question-answer pages and discussion forums, which appears to be more suitable to the data of SICK-R. In contrast, SBERT was pre-trained only on Wikipedia (via BERT) and on NLI data.",
"While RoBERTa was able to improve the performance for several supervised tasks, we only observe minor difference between SBERT and SRoBERTa for generating sentence embeddings."
],
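The unsupervised-STS paragraphs above score embeddings by the Spearman rank correlation between cosine similarities and gold labels. A minimal sketch of that evaluation, assuming precomputed embedding matrices for the two sides of each sentence pair:

```python
# Spearman correlation between cosine similarities and gold STS scores (illustrative).
import numpy as np
from scipy.stats import spearmanr

def sts_spearman(emb_a, emb_b, gold_scores):
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    cosine_sims = (a * b).sum(axis=1)              # pairwise cosine similarity
    rho, _ = spearmanr(cosine_sims, gold_scores)   # rank correlation
    return 100.0 * rho                             # reported by convention as rho x 100
```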
[
"The STS benchmark (STSb) BIBREF10 provides is a popular dataset to evaluate supervised STS systems. The data includes 8,628 sentence pairs from the three categories captions, news, and forums. It is divided into train (5,749), dev (1,500) and test (1,379). BERT set a new state-of-the-art performance on this dataset by passing both sentences to the network and using a simple regression method for the output.",
"We use the training set to fine-tune SBERT using the regression objective function. At prediction time, we compute the cosine-similarity between the sentence embeddings. All systems are trained with 10 random seeds to counter variances BIBREF23.",
"The results are depicted in Table TABREF10. We experimented with two setups: Only training on STSb, and first training on NLI, then training on STSb. We observe that the later strategy leads to a slight improvement of 1-2 points. This two-step approach had an especially large impact for the BERT cross-encoder, which improved the performance by 3-4 points. We do not observe a significant difference between BERT and RoBERTa."
],
[
"We evaluate SBERT on the Argument Facet Similarity (AFS) corpus by MisraEW16. The AFS corpus annotated 6,000 sentential argument pairs from social media dialogs on three controversial topics: gun control, gay marriage, and death penalty. The data was annotated on a scale from 0 (“different topic\") to 5 (“completely equivalent\"). The similarity notion in the AFS corpus is fairly different to the similarity notion in the STS datasets from SemEval. STS data is usually descriptive, while AFS data are argumentative excerpts from dialogs. To be considered similar, arguments must not only make similar claims, but also provide a similar reasoning. Further, the lexical gap between the sentences in AFS is much larger. Hence, simple unsupervised methods as well as state-of-the-art STS systems perform badly on this dataset BIBREF24.",
"We evaluate SBERT on this dataset in two scenarios: 1) As proposed by Misra et al., we evaluate SBERT using 10-fold cross-validation. A draw-back of this evaluation setup is that it is not clear how well approaches generalize to different topics. Hence, 2) we evaluate SBERT in a cross-topic setup. Two topics serve for training and the approach is evaluated on the left-out topic. We repeat this for all three topics and average the results.",
"SBERT is fine-tuned using the Regression Objective Function. The similarity score is computed using cosine-similarity based on the sentence embeddings. We also provide the Pearson correlation $r$ to make the results comparable to Misra et al. However, we showed BIBREF22 that Pearson correlation has some serious drawbacks and should be avoided for comparing STS systems. The results are depicted in Table TABREF12.",
"Unsupervised methods like tf-idf, average GloVe embeddings or InferSent perform rather badly on this dataset with low scores. Training SBERT in the 10-fold cross-validation setup gives a performance that is nearly on-par with BERT.",
"However, in the cross-topic evaluation, we observe a performance drop of SBERT by about 7 points Spearman correlation. To be considered similar, arguments should address the same claims and provide the same reasoning. BERT is able to use attention to compare directly both sentences (e.g. word-by-word comparison), while SBERT must map individual sentences from an unseen topic to a vector space such that arguments with similar claims and reasons are close. This is a much more challenging task, which appears to require more than just two topics for training to work on-par with BERT."
],
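The AFS paragraphs above contrast 10-fold cross-validation with a cross-topic setup: train on two topics, evaluate on the held-out one, and average over the three topics. A minimal leave-one-topic-out split, assuming each example carries a "topic" field:

```python
# Leave-one-topic-out splits for the cross-topic AFS evaluation (illustrative).
def cross_topic_splits(examples, topics=("gun control", "gay marriage", "death penalty")):
    for held_out in topics:
        train = [ex for ex in examples if ex["topic"] != held_out]
        test = [ex for ex in examples if ex["topic"] == held_out]
        yield held_out, train, test

# Scores (Pearson r / Spearman rho of cosine similarities) are averaged over the three held-out topics.
```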
[
"ein-dor-etal-2018-learning use Wikipedia to create a thematically fine-grained train, dev and test set for sentence embeddings methods. Wikipedia articles are separated into distinct sections focusing on certain aspects. Dor et al. assume that sentences in the same section are thematically closer than sentences in different sections. They use this to create a large dataset of weakly labeled sentence triplets: The anchor and the positive example come from the same section, while the negative example comes from a different section of the same article. For example, from the Alice Arnold article: Anchor: Arnold joined the BBC Radio Drama Company in 1988., positive: Arnold gained media attention in May 2012., negative: Balding and Arnold are keen amateur golfers.",
"We use the dataset from Dor et al. We use the Triplet Objective, train SBERT for one epoch on the about 1.8 Million training triplets and evaluate it on the 222,957 test triplets. Test triplets are from a distinct set of Wikipedia articles. As evaluation metric, we use accuracy: Is the positive example closer to the anchor than the negative example?",
"Results are presented in Table TABREF14. Dor et al. fine-tuned a BiLSTM architecture with triplet loss to derive sentence embeddings for this dataset. As the table shows, SBERT clearly outperforms the BiLSTM approach by Dor et al."
],
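The Wikipedia-sections paragraphs above evaluate with triplet accuracy: is the positive example closer to the anchor than the negative one? A short sketch of that metric, using the same Euclidean distance as the triplet objective (embedding arrays are assumed to be precomputed):

```python
# Triplet accuracy: fraction of triplets where the positive is closer to the anchor.
import numpy as np

def triplet_accuracy(anchors, positives, negatives):
    d_pos = np.linalg.norm(anchors - positives, axis=1)
    d_neg = np.linalg.norm(anchors - negatives, axis=1)
    return float(np.mean(d_pos < d_neg))
```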
[
"SentEval BIBREF6 is a popular toolkit to evaluate the quality of sentence embeddings. Sentence embeddings are used as features for a logistic regression classifier. The logistic regression classifier is trained on various tasks in a 10-fold cross-validation setup and the prediction accuracy is computed for the test-fold.",
"The purpose of SBERT sentence embeddings are not to be used for transfer learning for other tasks. Here, we think fine-tuning BERT as described by devlin2018bert for new tasks is the more suitable method, as it updates all layers of the BERT network. However, SentEval can still give an impression on the quality of our sentence embeddings for various tasks.",
"We compare the SBERT sentence embeddings to other sentence embeddings methods on the following seven SentEval transfer tasks:",
"MR: Sentiment prediction for movie reviews snippets on a five start scale BIBREF25.",
"CR: Sentiment prediction of customer product reviews BIBREF26.",
"SUBJ: Subjectivity prediction of sentences from movie reviews and plot summaries BIBREF27.",
"MPQA: Phrase level opinion polarity classification from newswire BIBREF28.",
"SST: Stanford Sentiment Treebank with binary labels BIBREF29.",
"TREC: Fine grained question-type classification from TREC BIBREF30.",
"MRPC: Microsoft Research Paraphrase Corpus from parallel news sources BIBREF31.",
"The results can be found in Table TABREF15. SBERT is able to achieve the best performance in 5 out of 7 tasks. The average performance increases by about 2 percentage points compared to InferSent as well as the Universal Sentence Encoder. Even though transfer learning is not the purpose of SBERT, it outperforms other state-of-the-art sentence embeddings methods on this task.",
"It appears that the sentence embeddings from SBERT capture well sentiment information: We observe large improvements for all sentiment tasks (MR, CR, and SST) from SentEval in comparison to InferSent and Universal Sentence Encoder.",
"The only dataset where SBERT is significantly worse than Universal Sentence Encoder is the TREC dataset. Universal Sentence Encoder was pre-trained on question-answering data, which appears to be beneficial for the question-type classification task of the TREC dataset.",
"Average BERT embeddings or using the CLS-token output from a BERT network achieved bad results for various STS tasks (Table TABREF6), worse than average GloVe embeddings. However, for SentEval, average BERT embeddings and the BERT CLS-token output achieves decent results (Table TABREF15), outperforming average GloVe embeddings. The reason for this are the different setups. For the STS tasks, we used cosine-similarity to estimate the similarities between sentence embeddings. Cosine-similarity treats all dimensions equally. In contrast, SentEval fits a logistic regression classifier to the sentence embeddings. This allows that certain dimensions can have higher or lower impact on the classification result.",
"We conclude that average BERT embeddings / CLS-token output from BERT return sentence embeddings that are infeasible to be used with cosine-similarity or with Manhatten / Euclidean distance. For transfer learning, they yield slightly worse results than InferSent or Universal Sentence Encoder. However, using the described fine-tuning setup with a siamese network structure on NLI datasets yields sentence embeddings that achieve a new state-of-the-art for the SentEval toolkit."
],
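As described above, SentEval trains a logistic regression classifier on top of frozen sentence embeddings with 10-fold cross-validation. The scikit-learn stand-in below mimics that protocol; it is not the SentEval toolkit itself, and the solver settings are assumptions.

```python
# Rough stand-in for the SentEval protocol (illustrative; not the actual toolkit).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def senteval_like_accuracy(sentence_embeddings, labels):
    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_score(clf, sentence_embeddings, labels, cv=10, scoring="accuracy")
    return float(scores.mean())
```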
[
"We have demonstrated strong empirical results for the quality of SBERT sentence embeddings. In this section, we perform an ablation study of different aspects of SBERT in order to get a better understanding of their relative importance.",
"We evaluated different pooling strategies (MEAN, MAX, and CLS). For the classification objective function, we evaluate different concatenation methods. For each possible configuration, we train SBERT with 10 different random seeds and average the performances.",
"The objective function (classification vs. regression) depends on the annotated dataset. For the classification objective function, we train SBERT-base on the SNLI and the Multi-NLI dataset. For the regression objective function, we train on the training set of the STS benchmark dataset. Performances are measured on the development split of the STS benchmark dataset. Results are shown in Table TABREF23.",
"When trained with the classification objective function on NLI data, the pooling strategy has a rather minor impact. The impact of the concatenation mode is much larger. InferSent BIBREF4 and Universal Sentence Encoder BIBREF5 both use $(u, v, |u-v|, u*v)$ as input for a softmax classifier. However, in our architecture, adding the element-wise $u*v$ decreased the performance.",
"The most important component is the element-wise difference $|u-v|$. Note, that the concatenation mode is only relevant for training the softmax classifier. At inference, when predicting similarities for the STS benchmark dataset, only the sentence embeddings $u$ and $v$ are used in combination with cosine-similarity. The element-wise difference measures the distance between the dimensions of the two sentence embeddings, ensuring that similar pairs are closer and dissimilar pairs are further apart.",
"When trained with the regression objective function, we observe that the pooling strategy has a large impact. There, the MAX strategy perform significantly worse than MEAN or CLS-token strategy. This is in contrast to BIBREF4, who found it beneficial for the BiLSTM-layer of InferSent to use MAX instead of MEAN pooling."
],
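The ablation paragraphs above compare concatenation modes for the softmax classifier, e.g. (u, v), (u, v, |u-v|) and (u, v, |u-v|, u*v). A tiny helper showing how those feature variants can be built from a pair of embeddings (the helper name and mode strings are mine, not the authors'):

```python
# Concatenation modes from the ablation study (illustrative helper).
import torch

def concat_features(u: torch.Tensor, v: torch.Tensor, mode: str = "u,v,|u-v|") -> torch.Tensor:
    parts = {
        "u,v": [u, v],
        "u,v,|u-v|": [u, v, torch.abs(u - v)],             # element-wise difference: the most important component
        "u,v,|u-v|,u*v": [u, v, torch.abs(u - v), u * v],   # InferSent/USE-style input; u*v decreased performance here
    }[mode]
    return torch.cat(parts, dim=-1)   # input to the softmax classifier during training only
```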
[
"Sentence embeddings need potentially be computed for Millions of sentences, hence, a high computation speed is desired. In this section, we compare SBERT to average GloVe embeddings, InferSent BIBREF4, and Universal Sentence Encoder BIBREF5.",
"For our comparison we use the sentences from the STS benchmark BIBREF10. We compute average GloVe embeddings using a simple for-loop with python dictionary lookups and NumPy. InferSent is based on PyTorch. For Universal Sentence Encoder, we use the TensorFlow Hub version, which is based on TensorFlow. SBERT is based on PyTorch. For improved computation of sentence embeddings, we implemented a smart batching strategy: Sentences with similar lengths are grouped together and are only padded to the longest element in a mini-batch. This drastically reduces computational overhead from padding tokens.",
"Performances were measured on a server with Intel i7-5820K CPU @ 3.30GHz, Nvidia Tesla V100 GPU, CUDA 9.2 and cuDNN. The results are depicted in Table TABREF26.",
"On CPU, InferSent is about 65% faster than SBERT. This is due to the much simpler network architecture. InferSent uses a single BiLSTM layer, while BERT uses 12 stacked transformer layers. However, an advantage of transformer networks is the computational efficiency on GPUs. There, SBERT with smart batching is about 9% faster than InferSent and about 55% faster than Universal Sentence Encoder. Smart batching achieves a speed-up of 89% on CPU and 48% on GPU. Average GloVe embeddings is obviously by a large margin the fastest method to compute sentence embeddings."
],
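The efficiency paragraphs above attribute part of the speed-up to "smart batching": sentences of similar length are grouped so that each mini-batch is only padded to its own longest member. A small bucketing sketch follows; the batch size and whitespace length heuristic are assumptions, not values from the paper.

```python
# Smart batching sketch: sort by length, cut into mini-batches, pad per batch.
from typing import Iterator, List, Tuple

def smart_batches(sentences: List[str], batch_size: int = 32) -> Iterator[Tuple[List[int], List[str]]]:
    order = sorted(range(len(sentences)), key=lambda i: len(sentences[i].split()))
    for start in range(0, len(order), batch_size):
        idx = order[start:start + batch_size]
        yield idx, [sentences[i] for i in idx]

# Each yielded batch is then tokenized with per-batch padding, e.g. with a Hugging Face
# tokenizer: tokenizer(batch, padding="longest", truncation=True, return_tensors="pt"),
# so padding tokens only extend to the longest sentence in that mini-batch.
```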
[
"We showed that BERT out-of-the-box maps sentences to a vector space that is rather unsuitable to be used with common similarity measures like cosine-similarity. The performance for seven STS tasks was below the performance of average GloVe embeddings.",
"To overcome this shortcoming, we presented Sentence-BERT (SBERT). SBERT fine-tunes BERT in a siamese / triplet network architecture. We evaluated the quality on various common benchmarks, where it could achieve a significant improvement over state-of-the-art sentence embeddings methods. Replacing BERT with RoBERTa did not yield a significant improvement in our experiments.",
"SBERT is computationally efficient. On a GPU, it is about 9% faster than InferSent and about 55% faster than Universal Sentence Encoder. SBERT can be used for tasks which are computationally not feasible to be modeled with BERT. For example, clustering of 10,000 sentences with hierarchical clustering requires with BERT about 65 hours, as around 50 Million sentence combinations must be computed. With SBERT, we were able to reduce the effort to about 5 seconds."
],
[
"This work has been supported by the German Research Foundation through the German-Israeli Project Cooperation (DIP, grant DA 1600/1-1 and grant GU 798/17-1). It has been co-funded by the German Federal Ministry of Education and Research (BMBF) under the promotional references 03VP02540 (ArgumenText)."
]
],
"section_name": [
"Introduction",
"Related Work",
"Model",
"Model ::: Training Details",
"Evaluation - Semantic Textual Similarity",
"Evaluation - Semantic Textual Similarity ::: Unsupervised STS",
"Evaluation - Semantic Textual Similarity ::: Supervised STS",
"Evaluation - Semantic Textual Similarity ::: Argument Facet Similarity",
"Evaluation - Semantic Textual Similarity ::: Wikipedia Sections Distinction",
"Evaluation - SentEval",
"Ablation Study",
"Computational Efficiency",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"227155540f46bbcf4c74da1594e692247c0a4571",
"82a5dc4b7e33d1da88bc50001333830478fe53d6",
"c6b5b7fa1a22da712721a4ec18fd71cbc6c70e8a"
],
"answer": [
{
"evidence": [
"We compare the SBERT sentence embeddings to other sentence embeddings methods on the following seven SentEval transfer tasks:",
"MR: Sentiment prediction for movie reviews snippets on a five start scale BIBREF25.",
"CR: Sentiment prediction of customer product reviews BIBREF26.",
"SUBJ: Subjectivity prediction of sentences from movie reviews and plot summaries BIBREF27.",
"MPQA: Phrase level opinion polarity classification from newswire BIBREF28.",
"SST: Stanford Sentiment Treebank with binary labels BIBREF29.",
"TREC: Fine grained question-type classification from TREC BIBREF30.",
"MRPC: Microsoft Research Paraphrase Corpus from parallel news sources BIBREF31."
],
"extractive_spans": [
"MR",
"CR",
"SUBJ",
"MPQA",
"SST",
"TREC",
"MRPC"
],
"free_form_answer": "",
"highlighted_evidence": [
"We compare the SBERT sentence embeddings to other sentence embeddings methods on the following seven SentEval transfer tasks:\n\nMR: Sentiment prediction for movie reviews snippets on a five start scale BIBREF25.\n\nCR: Sentiment prediction of customer product reviews BIBREF26.\n\nSUBJ: Subjectivity prediction of sentences from movie reviews and plot summaries BIBREF27.\n\nMPQA: Phrase level opinion polarity classification from newswire BIBREF28.\n\nSST: Stanford Sentiment Treebank with binary labels BIBREF29.\n\nTREC: Fine grained question-type classification from TREC BIBREF30.\n\nMRPC: Microsoft Research Paraphrase Corpus from parallel news sources BIBREF31."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The purpose of SBERT sentence embeddings are not to be used for transfer learning for other tasks. Here, we think fine-tuning BERT as described by devlin2018bert for new tasks is the more suitable method, as it updates all layers of the BERT network. However, SentEval can still give an impression on the quality of our sentence embeddings for various tasks.",
"We compare the SBERT sentence embeddings to other sentence embeddings methods on the following seven SentEval transfer tasks:",
"MR: Sentiment prediction for movie reviews snippets on a five start scale BIBREF25.",
"CR: Sentiment prediction of customer product reviews BIBREF26.",
"SUBJ: Subjectivity prediction of sentences from movie reviews and plot summaries BIBREF27.",
"MPQA: Phrase level opinion polarity classification from newswire BIBREF28.",
"SST: Stanford Sentiment Treebank with binary labels BIBREF29.",
"TREC: Fine grained question-type classification from TREC BIBREF30.",
"MRPC: Microsoft Research Paraphrase Corpus from parallel news sources BIBREF31."
],
"extractive_spans": [
"MR: Sentiment prediction for movie reviews snippets on a five start scale BIBREF25.\n\nCR: Sentiment prediction of customer product reviews BIBREF26.\n\nSUBJ: Subjectivity prediction of sentences from movie reviews and plot summaries BIBREF27.\n\nMPQA: Phrase level opinion polarity classification from newswire BIBREF28.\n\nSST: Stanford Sentiment Treebank with binary labels BIBREF29.\n\nTREC: Fine grained question-type classification from TREC BIBREF30.\n\nMRPC: Microsoft Research Paraphrase Corpus from parallel news sources BIBREF31."
],
"free_form_answer": "",
"highlighted_evidence": [
"The purpose of SBERT sentence embeddings are not to be used for transfer learning for other tasks. Here, we think fine-tuning BERT as described by devlin2018bert for new tasks is the more suitable method, as it updates all layers of the BERT network. However, SentEval can still give an impression on the quality of our sentence embeddings for various tasks.\n\nWe compare the SBERT sentence embeddings to other sentence embeddings methods on the following seven SentEval transfer tasks:\n\nMR: Sentiment prediction for movie reviews snippets on a five start scale BIBREF25.\n\nCR: Sentiment prediction of customer product reviews BIBREF26.\n\nSUBJ: Subjectivity prediction of sentences from movie reviews and plot summaries BIBREF27.\n\nMPQA: Phrase level opinion polarity classification from newswire BIBREF28.\n\nSST: Stanford Sentiment Treebank with binary labels BIBREF29.\n\nTREC: Fine grained question-type classification from TREC BIBREF30.\n\nMRPC: Microsoft Research Paraphrase Corpus from parallel news sources BIBREF31."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We fine-tune SBERT on NLI data, which creates sentence embeddings that significantly outperform other state-of-the-art sentence embedding methods like InferSent BIBREF4 and Universal Sentence Encoder BIBREF5. On seven Semantic Textual Similarity (STS) tasks, SBERT achieves an improvement of 11.7 points compared to InferSent and 5.5 points compared to Universal Sentence Encoder. On SentEval BIBREF6, an evaluation toolkit for sentence embeddings, we achieve an improvement of 2.1 and 2.6 points, respectively.",
"We compare the SBERT sentence embeddings to other sentence embeddings methods on the following seven SentEval transfer tasks:",
"MR: Sentiment prediction for movie reviews snippets on a five start scale BIBREF25.",
"CR: Sentiment prediction of customer product reviews BIBREF26.",
"SUBJ: Subjectivity prediction of sentences from movie reviews and plot summaries BIBREF27.",
"MPQA: Phrase level opinion polarity classification from newswire BIBREF28.",
"SST: Stanford Sentiment Treebank with binary labels BIBREF29.",
"TREC: Fine grained question-type classification from TREC BIBREF30.",
"MRPC: Microsoft Research Paraphrase Corpus from parallel news sources BIBREF31."
],
"extractive_spans": [],
"free_form_answer": "Semantic Textual Similarity, sentiment prediction, subjectivity prediction, phrase level opinion polarity classification, Stanford Sentiment Treebank, fine grained question-type classification.",
"highlighted_evidence": [
"On seven Semantic Textual Similarity (STS) tasks, SBERT achieves an improvement of 11.7 points compared to InferSent and 5.5 points compared to Universal Sentence Encoder.",
"We compare the SBERT sentence embeddings to other sentence embeddings methods on the following seven SentEval transfer tasks:\n\nMR: Sentiment prediction for movie reviews snippets on a five start scale BIBREF25.\n\nCR: Sentiment prediction of customer product reviews BIBREF26.\n\nSUBJ: Subjectivity prediction of sentences from movie reviews and plot summaries BIBREF27.\n\nMPQA: Phrase level opinion polarity classification from newswire BIBREF28.\n\nSST: Stanford Sentiment Treebank with binary labels BIBREF29.\n\nTREC: Fine grained question-type classification from TREC BIBREF30.\n\nMRPC: Microsoft Research Paraphrase Corpus from parallel news sources BIBREF31."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"065f13ef9828b94a686f9272b39b0cb24d151649",
"c2deb9ee8f60b883c6c34dbe06bd21d23c422f12"
],
"answer": [
{
"evidence": [
"We evaluate the performance of SBERT for STS without using any STS specific training data. We use the STS tasks 2012 - 2016 BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, the STS benchmark BIBREF10, and the SICK-Relatedness dataset BIBREF21. These datasets provide labels between 0 and 5 on the semantic relatedness of sentence pairs. We showed in BIBREF22 that Pearson correlation is badly suited for STS. Instead, we compute the Spearman's rank correlation between the cosine-similarity of the sentence embeddings and the gold labels. The setup for the other sentence embedding methods is equivalent, the similarity is computed by cosine-similarity. The results are depicted in Table TABREF6."
],
"extractive_spans": [
" Spearman's rank correlation between the cosine-similarity of the sentence embeddings and the gold labels"
],
"free_form_answer": "",
"highlighted_evidence": [
"We showed in BIBREF22 that Pearson correlation is badly suited for STS. Instead, we compute the Spearman's rank correlation between the cosine-similarity of the sentence embeddings and the gold labels."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We evaluate the performance of SBERT for STS without using any STS specific training data. We use the STS tasks 2012 - 2016 BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, the STS benchmark BIBREF10, and the SICK-Relatedness dataset BIBREF21. These datasets provide labels between 0 and 5 on the semantic relatedness of sentence pairs. We showed in BIBREF22 that Pearson correlation is badly suited for STS. Instead, we compute the Spearman's rank correlation between the cosine-similarity of the sentence embeddings and the gold labels. The setup for the other sentence embedding methods is equivalent, the similarity is computed by cosine-similarity. The results are depicted in Table TABREF6."
],
"extractive_spans": [
"Spearman's rank correlation between the cosine-similarity of the sentence embeddings and the gold labels"
],
"free_form_answer": "",
"highlighted_evidence": [
"Instead, we compute the Spearman's rank correlation between the cosine-similarity of the sentence embeddings and the gold labels. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"a2aa52e85fcbaec8926d7a36e5af6dff01358aaf"
],
"answer": [
{
"evidence": [
"Previous neural sentence embedding methods started the training from a random initialization. In this publication, we use the pre-trained BERT and RoBERTa network and only fine-tune it to yield useful sentence embeddings. This reduces significantly the needed training time: SBERT can be tuned in less than 20 minutes, while yielding better results than comparable sentence embedding methods."
],
"extractive_spans": [
"20 minutes"
],
"free_form_answer": "",
"highlighted_evidence": [
"his reduces significantly the needed training time: SBERT can be tuned in less than 20 minutes, while yielding better results than comparable sentence embedding methods."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"271a384812be81d52bec41f7b32e103c602e80e9"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"83e35e1130821d77f71ae62fcd0d24a445e22e1c",
"975cdf66f21b22593caed10db96afe9b7e878a5a"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"SBERT adds a pooling operation to the output of BERT / RoBERTa to derive a fixed sized sentence embedding. We experiment with three pooling strategies: Using the output of the CLS-token, computing the mean of all output vectors (MEAN-strategy), and computing a max-over-time of the output vectors (MAX-strategy). The default configuration is MEAN.",
"In order to fine-tune BERT / RoBERTa, we create siamese and triplet networks BIBREF15 to update the weights such that the produced sentence embeddings are semantically meaningful and can be compared with cosine-similarity.",
"The network structure depends on the available training data. We experiment with the following structures and objective functions.",
"Classification Objective Function. We concatenate the sentence embeddings $u$ and $v$ with the element-wise difference $|u-v|$ and multiply it with the trainable weight $W_t \\in \\mathbb {R}^{3n \\times k}$:",
"where $n$ is the dimension of the sentence embeddings and $k$ the number of labels. We optimize cross-entropy loss. This structure is depicted in Figure FIGREF4.",
"Regression Objective Function. The cosine-similarity between the two sentence embeddings $u$ and $v$ is computed (Figure FIGREF5). We use mean-squared-error loss as the objective function.",
"Triplet Objective Function. Given an anchor sentence $a$, a positive sentence $p$, and a negative sentence $n$, triplet loss tunes the network such that the distance between $a$ and $p$ is smaller than the distance between $a$ and $n$. Mathematically, we minimize the following loss function:",
"with $s_x$ the sentence embedding for $a$/$n$/$p$, $||\\cdot ||$ a distance metric and margin $\\epsilon $. Margin $\\epsilon $ ensures that $s_p$ is at least $\\epsilon $ closer to $s_a$ than $s_n$. As metric we use Euclidean distance and we set $\\epsilon =1$ in our experiments."
],
"extractive_spans": [
"update the weights such that the produced sentence embeddings are semantically meaningful and can be compared with cosine-similarity.",
"Classification Objective Function",
"Regression Objective Function",
"Triplet Objective Function"
],
"free_form_answer": "",
"highlighted_evidence": [
"Model\nSBERT adds a pooling operation to the output of BERT / RoBERTa to derive a fixed sized sentence embedding. We experiment with three pooling strategies: Using the output of the CLS-token, computing the mean of all output vectors (MEAN-strategy), and computing a max-over-time of the output vectors (MAX-strategy). The default configuration is MEAN.\n\nIn order to fine-tune BERT / RoBERTa, we create siamese and triplet networks BIBREF15 to update the weights such that the produced sentence embeddings are semantically meaningful and can be compared with cosine-similarity.\n\nThe network structure depends on the available training data. We experiment with the following structures and objective functions.\n\nClassification Objective Function. We concatenate the sentence embeddings $u$ and $v$ with the element-wise difference $|u-v|$ and multiply it with the trainable weight $W_t \\in \\mathbb {R}^{3n \\times k}$:\n\nwhere $n$ is the dimension of the sentence embeddings and $k$ the number of labels. We optimize cross-entropy loss. This structure is depicted in Figure FIGREF4.\n\nRegression Objective Function. The cosine-similarity between the two sentence embeddings $u$ and $v$ is computed (Figure FIGREF5). We use mean-squared-error loss as the objective function.\n\nTriplet Objective Function. Given an anchor sentence $a$, a positive sentence $p$, and a negative sentence $n$, triplet loss tunes the network such that the distance between $a$ and $p$ is smaller than the distance between $a$ and $n$. Mathematically, we minimize the following loss function:\n\nwith $s_x$ the sentence embedding for $a$/$n$/$p$, $||\\cdot ||$ a distance metric and margin $\\epsilon $. Margin $\\epsilon $ ensures that $s_p$ is at least $\\epsilon $ closer to $s_a$ than $s_n$. As metric we use Euclidean distance and we set $\\epsilon =1$ in our experiments."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"8a5075b4259be420151bb345a76b45393a82dc11",
"dbbde81110fbe47316170e7c289bec13457a3180"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Spearman rank correlation ρ between the cosine similarity of sentence representations and the gold labels for various Textual Similarity (STS) tasks. Performance is reported by convention as ρ × 100. STS12-STS16: SemEval 2012-2016, STSb: STSbenchmark, SICK-R: SICK relatedness dataset.",
"FLOAT SELECTED: Table 3: Average Pearson correlation r and average Spearman’s rank correlation ρ on the Argument Facet Similarity (AFS) corpus (Misra et al., 2016). Misra et al. proposes 10-fold cross-validation. We additionally evaluate in a cross-topic scenario: Methods are trained on two topics, and are evaluated on the third topic."
],
"extractive_spans": [],
"free_form_answer": "GloVe, BERT, Universal Sentence Encoder, TF-IDF, InferSent",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Spearman rank correlation ρ between the cosine similarity of sentence representations and the gold labels for various Textual Similarity (STS) tasks. Performance is reported by convention as ρ × 100. STS12-STS16: SemEval 2012-2016, STSb: STSbenchmark, SICK-R: SICK relatedness dataset.",
"FLOAT SELECTED: Table 3: Average Pearson correlation r and average Spearman’s rank correlation ρ on the Argument Facet Similarity (AFS) corpus (Misra et al., 2016). Misra et al. proposes 10-fold cross-validation. We additionally evaluate in a cross-topic scenario: Methods are trained on two topics, and are evaluated on the third topic."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We compare the SBERT sentence embeddings to other sentence embeddings methods on the following seven SentEval transfer tasks:",
"The results can be found in Table TABREF15. SBERT is able to achieve the best performance in 5 out of 7 tasks. The average performance increases by about 2 percentage points compared to InferSent as well as the Universal Sentence Encoder. Even though transfer learning is not the purpose of SBERT, it outperforms other state-of-the-art sentence embeddings methods on this task.",
"FLOAT SELECTED: Table 5: Evaluation of SBERT sentence embeddings using the SentEval toolkit. SentEval evaluates sentence embeddings on different sentence classification tasks by training a logistic regression classifier using the sentence embeddings as features. Scores are based on a 10-fold cross-validation."
],
"extractive_spans": [],
"free_form_answer": "Avg. GloVe embeddings, Avg. fast-text embeddings, Avg. BERT embeddings, BERT CLS-vector, InferSent - GloVe and Universal Sentence Encoder.",
"highlighted_evidence": [
"We compare the SBERT sentence embeddings to other sentence embeddings methods on the following seven SentEval transfer tasks:",
"The results can be found in Table TABREF15.",
"FLOAT SELECTED: Table 5: Evaluation of SBERT sentence embeddings using the SentEval toolkit. SentEval evaluates sentence embeddings on different sentence classification tasks by training a logistic regression classifier using the sentence embeddings as features. Scores are based on a 10-fold cross-validation."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"somewhat",
"somewhat",
"somewhat",
"somewhat",
"somewhat",
"somewhat"
],
"question": [
"What transfer learning tasks are evaluated?",
"What metrics are used for the STS tasks?",
"How much time takes its training?",
"How many GPUs are used for the training of SBERT?",
"How are the siamese networks trained?",
"What other sentence embeddings methods are evaluated?"
],
"question_id": [
"4944cd597b836b62616a4e37c045ce48de8c82ca",
"a29c071065d26e5ee3c3bcd877e7f215c59d1d33",
"7f207549c75f5c4388efc15ed28822672b845663",
"596aede2b311deb8cb0a82d2e7de314ef6e83e4e",
"2e89ebd2e4008c67bb2413699589ee55f59c4f36",
"e2db361ae9ad9dbaa9a85736c5593eb3a471983d"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"Roberta",
"Roberta",
"Roberta",
"Roberta",
"Roberta",
"Roberta"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: SBERT architecture with classification objective function, e.g., for fine-tuning on SNLI dataset. The two BERT networks have tied weights (siamese network structure).",
"Figure 2: SBERT architecture at inference, for example, to compute similarity scores. This architecture is also used with the regression objective function.",
"Table 1: Spearman rank correlation ρ between the cosine similarity of sentence representations and the gold labels for various Textual Similarity (STS) tasks. Performance is reported by convention as ρ × 100. STS12-STS16: SemEval 2012-2016, STSb: STSbenchmark, SICK-R: SICK relatedness dataset.",
"Table 2: Evaluation on the STS benchmark test set. BERT systems were trained with 10 random seeds and 4 epochs. SBERT was fine-tuned on the STSb dataset, SBERT-NLI was pretrained on the NLI datasets, then fine-tuned on the STSb dataset.",
"Table 4: Evaluation on the Wikipedia section triplets dataset (Dor et al., 2018). SBERT trained with triplet loss for one epoch.",
"Table 3: Average Pearson correlation r and average Spearman’s rank correlation ρ on the Argument Facet Similarity (AFS) corpus (Misra et al., 2016). Misra et al. proposes 10-fold cross-validation. We additionally evaluate in a cross-topic scenario: Methods are trained on two topics, and are evaluated on the third topic.",
"Table 5: Evaluation of SBERT sentence embeddings using the SentEval toolkit. SentEval evaluates sentence embeddings on different sentence classification tasks by training a logistic regression classifier using the sentence embeddings as features. Scores are based on a 10-fold cross-validation.",
"Table 6: SBERT trained on NLI data with the classification objective function, on the STS benchmark (STSb) with the regression objective function. Configurations are evaluated on the development set of the STSb using cosine-similarity and Spearman’s rank correlation. For the concatenation methods, we only report scores with MEAN pooling strategy.",
"Table 7: Computation speed (sentences per second) of sentence embedding methods. Higher is better."
],
"file": [
"3-Figure1-1.png",
"3-Figure2-1.png",
"4-Table1-1.png",
"5-Table2-1.png",
"6-Table4-1.png",
"6-Table3-1.png",
"7-Table5-1.png",
"7-Table6-1.png",
"8-Table7-1.png"
]
} | [
"What transfer learning tasks are evaluated?",
"What other sentence embeddings methods are evaluated?"
] | [
[
"1908.10084-Evaluation - SentEval-7",
"1908.10084-Evaluation - SentEval-2",
"1908.10084-Evaluation - SentEval-3",
"1908.10084-Evaluation - SentEval-9",
"1908.10084-Evaluation - SentEval-4",
"1908.10084-Evaluation - SentEval-8",
"1908.10084-Evaluation - SentEval-5",
"1908.10084-Evaluation - SentEval-1",
"1908.10084-Introduction-4",
"1908.10084-Evaluation - SentEval-6"
],
[
"1908.10084-Evaluation - SentEval-2",
"1908.10084-Evaluation - SentEval-10",
"1908.10084-7-Table5-1.png",
"1908.10084-4-Table1-1.png",
"1908.10084-6-Table3-1.png"
]
] | [
"Semantic Textual Similarity, sentiment prediction, subjectivity prediction, phrase level opinion polarity classification, Stanford Sentiment Treebank, fine grained question-type classification.",
"Avg. GloVe embeddings, Avg. fast-text embeddings, Avg. BERT embeddings, BERT CLS-vector, InferSent - GloVe and Universal Sentence Encoder."
] | 116 |
1707.06806 | Shallow reading with Deep Learning: Predicting popularity of online content using only its title | With the ever decreasing attention span of contemporary Internet users, the title of online content (such as a news article or video) can be a major factor in determining its popularity. To take advantage of this phenomenon, we propose a new method based on a bidirectional Long Short-Term Memory (LSTM) neural network designed to predict the popularity of online content using only its title. We evaluate the proposed architecture on two distinct datasets of news articles and news videos distributed in social media that contain over 40,000 samples in total. On those datasets, our approach improves the performance over traditional shallow approaches by a margin of 15%. Additionally, we show that using pre-trained word vectors in the embedding layer improves the results of LSTM models, especially when the training set is small. To our knowledge, this is the first attempt of applying popularity prediction using only textual information from the title. | {
"paragraphs": [
[
"The distribution of textual content is typically very fast and catches user attention for only a short period of time BIBREF0 . For this reason, proper wording of the article title may play a significant role in determining the future popularity of the article. The reflection of this phenomenon is the proliferation of click-baits - short snippets of text whose main purpose is to encourage viewers to click on the link embedded in the snippet. Although detection of click-baits is a separate research topic BIBREF1 , in this paper we address a more general problem of predicting popularity of online content based solely on its title.",
"Predicting popularity in the Internet is a challenging and non-trivial task due to a multitude of factors impacting the distribution of the information: external context, social network of the publishing party, relevance of the video to the final user, etc. This topic has therefore attracted a lot of attention from the research community BIBREF2 , BIBREF3 , BIBREF0 , BIBREF4 .",
"In this paper we propose a method for online content popularity prediction based on a bidirectional recurrent neural network called BiLSTM. This work is inspired by recent successful applications of deep neural networks in many natural language processing problems BIBREF5 , BIBREF6 . Our method attempts to model complex relationships between the title of an article and its popularity using novel deep network architecture that, in contrast to the previous approaches, gives highly interpretable results. Last but not least, the proposed BiLSTM method provides a significant performance boost in terms of prediction accuracy over the standard shallow approach, while outperforming the current state-of-the-art on two distinct datasets with over 40,000 samples.",
"To summarize, the contributions presented in this paper are the following:",
"The remainder of this paper is organized in the following manner: first, we review the relevant literature and compare our approach to existing work. Next, we formulate the problem of popularity prediction and propose a model that takes advantage of BiLSTM architecture to address it. Then, we evaluate our model on two datasets using several pre-trained word embeddings and compare it to benchmark models. We conclude this work with discussion on future research paths."
],
[
"The ever increasing popularity of the Internet as a virtual space to share content inspired research community to analyze different aspects of online information distribution. Various types of content were analyzed, ranging from textual data, such as Twitter posts BIBREF0 or Digg stories BIBREF2 to images BIBREF7 to videos BIBREF8 , BIBREF3 , BIBREF9 . Although several similarities were observed across content domains, e.g. log-normal distribution of data popularity BIBREF10 , in this work we focus only on textual content and, more precisely, on the popularity of news articles and its relation to the article's title.",
"Forecasting popularity of news articles was especially well studied in the context of Twitter - a social media platform designed specifically for sharing textual data BIBREF11 , BIBREF12 . Not only did the previous works focus on the prediction part, but also on modeling message propagation within the network BIBREF13 . However, most of the works were focused on analyzing the social interactions between the users and the characteristics of so-called social graph of users' connections, rather than on the textual features. Contrary to those approaches, in this paper we base our predictions using only textual features of the article title. We also validate our proposed method on one dataset collected using a different social media platform, namely Facebook, and another one created from various news articles BIBREF4 .",
"Recently, several works have touched on the topic of popularity prediction of news article from a multimodal perspective BIBREF4 , BIBREF14 . Although in BIBREF4 the authors analyze news articles on a per-modality basis, they do not approach the problem of popularity prediction in a holistic way. To address this shortcoming, BIBREF14 have proposed a multimodal approach to predicting popularity of short videos shares in social media platform Vine using a model that fuses features related to different modalities. In our work, we focus only on textual features of the article title for the purpose of popularity prediction, as our goal is to empower the journalists to quantitatively assess the quality of the headlines they create before the publication. Nevertheless, we believe that in future research we will extend our method towards multimodal popularity prediction."
],
[
"In this section we present the bidirectional LSTM model for popularity prediction. We start by formulating the problem and follow up with the description of word embeddings used in our approach. We then present the Long Short-Term Memory network that serves as a backbone for our bidirectional LSTM architecture. We conclude this section with our interpretation of hidden bidirectional states and describe how they can be employed for title introspection."
],
[
"We cast the problem of popularity prediction as a binary classification task. We assume our data points contain a string of characters representing article title and a popularity metric, such as number of comments or views. The input of our classification is the character string, while the output is the binary label corresponding to popular or unpopular class. To enable the comparison of the methods on datasets containing content published on different websites and with different audience sizes, we determine that a video is popular if its popularity metric exceeds the median value of the corresponding metric for other points in the set, otherwise - it is labeled as unpopular. The details of the labeling procedure are discussed separately in the Datasets section."
],
[
"Since the input of our method is textual data, we follow the approach of BIBREF15 and map the text into a fixed-size vector representation. To this end, we use word embeddings that were successfully applied in other domains. We follow BIBREF5 and use pre-trained GloVe word vectors BIBREF16 to initialize the embedding layer (also known as look-up table). Section SECREF18 discusses the embedding layer in more details."
],
[
"Our method for popularity prediction using article's title is inspired by a bidirectional LSTM architecture. The overview of the model can be seen in Fig. FIGREF8 .",
"Let INLINEFORM0 be INLINEFORM1 -dimensional word vector corresponding to the INLINEFORM2 -the word in the headline, then a variable length sequence: INLINEFORM3 represents a headline. A recurrent neural network (RNN) processes this sequence by recursively applying a transformation function to the current element of sequence INLINEFORM4 and its previous hidden internal state INLINEFORM5 (optionally outputting INLINEFORM6 ). At each time step INLINEFORM7 , the hidden state is updated by: DISPLAYFORM0 ",
"where INLINEFORM0 is a non-linear activation function. LSTM network BIBREF17 updates its internal state differently, at each step INLINEFORM1 it calculates: DISPLAYFORM0 ",
" where INLINEFORM0 is the sigmoid activation function, tanh is the hyperbolic tangent function and INLINEFORM1 denotes component-wise multiplication. In our experiments we used 128, 256 for the dimensionality of hidden layer in both LSTM and BiLSTM. The term in equation EQREF10 INLINEFORM2 , is called the input gate and it uses the input word and the past hidden state to determine whether the input is worth remembering or not. The amount of information that is being discarded is controlled by forget gate INLINEFORM3 , while INLINEFORM4 is the output gate that controls the amount of information that leaks from memory cell INLINEFORM5 to the hidden state INLINEFORM6 . In the context of classification, we typically treat the output of the hidden state at the last time step of LSTM as the document representation and feed it to sigmoid layer to perform classification BIBREF18 .",
"Due to its sequential nature, a recurrent neural network puts more emphasis on the recent elements. To circumvent this problem BIBREF19 introduced a bidirectional RNN in which each training sequence is presented forwards and backwards to two separate recurrent nets, both of which are connected to the same output layer. Therefore, at any time-step we have the whole information about the sequence. This is shown by the following equation: DISPLAYFORM0 ",
"In our method, we use the bidirectional LSTM architecture for content popularity prediction using only textual cues. We have to therefore map the neural network outputs from a set of hidden states INLINEFORM0 to classification labels. We evaluated several approaches to this problem, such as max or mean pooling. The initial experiments showed that the highest performance was achieved using late fusion approach, that is by concatenating the last hidden state in forward and backward sequence. The intuition behind this design choice is that the importance of the first few words of the headline is relatively high, as the information contained in INLINEFORM1 , i.e. the last item in the backward sequence, is mostly taken from the first word."
],
[
"One interesting property of bidirectional RNNs is the fact, that the concatenation of hidden states INLINEFORM0 and INLINEFORM1 can be interpreted as a context-dependent vector representation of word INLINEFORM2 . This allows us to introspect a given title and approximate the contribution of each word to the estimated popularity. To that end one can process the headline representation INLINEFORM3 through the bidirectional recurrent network and then retrieve pairs of forward and backwards hidden state INLINEFORM4 for each word INLINEFORM5 . Then, the output of the last fully-connected layer INLINEFORM6 could be interpreted as context-depended popularity of a word INLINEFORM7 ."
],
[
"In our experiments we minimize the binary cross-entropy loss using Stochastic Gradient Descent on randomly shuffled mini-batches with the Adam optimization algorithm BIBREF20 . We reduce the learning rate by a factor of 0.2 once learning plateaus. We also employ early stopping strategy, i.e. stopping the training algorithm before convergence based on the values of loss function on the validation set."
],
[
"In this section, we evaluate our method and compare its performance against the competitive approaches. We use INLINEFORM0 -fold evaluation protocol with INLINEFORM1 with random dataset split. We measure the performance using standard accuracy metric which we define as a ratio between correctly classified data samples from test dataset and all test samples."
],
[
"In this section we present two datasets used in our experiments: The NowThisNews dataset, collected for the purpose of this paper, and The BreakingNews dataset BIBREF4 , publicly available dataset of news articles.",
"contains 4090 posts with associated videos from NowThisNews Facebook page collected between 07/2015 and 07/2016. For each post we collected its title and the number of views of the corresponding video, which we consider our popularity metric. Due to a fairly lengthy data collection process, we decided to normalize our data by first grouping posts according to their publication month and then labeling the posts for which the popularity metric exceeds the median monthly value as popular, the remaining part as unpopular.",
" BIBREF4 contains a variety of news-related information such as images, captions, geo-location information and comments which could be used as a proxy for article popularity. The articles in this dataset were collected between January and December 2014. Although we tried to retrieve the entire dataset, we were able to download only 38,182 articles due to the dead links published in the dataset. The retrieved articles were published in main news channels, such as Yahoo News, The Guardian or The Washington Post. Similarly, to The NowThisNews dataset we normalize the data by grouping articles per publisher, and classifying them as popular, when the number of comments exceeds the median value for given publisher."
],
[
"As a first baseline we use Bag-of-Words, a well-known and robust text representations used in various domains BIBREF21 , combined with a standard shallow classifier, namely, a Support Vector Machine with linear kernel. We used LIBSVM implementation of SVM.",
"Our second baseline is a deep Convectional Neural Network applied on word embeddings. This baseline represents state-of-the-art method presented in BIBREF4 with minor adjustments to the binary classification task. The architecture of the CNN benchmark we use is the following: the embedding layer transforms one-hot encoded words to their dense vector representations, followed by the convolution layer of 256 filters with width equal to 5 followed by max pooling layer (repeated three times), fully-connected layer with dropout and INLINEFORM0 regularization and finally, sigmoid activation layer. For fair comparison, both baselines were trained using the same training procedure as our method."
],
[
"As a text embedding in our experiments, we use publicly available GloVe word vectors BIBREF16 pre-trained on two datasets: Wikipedia 2014 with Gigaword5 (W+G5) and Common Crawl (CC). Since their output dimensionality can be modified, we show the results for varying dimensionality sizes. On top of that, we evaluate two training approaches: using static word vectors and fine-tuning them during training phase."
],
[
"The results of our experiments can be seen in Tab. TABREF21 and TABREF22 . Our proposed BiLSTM approach outperforms the competing methods consistently across both datasets. The performance improvement is especially visible for The NowThisNews dataset and reaches over 15% with respect to the shallow architecture in terms of of accuracy. Although the improvement with respect to the other methods based on deep neural network is less evident, the recurrent nature of our method provides much more intuitive interpretation of the results and allow for parsing the contribution of each single word to the overall score.",
"To present how our model works in practice, we show in Tab. TABREF23 a list of 3 headlines from NowThisNews dataset that are scored with the highest probability of belonging to a popular class, as well as 3 headlines with the lowest score. As can be seen, our model correctly detected videos that become viral at the same time assigning low score to content that underperformed. We believe that BiLSTM could be successfully applied in real-life scenarios."
],
[
"In this paper we present a novel approach to the problem of online article popularity prediction. To our knowledge, this is the first attempt of predicting the performance of content on social media using only textual information from its title. We show that our method consistently outperforms benchmark models. Additionally, the proposed method could not only be used to compare competing titles with regard to their estimated probability, but also to gain insights about what constitutes a good title. Future work includes modeling popularity prediction problem with multiple data modalities, such as images or videos. Furthermore, all of the evaluated models function at the word level, which could be problematic due to idiosyncratic nature of social media and Internet content. It is, therefore, worth investigating, whether combining models that operate at the character level to learn and generate vector representation of titles with visual features could improve the overall performance."
]
],
"section_name": [
"Introduction",
"Related Work",
"Method",
"Problem Formulation",
"Text Representation",
"Bidirectional Long Short-Term Memory Network",
"Hidden State Interpretation",
"Training",
"Evaluation",
"Datasets",
"Baselines",
"Embeddings",
"Results",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"4073d618697ed6e9ac55649d7eaadcc80ae443e5"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"60a07a216a2ad8dca51926d57bdf73c2c9843e6f",
"66146081f26b64f467e66e62e791f890fbbc0560"
],
"answer": [
{
"evidence": [
"Since the input of our method is textual data, we follow the approach of BIBREF15 and map the text into a fixed-size vector representation. To this end, we use word embeddings that were successfully applied in other domains. We follow BIBREF5 and use pre-trained GloVe word vectors BIBREF16 to initialize the embedding layer (also known as look-up table). Section SECREF18 discusses the embedding layer in more details."
],
"extractive_spans": [
" pre-trained GloVe word vectors "
],
"free_form_answer": "",
"highlighted_evidence": [
"Since the input of our method is textual data, we follow the approach of BIBREF15 and map the text into a fixed-size vector representation. To this end, we use word embeddings that were successfully applied in other domains. We follow BIBREF5 and use pre-trained GloVe word vectors BIBREF16 to initialize the embedding layer (also known as look-up table). Section SECREF18 discusses the embedding layer in more details."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"As a text embedding in our experiments, we use publicly available GloVe word vectors BIBREF16 pre-trained on two datasets: Wikipedia 2014 with Gigaword5 (W+G5) and Common Crawl (CC). Since their output dimensionality can be modified, we show the results for varying dimensionality sizes. On top of that, we evaluate two training approaches: using static word vectors and fine-tuning them during training phase."
],
"extractive_spans": [
"GloVe word vectors BIBREF16 pre-trained on two datasets: Wikipedia 2014 with Gigaword5 (W+G5) and Common Crawl (CC)"
],
"free_form_answer": "",
"highlighted_evidence": [
"As a text embedding in our experiments, we use publicly available GloVe word vectors BIBREF16 pre-trained on two datasets: Wikipedia 2014 with Gigaword5 (W+G5) and Common Crawl (CC). "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"3c0d42931aaae53acefbee56b67ca230244422b4"
]
},
{
"annotation_id": [
"7d4d3a5e45f844f8ec83e4d59f83ed20db1926be",
"866675dc6dfbe9c44dcd371db052457a6b7d47ca"
],
"answer": [
{
"evidence": [
"In this section, we evaluate our method and compare its performance against the competitive approaches. We use INLINEFORM0 -fold evaluation protocol with INLINEFORM1 with random dataset split. We measure the performance using standard accuracy metric which we define as a ratio between correctly classified data samples from test dataset and all test samples."
],
"extractive_spans": [
"standard accuracy metric"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this section, we evaluate our method and compare its performance against the competitive approaches. We use INLINEFORM0 -fold evaluation protocol with INLINEFORM1 with random dataset split. We measure the performance using standard accuracy metric which we define as a ratio between correctly classified data samples from test dataset and all test samples."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this paper we propose a method for online content popularity prediction based on a bidirectional recurrent neural network called BiLSTM. This work is inspired by recent successful applications of deep neural networks in many natural language processing problems BIBREF5 , BIBREF6 . Our method attempts to model complex relationships between the title of an article and its popularity using novel deep network architecture that, in contrast to the previous approaches, gives highly interpretable results. Last but not least, the proposed BiLSTM method provides a significant performance boost in terms of prediction accuracy over the standard shallow approach, while outperforming the current state-of-the-art on two distinct datasets with over 40,000 samples."
],
"extractive_spans": [
"accuracy"
],
"free_form_answer": "",
"highlighted_evidence": [
"Last but not least, the proposed BiLSTM method provides a significant performance boost in terms of prediction accuracy over the standard shallow approach, while outperforming the current state-of-the-art on two distinct datasets with over 40,000 samples."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"69ba89d3c1980a1afa93a066afb621eb612d8d14",
"fa9eeba099c0d3b7b0452630d106ff71097a0010"
],
"answer": [
{
"evidence": [
"As a first baseline we use Bag-of-Words, a well-known and robust text representations used in various domains BIBREF21 , combined with a standard shallow classifier, namely, a Support Vector Machine with linear kernel. We used LIBSVM implementation of SVM."
],
"extractive_spans": [
"SVM"
],
"free_form_answer": "",
"highlighted_evidence": [
"As a first baseline we use Bag-of-Words, a well-known and robust text representations used in various domains BIBREF21 , combined with a standard shallow classifier, namely, a Support Vector Machine with linear kernel. We used LIBSVM implementation of SVM."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"As a first baseline we use Bag-of-Words, a well-known and robust text representations used in various domains BIBREF21 , combined with a standard shallow classifier, namely, a Support Vector Machine with linear kernel. We used LIBSVM implementation of SVM."
],
"extractive_spans": [],
"free_form_answer": "SVM with linear kernel using bag-of-words features",
"highlighted_evidence": [
"As a first baseline we use Bag-of-Words, a well-known and robust text representations used in various domains BIBREF21 , combined with a standard shallow classifier, namely, a Support Vector Machine with linear kernel."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"3c0d42931aaae53acefbee56b67ca230244422b4"
]
},
{
"annotation_id": [
"79ce57787f236232adaba5931ed910db7a2a505d",
"ef822e6efb769abafe0423614b15850e986e9db1"
],
"answer": [
{
"evidence": [
"In this section we present two datasets used in our experiments: The NowThisNews dataset, collected for the purpose of this paper, and The BreakingNews dataset BIBREF4 , publicly available dataset of news articles.",
"contains 4090 posts with associated videos from NowThisNews Facebook page collected between 07/2015 and 07/2016. For each post we collected its title and the number of views of the corresponding video, which we consider our popularity metric. Due to a fairly lengthy data collection process, we decided to normalize our data by first grouping posts according to their publication month and then labeling the posts for which the popularity metric exceeds the median monthly value as popular, the remaining part as unpopular."
],
"extractive_spans": [
"NowThisNews Facebook page"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this section we present two datasets used in our experiments: The NowThisNews dataset, collected for the purpose of this paper, and The BreakingNews dataset BIBREF4 , publicly available dataset of news articles.\n\ncontains 4090 posts with associated videos from NowThisNews Facebook page collected between 07/2015 and 07/2016. For each post we collected its title and the number of views of the corresponding video, which we consider our popularity metric. Due to a fairly lengthy data collection process, we decided to normalize our data by first grouping posts according to their publication month and then labeling the posts for which the popularity metric exceeds the median monthly value as popular, the remaining part as unpopular."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this section we present two datasets used in our experiments: The NowThisNews dataset, collected for the purpose of this paper, and The BreakingNews dataset BIBREF4 , publicly available dataset of news articles.",
"contains 4090 posts with associated videos from NowThisNews Facebook page collected between 07/2015 and 07/2016. For each post we collected its title and the number of views of the corresponding video, which we consider our popularity metric. Due to a fairly lengthy data collection process, we decided to normalize our data by first grouping posts according to their publication month and then labeling the posts for which the popularity metric exceeds the median monthly value as popular, the remaining part as unpopular."
],
"extractive_spans": [
"NowThisNews Facebook page"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this section we present two datasets used in our experiments: The NowThisNews dataset, collected for the purpose of this paper, and The BreakingNews dataset BIBREF4 , publicly available dataset of news articles.",
"contains 4090 posts with associated videos from NowThisNews Facebook page collected between 07/2015 and 07/2016."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"10562160d8bcaf723be9b384ef94e9bb8f32d493",
"e3987bfaef7c4dc5a6dfd1af11ad6c9d22d01fa4"
],
"answer": [
{
"evidence": [
"BIBREF4 contains a variety of news-related information such as images, captions, geo-location information and comments which could be used as a proxy for article popularity. The articles in this dataset were collected between January and December 2014. Although we tried to retrieve the entire dataset, we were able to download only 38,182 articles due to the dead links published in the dataset. The retrieved articles were published in main news channels, such as Yahoo News, The Guardian or The Washington Post. Similarly, to The NowThisNews dataset we normalize the data by grouping articles per publisher, and classifying them as popular, when the number of comments exceeds the median value for given publisher."
],
"extractive_spans": [
"main news channels, such as Yahoo News, The Guardian or The Washington Post"
],
"free_form_answer": "",
"highlighted_evidence": [
"BIBREF4 contains a variety of news-related information such as images, captions, geo-location information and comments which could be used as a proxy for article popularity. The articles in this dataset were collected between January and December 2014. Although we tried to retrieve the entire dataset, we were able to download only 38,182 articles due to the dead links published in the dataset. The retrieved articles were published in main news channels, such as Yahoo News, The Guardian or The Washington Post. Similarly, to The NowThisNews dataset we normalize the data by grouping articles per publisher, and classifying them as popular, when the number of comments exceeds the median value for given publisher."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this section we present two datasets used in our experiments: The NowThisNews dataset, collected for the purpose of this paper, and The BreakingNews dataset BIBREF4 , publicly available dataset of news articles."
],
"extractive_spans": [
"The BreakingNews dataset"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this section we present two datasets used in our experiments: The NowThisNews dataset, collected for the purpose of this paper, and The BreakingNews dataset BIBREF4 , publicly available dataset of news articles."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"",
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
"",
""
],
"question": [
"What is the average length of the title text?",
"Which pretrained word vectors did they use?",
"What evaluation metrics are used?",
"Which shallow approaches did they experiment with?",
"Where do they obtain the news videos from?",
"What is the source of the news articles?"
],
"question_id": [
"252a645af9876241fb166e5822992ce17fec6eb6",
"ed67359889cf61fa11ee291d6c378cccf83d599d",
"425bd2ccfd95ead91d8f2b1b1c8ab9fc3446cb82",
"955de9f7412ba98a0c91998919fa048d339b1d48",
"3b371ea554fa6639c76a364060258454e4b931d4",
"ddb23a71113cbc092cbc158066d891cae261e2c6"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
"",
""
]
} | {
"caption": [
"Fig. 1. A bidirectional LSTM architecture with 1-of-K word encoding and embedding layer proposed in this paper.",
"Table 1. Popularity prediction results on NowThisNews dataset. Our proposed BiLSTM method provides higher performances than the competitors in terms of classification accuracy.",
"Table 2. Popularity prediction results on BreakingNews dataset. Our BiLSTM method outperforms the competitors - the performance gain is especially visible with respect to the shallow architecture of BoW + SVM.",
"Table 3. Top and bottom 3 headlines from the NowThisNews dataset as predicted by our model and their views 168 hours after publication."
],
"file": [
"4-Figure1-1.png",
"8-Table1-1.png",
"8-Table2-1.png",
"9-Table3-1.png"
]
} | [
"Which shallow approaches did they experiment with?"
] | [
[
"1707.06806-Baselines-0"
]
] | [
"SVM with linear kernel using bag-of-words features"
] | 117 |
1806.04511 | Multilingual Sentiment Analysis: An RNN-Based Framework for Limited Data | Sentiment analysis is a widely studied NLP task where the goal is to determine opinions, emotions, and evaluations of users towards a product, an entity or a service that they are reviewing. One of the biggest challenges for sentiment analysis is that it is highly language dependent. Word embeddings, sentiment lexicons, and even annotated data are language specific. Further, optimizing models for each language is very time consuming and labor intensive especially for recurrent neural network models. From a resource perspective, it is very challenging to collect data for different languages. In this paper, we look for an answer to the following research question: can a sentiment analysis model trained on a language be reused for sentiment analysis in other languages, Russian, Spanish, Turkish, and Dutch, where the data is more limited? Our goal is to build a single model in the language with the largest dataset available for the task, and reuse it for languages that have limited resources. For this purpose, we train a sentiment analysis model using recurrent neural networks with reviews in English. We then translate reviews in other languages and reuse this model to evaluate the sentiments. Experimental results show that our robust approach of single model trained on English reviews statistically significantly outperforms the baselines in several different languages. | {
"paragraphs": [
[
"With the steady growth in the commercial websites and social media venues, the access to users' reviews have become easier. As the amount of data that can be mined for opinion increased, commercial companies' interests for sentiment analysis increased as well. Sentiment analysis is an important part of understanding user behavior and opinions on products, places, or services.",
"Sentiment analysis has long been studied by the research community, leading to several sentiment-related resources such as sentiment dictionaries that can be used as features for machine learning models BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . These resources help increase sentiment analysis accuracies; however, they are highly dependent on language and require researchers to build such resources for every language to process.",
"Feature engineering is a large part of the model building phase for most sentiment analysis and emotion detection models BIBREF4 . Determining the correct set of features is a task that requires thorough investigation. Furthermore, these features are mostly language and dataset dependent making it even further challenging to build models for different languages. For example, the sentiment and emotion lexicons, as well as pre-trained word embeddings are not completely transferable to other languages which replicates the efforts for every language that users would like to build sentiment classification models on. For languages and tasks where the data is limited, extracting these features, building language models, training word embeddings, and creating lexicons are big challenges. In addition to the feature engineering effort, the machine learning models' parameters also need to be tuned separately for each language to get the optimal results.",
"In this paper, we take a different approach. We build a reusable sentiment analysis model that does not utilize any lexicons. Our goal is to evaluate how well a generic model can be used to mine opinion in different languages where data is more limited than the language where the generic model is trained on. To that end, we build a training set that contains reviews from different domains in English (e.g., movie reviews, product reviews) and train a recurrent neural network (RNN) model to predict polarity of those reviews. Then focusing on a domain, we make the model specialized in that domain by using the trained weights from the larger data and further training with data on a specific domain. To evaluate the reusability of the sentiment analysis model, we test with non-English datasets. We first translate the test set to English and use the pre-trained model to score polarity in the translated text. In this way, our proposed approach eliminates the need to train language-dependent models, use of sentiment lexicons and word embeddings for each language. Our experiments show that a generalizable sentiment analysis model can be utilized successfully to perform opinion mining for languages that do not have enough resources to train specific models.",
"The contributions of this study are; 1) a robust approach that utilizes machine translation to reuse a model trained on one language in other languages, 2) an RNN-based approach to eliminate feature extraction as well as resource requirements for sentiment analysis, and 3) a technique that statistically significantly outperforms baselines for multilingual sentiment analysis task when data is limited. To the best of our knowledge, this study is the first to apply a deep learning model to the multilingual sentiment analysis task."
],
[
"There is a rich body of work in sentiment analysis including social media platforms such as Twitter BIBREF5 and Facebook BIBREF4 . One common factor in most of the sentiment analysis work is that features that are specific to sentiment analysis are extracted (e.g., sentiment lexicons) and used in different machine learning models. Lexical resources BIBREF0 , BIBREF1 , BIBREF4 for sentiment analysis such as SentiWordNet BIBREF6 , BIBREF7 , linguistic features and expressions BIBREF8 , polarity dictionaries BIBREF2 , BIBREF3 , other features such as topic-oriented features and syntax BIBREF9 , emotion tokens BIBREF10 , word vectors BIBREF11 , and emographics BIBREF12 are some of the information that are found useful for improving sentiment analysis accuracies. Although these features are beneficial, extracting them requires language-dependent data (e.g., a sentiment dictionary for Spanish is trained on Spanish data instead of using all data from different languages).",
"Our goal in this work is to streamline the feature engineering phase by not relying on any dictionary other than English word embeddings that are trained on any data (i.e. not necessarily sentiment analysis corpus). To that end, we utilize off-the-shelf machine translation tools to first translate corpora to the language where more training data is available and use the translated corpora to do inference on.",
"Machine translation for multilingual sentiment analysis has also seen attention from researchers. Hiroshi et al. BIBREF13 translated only sentiment units with a pattern-based approach. Balahur and Turchi BIBREF14 used uni-grams, bi-grams and tf-idf features for building support vector machines on translated text. Boyd-Graber and Resnik BIBREF15 built Latent Dirichlet Allocation models to investigate how multilingual concepts are clustered into topics. Mohammed et al. BIBREF16 translate Twitter posts to English as well as the English sentiment lexicons. Tellez et al. BIBREF17 propose a framework where language-dependent and independent features are used with an SVM classifier. These machine learning approaches also require a feature extraction phase where we eliminate by incorporating a deep learning approach that does the feature learning intrinsically. Further, Wan BIBREF18 uses an ensemble approach where the resources (e.g., lexicons) in both the original language and the translated language are used – requiring resources to be present in both languages. Brooke et al. BIBREF19 also use multiple dictionaries.",
"In this paper, we address the resource bottleneck of these translation-based approaches and propose a deep learning approach that does not require any dictionaries."
],
[
"In order to eliminate the need to find data and build separate models for each language, we propose a multilingual approach where a single model is built in the language where the largest resources are available. In this paper we focus on English as there are several sentiment analysis datasets in English. To make the English sentiment analysis model as generalizable as possible, we first start by training with a large dataset that has product reviews for different categories. Then, using the trained weights from the larger generic dataset, we make the model more specialized for a specific domain. We further train the model with domain-specific English reviews and use this trained model to score reviews that share the same domain from different languages. To be able to employ the trained model, test sets are first translated to English via machine translation and then inference takes place. Figure FIGREF1 shows our multilingual sentiment analysis approach. It is important to note that this approach does not utilize any resource in any of the languages of the test sets (e.g., word embeddings, lexicons, training set).",
"Deep learning approaches have been successful in many applications ranging from computer vision to natural language processing BIBREF20 . Recurrent neural network (RNN) including Long Short Term Memory (LSTM) and Gated Recurrent Units (GRU) are subsets of deep learning algorithms where the dependencies between tokens can be used by the model. These models can also be used with variable length input vectors which makes them suitable for text input. LSTM and GRU models allow operations of sequences of vectors over time and have the capability to `remember' previous information BIBREF20 . RNN have been found useful for several natural language processing tasks including language modeling, text classification, machine translation. RNN can also utilize pre-trained word embeddings (numeric vector representations of words trained on unlabeled data) without requiring hand-crafted features. Therefore in this paper, we employ an RNN architecture that takes text and pre-trained word embeddings as inputs and generates a classification result. Word embeddings represent words as numeric vectors and capture semantic information. They are trained in an unsupervised fashion making it useful for our task.",
"The sentiment analysis model that is trained on English reviews has two bidirectional layers, each with 40 neurons and a dropout BIBREF21 of 0.2 is used. The training phase takes pre-trained word embeddings and reviews in textual format, then predicts the polarity of the reviews. For this study, an embedding length of 100 is used (i.e., each word is represented by a vector of length 100). We utilized pre-trained global vectors BIBREF22 . The training phase is depicted in Figure FIGREF2 ."
],
[
"To evaluate the proposed approach for multilingual sentiment analysis task, we conducted experiments. This section first presents the corpora used in this study followed by experimental results.",
"Throughout our experiments, we use SAS Deep Learning Toolkit. For machine translation, Google translation API is used."
],
[
"Two sets of corpora are used in this study, both are publicly available. The first set consists of English reviews and the second set contains restaurant reviews from four different languages (Spanish, Turkish, Dutch, Russian). We focus on polarity detection in reviews, therefore all datasets in this study have two class values (positive, negative).",
"With the goal of building a generalizable sentiment analysis model, we used three different training sets as provided in Table TABREF5 . One of these three datasets (Amazon reviews BIBREF23 , BIBREF24 ) is larger and has product reviews from several different categories including book reviews, electronics products reviews, and application reviews. The other two datasets are to make the model more specialized in the domain. In this paper we focus on restaurant reviews as our domain and use Yelp restaurant reviews dataset extracted from Yelp Dataset Challenge BIBREF25 and restaurant reviews dataset as part of a Kaggle competition BIBREF26 .",
"For evaluation of the multilingual approach, we use four languages. These datasets are part of SemEval-2016 Challenge Task 5 BIBREF27 , BIBREF28 . Table TABREF7 shows the number of observations in each test corpus."
],
[
"For experimental results, we report majority baseline for each language where the majority baseline corresponds to a model's accuracy if it always predicts the majority class in the dataset. For example, if the dataset has 60% of all reviews positive and 40% negative, majority baseline would be 60% because a model that always predicts “positive” will be 60% accurate and will make mistakes 40% of the time.",
"In addition to the majority baseline, we also compare our results with a lexicon-based approach. We use SentiWordNet BIBREF29 to obtain a positive and a negative sentiment score for each token in a review. Then sum of positive sentiment scores and negative sentiment scores for each review is obtained by summing up the scores for each token. If the positive sum score for a given review is greater than the negative sum score, we accept that review as a positive review. If negative sum is larger than or equal to the positive sum, the review is labeled as a negative review.",
"RNN outperforms both baselines in all four datasets (see Table TABREF9 ). Also for Spanish restaurant review, the lexicon-based baseline is below the majority baseline which shows that solely translating data and using lexicons is not sufficient to achieve good results in multilingual sentiment analysis.",
"Among the wrong classifications for each test set, we calculated the percentage of false positives and false negatives. Table TABREF10 shows the distribution of false positives and false negatives for each class. In all four classes, the number of false negatives are more than the number of false positives. This can be explained by the unbalanced training dataset where the number of positive reviews are more than the number of negative reviews (59,577 vs 17,132).",
"To be able to see the difference between baseline and RNN, we took each method's results as a group (4 values: one for each language) and compared the means. Post hoc comparisons using the Tukey HSD test indicated that the mean accuracies for baselines (majority and lexicon-based) are significantly different than RNN accuracies as can be seen in Table TABREF12 (family-wise error rate=0.06). When RNN is compared with lexicon-based baseline and majority baseline, the null hypothesis can be rejected meaning that each test is significant. In addition to these comparisons, we also calculated the effect sizes (using Cohen's d) between the baselines and our method. The results are aligning with Tukey HSD results such that while our method versus baselines have very large effect sizes, lexicon-based baseline and majority baseline have negligible effect size.",
"Figure FIGREF11 shows the differences in minimum and maximum values of all three approaches. As the figure shows, RNN significantly outperforms both baselines for the sentiment classification task."
],
[
"One of the crucial elements while using machine translation is to have highly accurate translations. It is likely that non-English words would not have word embeddings, which will dramatically affect the effectiveness of the system. We analyzed the effect of incorrect translations into our approach. To that end, we extracted all wrong predictions from the test set and computed the ratio of misclassifications that have non-English words in them. We first extracted all misclassifications for a given language and for each observation in the misclassification set, we iterated through each token to check if the token is in English. In this way, we counted the number of observations that contained at least one non-English word and divided that with the size of the misclassifications set. We used this ratio to investigate the effect of machine translation errors.",
"We found that 25.84% of Dutch, 21.76% of Turkish, 24.46% Spanish, and 10.71% of Russian reviews that were misclassified had non-English words in them. These non-English words might be causing the misclassifications. However, a large portion of the missclassifications is not caused due to not-translated words. At the end, the machine translation errors has some but not noticeable effects on our model. Therefore, we can claim that machine translation preserves most of the information necessary for sentiment analysis.",
"We also evaluated our model with an English corpus BIBREF27 to see its performance without any interference from machine translation errors. Using the English data for testing, the model achieved 87.06% accuracy where a majority baseline was 68.37% and the lexicon-based baseline was 60.10%.",
"Considering the improvements over the majority baseline achieved by the RNN model for both non-English (on the average 22.76% relative improvement; 15.82% relative improvement on Spanish, 72.71% vs. 84.21%, 30.53% relative improvement on Turkish, 56.97% vs. 74.36%, 37.13% relative improvement on Dutch, 59.63% vs. 81.77%, and 7.55% relative improvement on Russian, 79.60% vs. 85.62%) and English test sets (27.34% relative improvement), we can draw the conclusion that our model is robust to handle multiple languages. Building separate models for each language requires both labeled and unlabeled data. Even though having lots of labeled data in every language is the perfect case, it is unrealistic. Therefore, eliminating the resource requirement in this resource-constrained task is crucial. The fact that machine translation can be used in reusing models from different languages is promising for reducing the data requirements."
],
[
"Building effective machine learning models for text requires data and different resources such as pre-trained word embeddings and reusable lexicons. Unfortunately, most of these resources are not entirely transferable to different domains, tasks or languages. Sentiment analysis is one such task that requires additional effort to transfer knowledge between languages.",
"In this paper, we studied the research question: Can we build reusable sentiment analysis models that can be utilized for making inferences in different languages without requiring separate models and resources for each language? To that end, we built a recurrent neural network model in the language that had largest data available. We took a general-to-specific model building strategy where the larger corpus that had reviews from different domains was first used to train the RNN model and a smaller single-domain corpus of sentiment reviews was used to specialize the model on the given domain. During scoring time, we used corpora for the given domain in different languages and translated them to English to be able to classify sentiments with the trained model. Experimental results showed that the proposed multilingual approach outperforms both the majority baseline and the lexicon-based baseline.",
"In this paper we made the sentiment analysis model specific to a single domain. For future work, we would like to investigate the effectiveness of our model on different review domains including hotel reviews and on different problems such as detecting stance."
]
],
"section_name": [
"Introduction",
"Related Work",
"Methodology",
"Experiments",
"Corpora",
"Experimental Results",
"Discussion",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"2b2fe9e33cc5cb59f0f9504872a58e5e3828086e",
"d22c8180650a20e95c8b19eb364ff3224bb4d567"
],
"answer": [
{
"evidence": [
"Considering the improvements over the majority baseline achieved by the RNN model for both non-English (on the average 22.76% relative improvement; 15.82% relative improvement on Spanish, 72.71% vs. 84.21%, 30.53% relative improvement on Turkish, 56.97% vs. 74.36%, 37.13% relative improvement on Dutch, 59.63% vs. 81.77%, and 7.55% relative improvement on Russian, 79.60% vs. 85.62%) and English test sets (27.34% relative improvement), we can draw the conclusion that our model is robust to handle multiple languages. Building separate models for each language requires both labeled and unlabeled data. Even though having lots of labeled data in every language is the perfect case, it is unrealistic. Therefore, eliminating the resource requirement in this resource-constrained task is crucial. The fact that machine translation can be used in reusing models from different languages is promising for reducing the data requirements."
],
"extractive_spans": [
"Russian"
],
"free_form_answer": "",
"highlighted_evidence": [
"Considering the improvements over the majority baseline achieved by the RNN model for both non-English (on the average 22.76% relative improvement; 15.82% relative improvement on Spanish, 72.71% vs. 84.21%, 30.53% relative improvement on Turkish, 56.97% vs. 74.36%, 37.13% relative improvement on Dutch, 59.63% vs. 81.77%, and 7.55% relative improvement on Russian, 79.60% vs. 85.62%) and English test sets (27.34% relative improvement), we can draw the conclusion that our model is robust to handle multiple languages."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 3: Accuracy results (%) for RNN-based approach compared with majority baseline and lexicon-based baseline."
],
"extractive_spans": [],
"free_form_answer": "Russsian",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: Accuracy results (%) for RNN-based approach compared with majority baseline and lexicon-based baseline."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"0e6e41b3faecd132893b3ec7a3e3972b613a2c90"
],
"answer": [
{
"evidence": [
"Considering the improvements over the majority baseline achieved by the RNN model for both non-English (on the average 22.76% relative improvement; 15.82% relative improvement on Spanish, 72.71% vs. 84.21%, 30.53% relative improvement on Turkish, 56.97% vs. 74.36%, 37.13% relative improvement on Dutch, 59.63% vs. 81.77%, and 7.55% relative improvement on Russian, 79.60% vs. 85.62%) and English test sets (27.34% relative improvement), we can draw the conclusion that our model is robust to handle multiple languages. Building separate models for each language requires both labeled and unlabeled data. Even though having lots of labeled data in every language is the perfect case, it is unrealistic. Therefore, eliminating the resource requirement in this resource-constrained task is crucial. The fact that machine translation can be used in reusing models from different languages is promising for reducing the data requirements."
],
"extractive_spans": [
"Turkish"
],
"free_form_answer": "",
"highlighted_evidence": [
"Considering the improvements over the majority baseline achieved by the RNN model for both non-English (on the average 22.76% relative improvement; 15.82% relative improvement on Spanish, 72.71% vs. 84.21%, 30.53% relative improvement on Turkish, 56.97% vs. 74.36%, 37.13% relative improvement on Dutch, 59.63% vs. 81.77%, and 7.55% relative improvement on Russian, 79.60% vs. 85.62%) and English test sets (27.34% relative improvement), we can draw the conclusion that our model is robust to handle multiple languages."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"2bb929accb1fccc2be5c8903d9d02a7160bdf61a",
"34575f1d175be1c68aa5312e497e8792cb3a75e1"
],
"answer": [
{
"evidence": [
"For evaluation of the multilingual approach, we use four languages. These datasets are part of SemEval-2016 Challenge Task 5 BIBREF27 , BIBREF28 . Table TABREF7 shows the number of observations in each test corpus."
],
"extractive_spans": [
"SemEval-2016 Challenge Task 5 BIBREF27 , BIBREF28"
],
"free_form_answer": "",
"highlighted_evidence": [
"These datasets are part of SemEval-2016 Challenge Task 5 BIBREF27 , BIBREF28 . Table TABREF7 shows the number of observations in each test corpus."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Two sets of corpora are used in this study, both are publicly available. The first set consists of English reviews and the second set contains restaurant reviews from four different languages (Spanish, Turkish, Dutch, Russian). We focus on polarity detection in reviews, therefore all datasets in this study have two class values (positive, negative)."
],
"extractive_spans": [
" English reviews ",
" restaurant reviews from four different languages (Spanish, Turkish, Dutch, Russian)"
],
"free_form_answer": "",
"highlighted_evidence": [
"Two sets of corpora are used in this study, both are publicly available. The first set consists of English reviews and the second set contains restaurant reviews from four different languages (Spanish, Turkish, Dutch, Russian)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"11cfaadad843b87f7fb0c872103c52ddb35d143f",
"f70ac58596c06b4d0ddfc57c1170b03902897ee3"
],
"answer": [
{
"evidence": [
"In addition to the majority baseline, we also compare our results with a lexicon-based approach. We use SentiWordNet BIBREF29 to obtain a positive and a negative sentiment score for each token in a review. Then sum of positive sentiment scores and negative sentiment scores for each review is obtained by summing up the scores for each token. If the positive sum score for a given review is greater than the negative sum score, we accept that review as a positive review. If negative sum is larger than or equal to the positive sum, the review is labeled as a negative review.",
"For experimental results, we report majority baseline for each language where the majority baseline corresponds to a model's accuracy if it always predicts the majority class in the dataset. For example, if the dataset has 60% of all reviews positive and 40% negative, majority baseline would be 60% because a model that always predicts “positive” will be 60% accurate and will make mistakes 40% of the time."
],
"extractive_spans": [
"majority baseline",
"lexicon-based approach"
],
"free_form_answer": "",
"highlighted_evidence": [
"In addition to the majority baseline, we also compare our results with a lexicon-based approach.",
"For experimental results, we report majority baseline for each language where the majority baseline corresponds to a model's accuracy if it always predicts the majority class in the dataset."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For experimental results, we report majority baseline for each language where the majority baseline corresponds to a model's accuracy if it always predicts the majority class in the dataset. For example, if the dataset has 60% of all reviews positive and 40% negative, majority baseline would be 60% because a model that always predicts “positive” will be 60% accurate and will make mistakes 40% of the time.",
"In addition to the majority baseline, we also compare our results with a lexicon-based approach. We use SentiWordNet BIBREF29 to obtain a positive and a negative sentiment score for each token in a review. Then sum of positive sentiment scores and negative sentiment scores for each review is obtained by summing up the scores for each token. If the positive sum score for a given review is greater than the negative sum score, we accept that review as a positive review. If negative sum is larger than or equal to the positive sum, the review is labeled as a negative review."
],
"extractive_spans": [
"majority baseline corresponds to a model's accuracy if it always predicts the majority class in the dataset",
"lexicon-based approach"
],
"free_form_answer": "",
"highlighted_evidence": [
"For experimental results, we report majority baseline for each language where the majority baseline corresponds to a model's accuracy if it always predicts the majority class in the dataset. For example, if the dataset has 60% of all reviews positive and 40% negative, majority baseline would be 60% because a model that always predicts “positive” will be 60% accurate and will make mistakes 40% of the time.\n\nIn addition to the majority baseline, we also compare our results with a lexicon-based approach. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"b39ff9b1051500dac6197ca9535ec3182267c88b",
"dccccc28dadf1497243498e6b66dedc901795d03"
],
"answer": [
{
"evidence": [
"In order to eliminate the need to find data and build separate models for each language, we propose a multilingual approach where a single model is built in the language where the largest resources are available. In this paper we focus on English as there are several sentiment analysis datasets in English. To make the English sentiment analysis model as generalizable as possible, we first start by training with a large dataset that has product reviews for different categories. Then, using the trained weights from the larger generic dataset, we make the model more specialized for a specific domain. We further train the model with domain-specific English reviews and use this trained model to score reviews that share the same domain from different languages. To be able to employ the trained model, test sets are first translated to English via machine translation and then inference takes place. Figure FIGREF1 shows our multilingual sentiment analysis approach. It is important to note that this approach does not utilize any resource in any of the languages of the test sets (e.g., word embeddings, lexicons, training set).",
"Throughout our experiments, we use SAS Deep Learning Toolkit. For machine translation, Google translation API is used."
],
"extractive_spans": [],
"free_form_answer": "Using Google translation API.",
"highlighted_evidence": [
" To be able to employ the trained model, test sets are first translated to English via machine translation and then inference takes place. ",
" For machine translation, Google translation API is used."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Throughout our experiments, we use SAS Deep Learning Toolkit. For machine translation, Google translation API is used."
],
"extractive_spans": [
"Google translation API"
],
"free_form_answer": "",
"highlighted_evidence": [
"For machine translation, Google translation API is used."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"86952000b285e7f0a6a4b8bd22314fdcef431a52",
"bdd840844b92e11905eeef5524fc529e2bf7fb47"
],
"answer": [
{
"evidence": [
"With the goal of building a generalizable sentiment analysis model, we used three different training sets as provided in Table TABREF5 . One of these three datasets (Amazon reviews BIBREF23 , BIBREF24 ) is larger and has product reviews from several different categories including book reviews, electronics products reviews, and application reviews. The other two datasets are to make the model more specialized in the domain. In this paper we focus on restaurant reviews as our domain and use Yelp restaurant reviews dataset extracted from Yelp Dataset Challenge BIBREF25 and restaurant reviews dataset as part of a Kaggle competition BIBREF26 ."
],
"extractive_spans": [
"Amazon reviews",
"Yelp restaurant reviews",
"restaurant reviews"
],
"free_form_answer": "",
"highlighted_evidence": [
"With the goal of building a generalizable sentiment analysis model, we used three different training sets as provided in Table TABREF5 . One of these three datasets (Amazon reviews BIBREF23 , BIBREF24 ) is larger and has product reviews from several different categories including book reviews, electronics products reviews, and application reviews. The other two datasets are to make the model more specialized in the domain. In this paper we focus on restaurant reviews as our domain and use Yelp restaurant reviews dataset extracted from Yelp Dataset Challenge BIBREF25 and restaurant reviews dataset as part of a Kaggle competition BIBREF26 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"With the goal of building a generalizable sentiment analysis model, we used three different training sets as provided in Table TABREF5 . One of these three datasets (Amazon reviews BIBREF23 , BIBREF24 ) is larger and has product reviews from several different categories including book reviews, electronics products reviews, and application reviews. The other two datasets are to make the model more specialized in the domain. In this paper we focus on restaurant reviews as our domain and use Yelp restaurant reviews dataset extracted from Yelp Dataset Challenge BIBREF25 and restaurant reviews dataset as part of a Kaggle competition BIBREF26 ."
],
"extractive_spans": [
"Amazon reviews BIBREF23 , BIBREF24",
"Yelp restaurant reviews dataset",
" restaurant reviews dataset as part of a Kaggle competition BIBREF26"
],
"free_form_answer": "",
"highlighted_evidence": [
"With the goal of building a generalizable sentiment analysis model, we used three different training sets as provided in Table TABREF5 . One of these three datasets (Amazon reviews BIBREF23 , BIBREF24 ) is larger and has product reviews from several different categories including book reviews, electronics products reviews, and application reviews. The other two datasets are to make the model more specialized in the domain. In this paper we focus on restaurant reviews as our domain and use Yelp restaurant reviews dataset extracted from Yelp Dataset Challenge BIBREF25 and restaurant reviews dataset as part of a Kaggle competition BIBREF26 ."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"",
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
"",
""
],
"question": [
"which non-english language had the best performance?",
"which non-english language was the had the worst results?",
"what datasets were used in evaluation?",
"what are the baselines?",
"how did the authors translate the reviews to other languages?",
"what dataset was used for training?"
],
"question_id": [
"e79a5b6b6680bd2f63e9f4adbaae1d7795d81e38",
"c7486d039304ca9d50d0571236429f4f6fbcfcf7",
"f1f1dcc67b3e4d554bfeb508226cdadb3c32d2e9",
"a103636c8d1dbfa53341133aeb751ffec269415c",
"55139fcfe04ce90aad407e2e5a0067a45f31e07e",
"fbaf060004f196a286fef67593d2d76826f0304e"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: Multilingual sentiment analysis approach.",
"Table 2: Datasets used for testing.",
"Figure 2: Training sentiment analysis model with RNN.",
"Table 3: Accuracy results (%) for RNN-based approach compared with majority baseline and lexicon-based baseline.",
"Table 1: Datasets used for training.",
"Table 4: Percentage of false positives and false negatives of wrong classi cations.",
"Figure 3: Multiple comparisons between majority baseline, lexicon-based baseline and RNN.",
"Table 5: Multiple comparison of means."
],
"file": [
"2-Figure1-1.png",
"3-Table2-1.png",
"3-Figure2-1.png",
"3-Table3-1.png",
"3-Table1-1.png",
"3-Table4-1.png",
"4-Figure3-1.png",
"4-Table5-1.png"
]
} | [
"which non-english language had the best performance?",
"how did the authors translate the reviews to other languages?"
] | [
[
"1806.04511-3-Table3-1.png",
"1806.04511-Discussion-3"
],
[
"1806.04511-Experiments-1",
"1806.04511-Methodology-0"
]
] | [
"Russsian",
"Using Google translation API."
] | 118 |
1904.04358 | Deep Learning the EEG Manifold for Phonological Categorization from Active Thoughts | Speech-related Brain Computer Interfaces (BCI) aim primarily at finding an alternative vocal communication pathway for people with speaking disabilities. As a step towards full decoding of imagined speech from active thoughts, we present a BCI system for subject-independent classification of phonological categories exploiting a novel deep learning based hierarchical feature extraction scheme. To better capture the complex representation of high-dimensional electroencephalography (EEG) data, we compute the joint variability of EEG electrodes into a channel cross-covariance matrix. We then extract the spatio-temporal information encoded within the matrix using a mixed deep neural network strategy. Our model framework is composed of a convolutional neural network (CNN), a long-short term network (LSTM), and a deep autoencoder. We train the individual networks hierarchically, feeding their combined outputs in a final gradient boosting classification step. Our best models achieve an average accuracy of 77.9% across five different binary classification tasks, providing a significant 22.5% improvement over previous methods. As we also show visually, our work demonstrates that the speech imagery EEG possesses significant discriminative information about the intended articulatory movements responsible for natural speech synthesis. | {
"paragraphs": [
[
"Decoding intended speech or motor activity from brain signals is one of the major research areas in Brain Computer Interface (BCI) systems BIBREF0 , BIBREF1 . In particular, speech-related BCI technologies attempt to provide effective vocal communication strategies for controlling external devices through speech commands interpreted from brain signals BIBREF2 . Not only do they provide neuro-prosthetic help for people with speaking disabilities and neuro-muscular disorders like locked-in-syndrome, nasopharyngeal cancer, and amytotropic lateral sclerosis (ALS), but also equip people with a better medium to communicate and express thoughts, thereby improving the quality of rehabilitation and clinical neurology BIBREF3 , BIBREF4 . Such devices also have applications in entertainment, preventive treatments, personal communication, games, etc. Furthermore, BCI technologies can be utilized in silent communication, as in noisy environments, or situations where any sort of audio-visual communication is infeasible.",
"Among the various brain activity-monitoring modalities in BCI, electroencephalography (EEG) BIBREF5 , BIBREF6 has demonstrated promising potential to differentiate between various brain activities through measurement of related electric fields. EEG is non-invasive, portable, low cost, and provides satisfactory temporal resolution. This makes EEG suitable to realize BCI systems. EEG data, however, is challenging: these data are high dimensional, have poor SNR, and suffer from low spatial resolution and a multitude of artifacts. For these reasons, it is not particularly obvious how to decode the desired information from raw EEG signals. Although the area of BCI based speech intent recognition has received increasing attention among the research community in the past few years, most research has focused on classification of individual speech categories in terms of discrete vowels, phonemes and words BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 . This includes categorization of imagined EEG signal into binary vowel categories like /a/, /u/ and rest BIBREF7 , BIBREF8 , BIBREF9 ; binary syllable classes like /ba/ and /ku/ BIBREF1 , BIBREF10 , BIBREF11 , BIBREF12 ; a handful of control words like 'up', 'down', 'left', 'right' and 'select' BIBREF15 or others like 'water', 'help', 'thanks', 'food', 'stop' BIBREF13 , Chinese characters BIBREF14 , etc. Such works mostly involve traditional signal processing or manual feature handcrafting along with linear classifiers (e.g., SVMs). In our recent work BIBREF16 , we introduced deep learning models for classification of vowels and words that achieved 23.45% improvement of accuracy over the baseline.",
"Production of articulatory speech is an extremely complicated process, thereby rendering understanding of the discriminative EEG manifold corresponding to imagined speech highly challenging. As a result, most of the existing approaches failed to achieve satisfactory accuracy on decoding speech tokens from the speech imagery EEG data. Perhaps, for these reasons, very little work has been devoted to relating the brain signals to the underlying articulation. The few exceptions include BIBREF17 , BIBREF18 . In BIBREF17 , Zhao et al. used manually handcrafted features from EEG data, combined with speech audio and facial features to achieve classification of the phonological categories varying based on the articulatory steps. However, the imagined speech classification accuracy based on EEG data alone, as reported in BIBREF17 , BIBREF18 , are not satisfactory in terms of accuracy and reliability. We now turn to describing our proposed models."
],
[
"Cognitive learning process underlying articulatory speech production involves incorporation of intermediate feedback loops and utilization of past information stored in the form of memory as well as hierarchical combination of several feature extractors. To this end, we develop our mixed neural network architecture composed of three supervised and a single unsupervised learning step, discussed in the next subsections and shown in Fig. FIGREF1 . We formulate the problem of categorizing EEG data based on speech imagery as a non-linear mapping INLINEFORM0 of a multivariate time-series input sequence INLINEFORM1 to fixed output INLINEFORM2 , i.e, mathematically INLINEFORM3 : INLINEFORM4 , where c and t denote the EEG channels and time instants respectively."
],
[
"We follow similar pre-processing steps on raw EEG data as reported in BIBREF17 (ocular artifact removal using blind source separation, bandpass filtering and subtracting mean value from each channel) except that we do not perform Laplacian filtering step since such high-pass filtering may decrease information content from the signals in the selected bandwidth."
],
[
"Multichannel EEG data is high dimensional multivariate time series data whose dimensionality depends on the number of electrodes. It is a major hurdle to optimally encode information from these EEG data into lower dimensional space. In fact, our investigation based on a development set (as we explain later) showed that well-known deep neural networks (e.g., fully connected networks such as convolutional neural networks, recurrent neural networks and autoencoders) fail to individually learn such complex feature representations from single-trial EEG data. Besides, we found that instead of using the raw multi-channel high-dimensional EEG requiring large training times and resource requirements, it is advantageous to first reduce its dimensionality by capturing the information transfer among the electrodes. Instead of the conventional approach of selecting a handful of channels as BIBREF17 , BIBREF18 , we address this by computing the channel cross-covariance, resulting in positive, semi-definite matrices encoding the connectivity of the electrodes. We define channel cross-covariance (CCV) between any two electrodes INLINEFORM0 and INLINEFORM1 as: INLINEFORM2 . Next, we reject the channels which have significantly lower cross-covariance than auto-covariance values (where auto-covariance implies CCV on same electrode). We found this measure to be essential as the higher cognitive processes underlying speech planning and synthesis involve frequent information exchange between different parts of the brain. Hence, such matrices often contain more discriminative features and hidden information than mere raw signals. This is essentially different than our previous work BIBREF16 where we extract per-channel 1-D covariance information and feed it to the networks. We present our sample 2-D EEG cross-covariance matrices (of two individuals) in Fig. FIGREF2 ."
],
[
"In order to decode spatial connections between the electrodes from the channel covariance matrix, we use a CNN BIBREF19 , in particular a four-layered 2D CNN stacking two convolutional and two fully connected hidden layers. The INLINEFORM0 feature map at a given CNN layer with input INLINEFORM1 , weight matrix INLINEFORM2 and bias INLINEFORM3 is obtained as: INLINEFORM4 . At this first level of hierarchy, the network is trained with the corresponding labels as target outputs, optimizing a cross-entropy cost function. In parallel, we apply a four-layered recurrent neural network on the channel covariance matrices to explore the hidden temporal features of the electrodes. Namely, we exploit an LSTM BIBREF20 consisting of two fully connected hidden layers, stacked with two LSTM layers and trained in a similar manner as CNN."
],
[
"As we found the individually-trained parallel networks (CNN and LSTM) to be useful (see Table TABREF12 ), we suspected the combination of these two networks could provide a more powerful discriminative spatial and temporal representation of the data than each independent network. As such, we concatenate the last fully-connected layer from the CNN with its counterpart in the LSTM to compose a single feature vector based on these two penultimate layers. Ultimately, this forms a joint spatio-temporal encoding of the cross-covariance matrix.",
"In order to further reduce the dimensionality of the spatio-temporal encodings and cancel background noise effects BIBREF21 , we train an unsupervised deep autoenoder (DAE) on the fused heterogeneous features produced by the combined CNN and LSTM information. The DAE forms our second level of hierarchy, with 3 encoding and 3 decoding layers, and mean squared error (MSE) as the cost function."
],
[
"At the third level of hierarchy, the discrete latent vector representation of the deep autoencoder is fed into an Extreme Gradient Boost based classification layer BIBREF22 , BIBREF23 motivated by BIBREF21 . It is a regularized gradient boosted decision tree that performs well on structured problems. Since our EEG-phonological pairwise classification has an internal structure involving individual phonemes and words, it seems to be a reasonable choice of classifier. The classifier receives its input from the latent vectors of the deep autoencoder and is trained in a supervised manner to output the final predicted classes corresponding to the speech imagery."
],
[
"We evaluate our model on a publicly available dataset, KARA ONE BIBREF17 , composed of multimodal data for stimulus-based, imagined and articulated speech state corresponding to 7 phonemic/syllabic ( /iy/, /piy/, /tiy/, /diy/, /uw/, /m/, /n/ ) as well as 4 words(pat, pot, knew and gnaw). The dataset consists of 14 participants, with each prompt presented 11 times to each individual. Since our intention is to classify the phonological categories from human thoughts, we discard the facial and audio information and only consider the EEG data corresponding to imagined speech. It is noteworthy that given the mixed nature of EEG signals, it is reportedly challenging to attain a pairwise EEG-phoneme mapping BIBREF18 . In order to explore the problem space, we thus specifically target five binary classification problems addressed in BIBREF17 , BIBREF18 , i.e presence/absence of consonants, phonemic nasal, bilabial, high-front vowels and high-back vowels."
],
[
"We performed two sets of experiments with the single-trial EEG data. In PHASE-ONE, our goals was to identify the best architectures and hyperparameters for our networks with a reasonable number of runs. For PHASE-ONE, we randomly shuffled and divided the data (1913 signals from 14 individuals) into train (80%), development (10%) and test sets (10%). In PHASE-TWO, in order to perform a fair comparison with the previous methods reported on the same dataset, we perform a leave-one-subject out cross-validation experiment using the best settings we learn from PHASE-ONE.",
"The architectural parameters and hyperparameters listed in Table TABREF6 were selected through an exhaustive grid-search based on the validation set of PHASE-ONE. We conducted a series of empirical studies starting from single hidden-layered networks for each of the blocks and, based on the validation accuracy, we increased the depth of each given network and selected the optimal parametric set from all possible combinations of parameters. For the gradient boosting classification, we fixed the maximum depth at 10, number of estimators at 5000, learning rate at 0.1, regularization coefficient at 0.3, subsample ratio at 0.8, and column-sample/iteration at 0.4. We did not find any notable change of accuracy while varying other hyperparameters while training gradient boost classifier."
],
[
"To demonstrate the significance of the hierarchical CNN-LSTM-DAE method, we conducted separate experiments with the individual networks in PHASE-ONE of experiments and summarized the results in Table TABREF12 From the average accuracy scores, we observe that the mixed network performs much better than individual blocks which is in agreement with the findings in BIBREF21 . A detailed analysis on repeated runs further shows that in most of the cases, LSTM alone does not perform better than chance. CNN, on the other hand, is heavily biased towards the class label which sees more training data corresponding to it. Though the situation improves with combined CNN-LSTM, our analysis clearly shows the necessity of a better encoding scheme to utilize the combined features rather than mere concatenation of the penultimate features of both networks.",
"The very fact that our combined network improves the classification accuracy by a mean margin of 14.45% than the CNN-LSTM network indeed reveals that the autoencoder contributes towards filtering out the unrelated and noisy features from the concatenated penultimate feature set. It also proves that the combined supervised and unsupervised neural networks, trained hierarchically, can learn the discriminative manifold better than the individual networks and it is crucial for improving the classification accuracy. In addition to accuracy, we also provide the kappa coefficients BIBREF24 of our method in Fig. FIGREF14 . Here, a higher mean kappa value corresponding to a task implies that the network is able to find better discriminative information from the EEG data beyond random decisions. The maximum above-chance accuracy (75.92%) is recorded for presence/absence of the vowel task and the minimum (49.14%) is recorded for the INLINEFORM0 .",
"To further investigate the feature representation achieved by our model, we plot T-distributed Stochastic Neighbor Embedding (tSNE) corresponding to INLINEFORM0 and V/C classification tasks in Fig. FIGREF8 . We particularly select these two tasks as our model exhibits respectively minimum and maximum performance for these two. The tSNE visualization reveals that the second set of features are more easily separable than the first one, thereby giving a rationale for our performance.",
"Next, we provide performance comparison of the proposed approach with the baseline methods for PHASE-TWO of our study (cross-validation experiment) in Table TABREF15 . Since the model encounters the unseen data of a new subject for testing, and given the high inter-subject variability of the EEG data, a reduction in the accuracy was expected. However, our network still managed to achieve an improvement of 18.91, 9.95, 67.15, 2.83 and 13.70 % over BIBREF17 . Besides, our best model shows more reliability compared to previous works: The standard deviation of our model's classification accuracy across all the tasks is reduced from 22.59% BIBREF17 and 17.52% BIBREF18 to a mere 5.41%."
],
[
"In an attempt to move a step towards understanding the speech information encoded in brain signals, we developed a novel mixed deep neural network scheme for a number of binary classification tasks from speech imagery EEG data. Unlike previous approaches which mostly deal with subject-dependent classification of EEG into discrete vowel or word labels, this work investigates a subject-invariant mapping of EEG data with different phonological categories, varying widely in terms of underlying articulator motions (eg: involvement or non-involvement of lips and velum, variation of tongue movements etc). Our model takes an advantage of feature extraction capability of CNN, LSTM as well as the deep learning benefit of deep autoencoders. We took BIBREF17 , BIBREF18 as the baseline works investigating the same problem and compared our performance with theirs. Our proposed method highly outperforms the existing methods across all the five binary classification tasks by a large average margin of 22.51%."
],
[
"This work was funded by the Natural Sciences and Engineering Research Council (NSERC) of Canada and Canadian Institutes for Health Research (CIHR)."
]
],
"section_name": [
"Introduction",
"Proposed Framework",
"Preprocessing step",
"Joint variability of electrodes",
"CNN & LSTM",
"Deep autoencoder for spatio-temporal information",
"Classification with Extreme Gradient Boost",
"Dataset",
"Training and hyperparameter selection",
"Performance analysis and discussion",
"Conclusion and future direction",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"5653e01e666e6542de7df9102aa4f7ffeed12e96"
],
"answer": [
{
"evidence": [
"To further investigate the feature representation achieved by our model, we plot T-distributed Stochastic Neighbor Embedding (tSNE) corresponding to INLINEFORM0 and V/C classification tasks in Fig. FIGREF8 . We particularly select these two tasks as our model exhibits respectively minimum and maximum performance for these two. The tSNE visualization reveals that the second set of features are more easily separable than the first one, thereby giving a rationale for our performance.",
"FLOAT SELECTED: Fig. 3. tSNE feature visualization for ±nasal (left) and V/C classification (right). Red and green colours indicate the distribution of two different types of features"
],
"extractive_spans": [
"we plot T-distributed Stochastic Neighbor Embedding (tSNE) corresponding to INLINEFORM0 and V/C classification tasks in Fig. FIGREF8 ."
],
"free_form_answer": "",
"highlighted_evidence": [
"To further investigate the feature representation achieved by our model, we plot T-distributed Stochastic Neighbor Embedding (tSNE) corresponding to INLINEFORM0 and V/C classification tasks in Fig. FIGREF8 . We particularly select these two tasks as our model exhibits respectively minimum and maximum performance for these two. The tSNE visualization reveals that the second set of features are more easily separable than the first one, thereby giving a rationale for our performance.",
"FLOAT SELECTED: Fig. 3. tSNE feature visualization for ±nasal (left) and V/C classification (right). Red and green colours indicate the distribution of two different types of features"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"5aaf5c697ef418c87f89954633318d9fed2ef1cc",
"ec6b4edbdeef73c6352c5a37971964a0b2fbe16c"
],
"answer": [
{
"evidence": [
"We evaluate our model on a publicly available dataset, KARA ONE BIBREF17 , composed of multimodal data for stimulus-based, imagined and articulated speech state corresponding to 7 phonemic/syllabic ( /iy/, /piy/, /tiy/, /diy/, /uw/, /m/, /n/ ) as well as 4 words(pat, pot, knew and gnaw). The dataset consists of 14 participants, with each prompt presented 11 times to each individual. Since our intention is to classify the phonological categories from human thoughts, we discard the facial and audio information and only consider the EEG data corresponding to imagined speech. It is noteworthy that given the mixed nature of EEG signals, it is reportedly challenging to attain a pairwise EEG-phoneme mapping BIBREF18 . In order to explore the problem space, we thus specifically target five binary classification problems addressed in BIBREF17 , BIBREF18 , i.e presence/absence of consonants, phonemic nasal, bilabial, high-front vowels and high-back vowels."
],
"extractive_spans": [
" presence/absence of consonants, phonemic nasal, bilabial, high-front vowels and high-back vowels."
],
"free_form_answer": "",
"highlighted_evidence": [
" In order to explore the problem space, we thus specifically target five binary classification problems addressed in BIBREF17 , BIBREF18 , i.e presence/absence of consonants, phonemic nasal, bilabial, high-front vowels and high-back vowels."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We evaluate our model on a publicly available dataset, KARA ONE BIBREF17 , composed of multimodal data for stimulus-based, imagined and articulated speech state corresponding to 7 phonemic/syllabic ( /iy/, /piy/, /tiy/, /diy/, /uw/, /m/, /n/ ) as well as 4 words(pat, pot, knew and gnaw). The dataset consists of 14 participants, with each prompt presented 11 times to each individual. Since our intention is to classify the phonological categories from human thoughts, we discard the facial and audio information and only consider the EEG data corresponding to imagined speech. It is noteworthy that given the mixed nature of EEG signals, it is reportedly challenging to attain a pairwise EEG-phoneme mapping BIBREF18 . In order to explore the problem space, we thus specifically target five binary classification problems addressed in BIBREF17 , BIBREF18 , i.e presence/absence of consonants, phonemic nasal, bilabial, high-front vowels and high-back vowels."
],
"extractive_spans": [],
"free_form_answer": "presence/absence of consonants, presence/absence of phonemic nasal, presence/absence of bilabial, presence/absence of high-front vowels, and presence/absence of high-back vowels",
"highlighted_evidence": [
"In order to explore the problem space, we thus specifically target five binary classification problems addressed in BIBREF17 , BIBREF18 , i.e presence/absence of consonants, phonemic nasal, bilabial, high-front vowels and high-back vowels."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"1160e2d2db43330e00e762c21291704dbe6476ee",
"bae0c8f68085f310c95afab35e071659e4dbe051"
],
"answer": [
{
"evidence": [
"In order to decode spatial connections between the electrodes from the channel covariance matrix, we use a CNN BIBREF19 , in particular a four-layered 2D CNN stacking two convolutional and two fully connected hidden layers. The INLINEFORM0 feature map at a given CNN layer with input INLINEFORM1 , weight matrix INLINEFORM2 and bias INLINEFORM3 is obtained as: INLINEFORM4 . At this first level of hierarchy, the network is trained with the corresponding labels as target outputs, optimizing a cross-entropy cost function. In parallel, we apply a four-layered recurrent neural network on the channel covariance matrices to explore the hidden temporal features of the electrodes. Namely, we exploit an LSTM BIBREF20 consisting of two fully connected hidden layers, stacked with two LSTM layers and trained in a similar manner as CNN."
],
"extractive_spans": [
"we use a CNN BIBREF19 , in particular a four-layered 2D CNN stacking two convolutional and two fully connected hidden layers."
],
"free_form_answer": "",
"highlighted_evidence": [
"In order to decode spatial connections between the electrodes from the channel covariance matrix, we use a CNN BIBREF19 , in particular a four-layered 2D CNN stacking two convolutional and two fully connected hidden layers. The INLINEFORM0 feature map at a given CNN layer with input INLINEFORM1 , weight matrix INLINEFORM2 and bias INLINEFORM3 is obtained as: INLINEFORM4 . At this first level of hierarchy, the network is trained with the corresponding labels as target outputs, optimizing a cross-entropy cost function. In parallel, we apply a four-layered recurrent neural network on the channel covariance matrices to explore the hidden temporal features of the electrodes. Namely, we exploit an LSTM BIBREF20 consisting of two fully connected hidden layers, stacked with two LSTM layers and trained in a similar manner as CNN."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In order to decode spatial connections between the electrodes from the channel covariance matrix, we use a CNN BIBREF19 , in particular a four-layered 2D CNN stacking two convolutional and two fully connected hidden layers. The INLINEFORM0 feature map at a given CNN layer with input INLINEFORM1 , weight matrix INLINEFORM2 and bias INLINEFORM3 is obtained as: INLINEFORM4 . At this first level of hierarchy, the network is trained with the corresponding labels as target outputs, optimizing a cross-entropy cost function. In parallel, we apply a four-layered recurrent neural network on the channel covariance matrices to explore the hidden temporal features of the electrodes. Namely, we exploit an LSTM BIBREF20 consisting of two fully connected hidden layers, stacked with two LSTM layers and trained in a similar manner as CNN."
],
"extractive_spans": [],
"free_form_answer": "They use four-layered 2D CNN and two fully connected hidden layers on the channel covariance matrix to compute the spatial aspect.",
"highlighted_evidence": [
"In order to decode spatial connections between the electrodes from the channel covariance matrix, we use a CNN BIBREF19 , in particular a four-layered 2D CNN stacking two convolutional and two fully connected hidden layers."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"14e904a72999de9da963f54a303a874b5b6f47ab",
"be0701277a315b150b51262e0cce2f91de050198"
],
"answer": [
{
"evidence": [
"We evaluate our model on a publicly available dataset, KARA ONE BIBREF17 , composed of multimodal data for stimulus-based, imagined and articulated speech state corresponding to 7 phonemic/syllabic ( /iy/, /piy/, /tiy/, /diy/, /uw/, /m/, /n/ ) as well as 4 words(pat, pot, knew and gnaw). The dataset consists of 14 participants, with each prompt presented 11 times to each individual. Since our intention is to classify the phonological categories from human thoughts, we discard the facial and audio information and only consider the EEG data corresponding to imagined speech. It is noteworthy that given the mixed nature of EEG signals, it is reportedly challenging to attain a pairwise EEG-phoneme mapping BIBREF18 . In order to explore the problem space, we thus specifically target five binary classification problems addressed in BIBREF17 , BIBREF18 , i.e presence/absence of consonants, phonemic nasal, bilabial, high-front vowels and high-back vowels."
],
"extractive_spans": [
"7 phonemic/syllabic ( /iy/, /piy/, /tiy/, /diy/, /uw/, /m/, /n/ ) as well as 4 words(pat, pot, knew and gnaw)"
],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluate our model on a publicly available dataset, KARA ONE BIBREF17 , composed of multimodal data for stimulus-based, imagined and articulated speech state corresponding to 7 phonemic/syllabic ( /iy/, /piy/, /tiy/, /diy/, /uw/, /m/, /n/ ) as well as 4 words(pat, pot, knew and gnaw). The dataset consists of 14 participants, with each prompt presented 11 times to each individual. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We evaluate our model on a publicly available dataset, KARA ONE BIBREF17 , composed of multimodal data for stimulus-based, imagined and articulated speech state corresponding to 7 phonemic/syllabic ( /iy/, /piy/, /tiy/, /diy/, /uw/, /m/, /n/ ) as well as 4 words(pat, pot, knew and gnaw). The dataset consists of 14 participants, with each prompt presented 11 times to each individual. Since our intention is to classify the phonological categories from human thoughts, we discard the facial and audio information and only consider the EEG data corresponding to imagined speech. It is noteworthy that given the mixed nature of EEG signals, it is reportedly challenging to attain a pairwise EEG-phoneme mapping BIBREF18 . In order to explore the problem space, we thus specifically target five binary classification problems addressed in BIBREF17 , BIBREF18 , i.e presence/absence of consonants, phonemic nasal, bilabial, high-front vowels and high-back vowels."
],
"extractive_spans": [
"KARA ONE BIBREF17 , composed of multimodal data for stimulus-based, imagined and articulated speech state corresponding to 7 phonemic/syllabic ( /iy/, /piy/, /tiy/, /diy/, /uw/, /m/, /n/ ) as well as 4 words(pat, pot, knew and gnaw)"
],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluate our model on a publicly available dataset, KARA ONE BIBREF17 , composed of multimodal data for stimulus-based, imagined and articulated speech state corresponding to 7 phonemic/syllabic ( /iy/, /piy/, /tiy/, /diy/, /uw/, /m/, /n/ ) as well as 4 words(pat, pot, knew and gnaw). The dataset consists of 14 participants, with each prompt presented 11 times to each individual. Since our intention is to classify the phonological categories from human thoughts, we discard the facial and audio information and only consider the EEG data corresponding to imagined speech. It is noteworthy that given the mixed nature of EEG signals, it is reportedly challenging to attain a pairwise EEG-phoneme mapping BIBREF18 . In order to explore the problem space, we thus specifically target five binary classification problems addressed in BIBREF17 , BIBREF18 , i.e presence/absence of consonants, phonemic nasal, bilabial, high-front vowels and high-back vowels."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"46e3d7e687351c7477f698c838f8de74b97cc116",
"8c46ed7f22ff24fbaf10735a8fedd74f31e7c6cb"
],
"answer": [
{
"evidence": [
"We performed two sets of experiments with the single-trial EEG data. In PHASE-ONE, our goals was to identify the best architectures and hyperparameters for our networks with a reasonable number of runs. For PHASE-ONE, we randomly shuffled and divided the data (1913 signals from 14 individuals) into train (80%), development (10%) and test sets (10%). In PHASE-TWO, in order to perform a fair comparison with the previous methods reported on the same dataset, we perform a leave-one-subject out cross-validation experiment using the best settings we learn from PHASE-ONE."
],
"extractive_spans": [
"1913 signals"
],
"free_form_answer": "",
"highlighted_evidence": [
" For PHASE-ONE, we randomly shuffled and divided the data (1913 signals from 14 individuals) into train (80%), development (10%) and test sets (10%). In PHASE-TWO, in order to perform a fair comparison with the previous methods reported on the same dataset, we perform a leave-one-subject out cross-validation experiment using the best settings we learn from PHASE-ONE."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"3f9aee5ab2bacd7dabf81d34d7218c8231444999",
"f1eac4cc0226f4b54073e37d6185159339e46720"
],
"answer": [
{
"evidence": [
"We evaluate our model on a publicly available dataset, KARA ONE BIBREF17 , composed of multimodal data for stimulus-based, imagined and articulated speech state corresponding to 7 phonemic/syllabic ( /iy/, /piy/, /tiy/, /diy/, /uw/, /m/, /n/ ) as well as 4 words(pat, pot, knew and gnaw). The dataset consists of 14 participants, with each prompt presented 11 times to each individual. Since our intention is to classify the phonological categories from human thoughts, we discard the facial and audio information and only consider the EEG data corresponding to imagined speech. It is noteworthy that given the mixed nature of EEG signals, it is reportedly challenging to attain a pairwise EEG-phoneme mapping BIBREF18 . In order to explore the problem space, we thus specifically target five binary classification problems addressed in BIBREF17 , BIBREF18 , i.e presence/absence of consonants, phonemic nasal, bilabial, high-front vowels and high-back vowels."
],
"extractive_spans": [
"14"
],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluate our model on a publicly available dataset, KARA ONE BIBREF17 , composed of multimodal data for stimulus-based, imagined and articulated speech state corresponding to 7 phonemic/syllabic ( /iy/, /piy/, /tiy/, /diy/, /uw/, /m/, /n/ ) as well as 4 words(pat, pot, knew and gnaw). The dataset consists of 14 participants, with each prompt presented 11 times to each individual. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We evaluate our model on a publicly available dataset, KARA ONE BIBREF17 , composed of multimodal data for stimulus-based, imagined and articulated speech state corresponding to 7 phonemic/syllabic ( /iy/, /piy/, /tiy/, /diy/, /uw/, /m/, /n/ ) as well as 4 words(pat, pot, knew and gnaw). The dataset consists of 14 participants, with each prompt presented 11 times to each individual. Since our intention is to classify the phonological categories from human thoughts, we discard the facial and audio information and only consider the EEG data corresponding to imagined speech. It is noteworthy that given the mixed nature of EEG signals, it is reportedly challenging to attain a pairwise EEG-phoneme mapping BIBREF18 . In order to explore the problem space, we thus specifically target five binary classification problems addressed in BIBREF17 , BIBREF18 , i.e presence/absence of consonants, phonemic nasal, bilabial, high-front vowels and high-back vowels."
],
"extractive_spans": [
"14 participants"
],
"free_form_answer": "",
"highlighted_evidence": [
"The dataset consists of 14 participants, with each prompt presented 11 times to each individual. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"",
"",
"",
"",
"",
""
],
"question": [
"How do they demonstrate that this type of EEG has discriminative information about the intended articulatory movements responsible for speech?",
"What are the five different binary classification tasks?",
"How was the spatial aspect of the EEG signal computed?",
"What data was presented to the subjects to elicit event-related responses?",
"How many electrodes were used on the subject in EEG sessions?",
"How many subjects does the EEG data come from?"
],
"question_id": [
"7ae38f51243cb80b16a1df14872b72a1f8a2048f",
"deb89bca0925657e0f91ab5daca78b9e548de2bd",
"9c33b340aefbc1f15b6eb6fb3e23ee615ce5b570",
"e6583c60b13b87fc37af75ffc975e7e316d4f4e0",
"c7b6e6cb997de1660fd24d31759fe6bb21c7863f",
"f9f59c171531c452bd2767dc332dc74cadee5120"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"",
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
"",
""
]
} | {
"caption": [
"Fig. 2. Cross covariance Matrices : Rows correspond to two different subjects; Columns (from left to right) correspond to sample examples for bilabial, nasal, vowel, /uw/, and /iy/.",
"Table 1. Selected parameter sets",
"Fig. 3. tSNE feature visualization for ±nasal (left) and V/C classification (right). Red and green colours indicate the distribution of two different types of features",
"Table 2. Results in accuracy on 10% test data in the first study",
"Fig. 4. Kappa coefficient values for above-chance accuracy based on Table 2",
"Table 3. Comparison of classification accuracy"
],
"file": [
"2-Figure2-1.png",
"3-Table1-1.png",
"3-Figure3-1.png",
"4-Table2-1.png",
"4-Figure4-1.png",
"4-Table3-1.png"
]
} | [
"What are the five different binary classification tasks?",
"How was the spatial aspect of the EEG signal computed?"
] | [
[
"1904.04358-Dataset-0"
],
[
"1904.04358-CNN & LSTM-0"
]
] | [
"presence/absence of consonants, presence/absence of phonemic nasal, presence/absence of bilabial, presence/absence of high-front vowels, and presence/absence of high-back vowels",
"They use four-layered 2D CNN and two fully connected hidden layers on the channel covariance matrix to compute the spatial aspect."
] | 119 |