id (string, 10) | title (string, 19-145) | abstract (string, 273-1.91k) | full_text (dict) | qas (dict) | figures_and_tables (dict) | question (sequence) | retrieval_gt (sequence) | answer_gt (sequence) | __index_level_0__ (int64, 0-887)
---|---|---|---|---|---|---|---|---|---
2003.04707 | Neuro-symbolic Architectures for Context Understanding | Computational context understanding refers to an agent's ability to fuse disparate sources of information for decision-making and is, therefore, generally regarded as a prerequisite for sophisticated machine reasoning capabilities, such as in artificial intelligence (AI). Data-driven and knowledge-driven methods are two classical techniques in the pursuit of such machine sense-making capability. However, while data-driven methods seek to model the statistical regularities of events by making observations in the real world, they remain difficult to interpret and they lack mechanisms for naturally incorporating external knowledge. Conversely, knowledge-driven methods combine structured knowledge bases, perform symbolic reasoning based on axiomatic principles, and are more interpretable in their inferential processing; however, they often lack the ability to estimate the statistical salience of an inference. To combat these issues, we propose the use of hybrid AI methodology as a general framework for combining the strengths of both approaches. Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge-bases to guide the learning progress of deep neural networks. We further ground our discussion in two applications of neuro-symbolism and, in both cases, show that our systems maintain interpretability while achieving comparable performance, relative to the state-of-the-art. | {
"paragraphs": [
[
"Context understanding is a natural property of human cognition, that supports our decision-making capabilities in complex sensory environments. Humans are capable of fusing information from a variety of modalities|e.g., auditory, visual|in order to perform different tasks, ranging from the operation of a motor vehicle to the generation of logical inferences based on commonsense. Allen Newell and Herbert Simon described this sense-making capability in their theory of cognition BIBREF0, BIBREF1: through sensory stimuli, humans accumulate experiences, generalize, and reason over them, storing the resulting knowledge in memory; the dynamic combination of live experience and distilled knowledge during task-execution, enables humans to make time-effective decisions and evaluate how good or bad a decision was by factoring in external feedback.",
"Endowing machines with this sense-making capability has been one of the long-standing goals of Artificial Intelligence (AI) practice and research, both in industry and academia. Data-driven and knowledge-driven methods are two classical techniques in the pursuit of such machine sense-making capability. Sense-making is not only a key for improving machine autonomy, but is a precondition for enabling seamless interaction with humans. Humans communicate effectively with each other, thanks to their shared mental models of the physical world and social context BIBREF2. These models foster reciprocal trust by making contextual knowledge transparent; they are also crucial for explaining how decision-making unfolds. In a similar fashion, we can assert that `explainable AI' is a byproduct or an affordance of computational context understanding and is predicated on the extent to which humans can introspect the decision processes that enable machine sense-making BIBREF3."
],
[
"From the definitions of `explainable AI' and `context understanding,' in the previous section, we can derive the following corollary:",
"The explainability of AI algorithms is related to how context is processed, computationally, based on the machine's perceptual capabilities and on the external knowledge resources that are available.",
"Along this direction, the remainder of this chapter explores two concrete scenarios of context understanding, realized by neuro-symbolic architectures|i.e., hybrid AI frameworks that instruct machine perception (based on deep neural networks) with knowledge graphs. These examples were chosen to illustrate the general applicability of neuro-symbolism and its relevance to contemporary research problems.",
"Specifically, section SECREF3 considers context understanding for autonomous vehicles: we describe how a knowledge graph can be built from a dataset of urban driving situations and how this knowledge graph can be translated into a continuous vector-space representation. This embedding space can be used to estimate the semantic similarity of visual scenes by using neural networks as powerful, non-linear function approximators. Here, models may be trained to make danger assessments of the visual scene and, if necessary, transfer control to the human in complex scenarios. The ability to make this assessment is an important capability for autonomous vehicles, when we consider the negative ramifications for a machine to remain invariant to changing weather conditions, anomalous behavior of dynamic obstacles on the road (e.g., other vehicles, pedestrians), varied lighting conditions, and other challenging circumstances. We suggest neuro-symbolic fusion as one solution and, indeed, our results show that our embedding space preserves the semantic properties of the conceptual elements that make up visual scenes.",
"In section SECREF17, we describe context understanding for language tasks. Here, models are supplied with three separate modalities: external commonsense knowledge, unstructured textual context, and a series of answer candidates. In this task, models are tested on their ability to fuse together these disparate sources of information for making the appropriate logical inferences. We designed methods to extract adequate semantic structures (i.e., triples) from two comprehensive commonsense knowledge graphs, ConceptNet BIBREF6 and Atomic BIBREF7, and to inject this external context into language models. In general, open-domain linguistic context is useful for different tasks in Natural Language Processing (NLP), including: information-extraction, text-classification, extractive and abstractive summarization, and question-answering (QA). For ease of quantitative evaluation, we consider a QA task in section SECREF17. In particular, the task is to select the correct answer from a pool of candidates, given a question that specifically requires commonsense to resolve. For example, the question, If electrical equipment won't power on, what connection should be checked? is associated with `company', `airport', `telephone network', `wires', and `freeway'(where `wires' is the correct answer choice). We demonstrate that our proposed hybrid architecture out-performs the state-of-the-art neural approaches that do not utilize structured commonsense knowledge bases. Furthermore, we discuss how our approach maintains explainability in the model's decision-making process: the model has the joint task of learning an attention distribution over the commonsense knowledge context which, in turn, depends on the knowledge triples that were conceptually most salient for selecting the correct answer candidate, downstream. Fundamentally, the goal of this project is to make human interaction with chatbots and personal assistants more robust. For this to happen, it is crucial to equip intelligent agents with a shared understanding of general contexts, i.e., commonsense. Conventionally, machine commonsense had been computationally articulated using symbolic languages|Cyc being one of the most prominent outcomes of this approach BIBREF8. However, symbolic commonsense representations are neither scalable nor comprehensive, as they depend heavily on the knowledge engineering experts that encode them. In this regard, the advent of deep learning and, in particular, the possibility of fusing symbolic knowledge into sub-symbolic (neural) layers, has recently led to a revival of this AI research topic."
],
[
"Recently, there has been a significant increase in the investment for autonomous driving (AD) research and development, with the goal of achieving full autonomy in the next few years. Realizing this vision requires robust ML/AI algorithms that are trained on massive amounts of data. Thousands of cars, equipped with various types of sensors (e.g., LIDAR, RGB, RADAR), are now deployed around the world to collect this heterogeneous data from real-world driving scenes. The primary objective for AD is to use these data to optimize the vehicle's perception pipeline on such tasks as: 3D object detection, obstacle tracking, object trajectory forecasting, and learning an ideal driving policy. Fundamental to all of these tasks will be the vehicle's context understanding capability, which requires knowledge of the time, location, detected objects, participating events, weather, and various other aspects of a driving scene. Even though state-of-the-art AI technologies are used for this purpose, their current effectiveness and scalability are insufficient to achieve full autonomy. Humans naturally exhibit context understanding behind the wheel, where the decisions we make are the result of a continuous evaluation of perceptual cues combined with background knowledge. For instance, human drivers generally know which area of a neighborhood might have icy road conditions on a frigid winter day, where flooding is more frequent after a heavy rainfall, which streets are more likely to have kids playing after school, and which intersections have poor lighting. Currently, this type of common knowledge is not being used to assist self-driving cars and, due to the sample-inefficiency of current ML/AI algorithms, vehicle models cannot effectively learn these phenomena through statistical observation alone. On March 18, 2018, Elaine Herzberg’s death was reported as the first fatality incurred from a collision with an autonomous vehicle. An investigation into the collision, conducted by The National Transportation Safety Board (NTSB), remarks on the shortcomings of current AD and context understanding technologies. Specifically, NTSB found that the autonomous vehicle incorrectly classified Herzberg as an unknown object, a vehicle, and then a bicycle within the complex scene as she walked across the road. Further investigation revealed that the system design did not include consideration for pedestrians walking outside of a crosswalk, or jaywalking BIBREF9. Simply put, the current AD technology lacks fundamental understanding of the characteristics of objects and events within common scenes; this suggests that more research is required in order to achieve the vision of autonomous driving.",
"Knowledge Graphs (KGs) have been successfully used to manage heterogeneous data within various domains. They are able to integrate and structure data and metadata from multiple modalities into a unified semantic representation, encoded as a graph. More recently, KGs are being translated into latent vector space representations, known as Knowledge Graph Embeddings (KGEs), that have been shown to improve the performance of machine learning models when applied to certain downstream tasks, such as classification BIBREF10, BIBREF11. Given a KG as a set of triples, KGE algorithms learn to create a latent representation of the KG entities and relations as continuous KGE vectors. This encoding allows KGEs to be easily manipulated and integrated with machine learning algorithms. Motivated by the shortcomings of current context understanding technologies, along with the promising outcomes of KGEs, our research focuses on the generation and evaluation of KGEs on AD data. Before directly applying KGEs on critical AD applications, however, we evaluate the intrinsic quality of KGEs across multiple metrics and KGE algorithms BIBREF12. Additionally, we present an early investigation of using KGEs for a selected use-case from the AD domain."
],
[
"Dataset. To promote and enable further research on autonomous driving, several benchmark datasets have been made publicly available by companies in this domain BIBREF13. NuScenes is a benchmark dataset of multimodal vehicular data, recently released by Aptiv BIBREF14 and used for our experiments. NuScenes consists of a collection of 20-second driving scenes, with $\\sim $40 sub-scenes sampled per driving scene (i.e., one every 0.5 seconds). In total, NuScenes includes 850 driving scenes and 34,149 sub-scenes. Each sub-scene is annotated with detected objects and events, each defined within a taxonomy of 23 object/event categories.",
"Scene Ontology. In autonomous driving, a scene is defined as an observable volume of time and space BIBREF15. On the road, a vehicle may encounter many different situations|such as merging onto a divided highway, stopping at a traffic light, and overtaking another vehicle|all of which are considered as common driving scenes. A scene encapsulates all relevant information about a particular situation, including data from vehicular sensors, objects, events, time and location. A scene can also be divided into a sequence of sub-scenes. As an example, a 20-second drive consisting primarily of the vehicle merging into a highway could be considered as a scene. In addition, all the different situations the vehicle encounters within these 20 seconds can also be represented as (sub-)scenes. In this case, a scene may be associated with a time interval and spatial region while a sub-scene may be associated with a specific timestamp and a set of spatial coordinates. This semantic representation of a scene is formally defined in the Scene Ontology (see figure FIGREF8(a), depicted in Protege). To enable the generation of a KG from the data within NuScenes, the Scene Ontology is extended to include all the concepts (i.e., objects and event categories) found in the NuScenes dataset.",
"Generating Knowledge Graphs. The Scene Ontology identifies events and features-of-interests (FoIs) as top-level concepts. An event or a FoI may be associated with a Scene via the includes relation. FoIs are associated with events through the isParticipantOf relation. Figure FIGREF8(b) shows a subset of the FoIs and events defined by the Scene Ontology. In generating the scenes' KG, each scene and sub-scene found in NuScenes is annotated using the Scene Ontology. Table TABREF9 shows some basic statistics of the generated KG."
],
[
"KGE Algorithms. KGE algorithms enable the ability to easily feed knowledge into ML algorithms and improve the performance of learning tasks, by translating the knowledge contained in knowledge graphs into latent vector space representation of KGEs BIBREF16. To select candidate KGE algorithms for our evaluation, we referred to the classification of KGE algorithms provided by Wang et al. BIBREF17. In this work, KGE algorithms are classified into two primary categories: (1) Transitional distance-based algorithms and (2) Semantic matching-based models. Transitional distance-based algorithms define the scoring function of the model as a distance-based measure, while semantic matching-based algorithms define it as a similarity measure. Here, entity and relation vectors interact via addition and subtraction in the case of Transitional distance-based models; in semantic matching-based models, the interaction between entity and relation vectors is captured by multiplicative score functions BIBREF18.",
"Initially, for our study we had selected one algorithm from each class: TransE BIBREF19 to represent the transitional distance-based algorithms and RESCAL BIBREF20 to represent the semantic matching-based algorithms. However, after experimentation, RESCAL did not scale well for handling large KGs in our experiments. Therefore, we also included HolE BIBREF21|an efficient successor of RESCAL|in the evaluation. A brief summary of each algorithm is provided for each model, below:",
"TransE: the TransE model is often considered to be the most-representative of the class of transitional distance-based algorithms BIBREF17. Given a triple (h, r, t) from the KG, TransE encodes h, r and t as vectors, with r represented as a transition vector from h to t: $\\mathbf {h} + \\mathbf {r} \\approx \\mathbf {t}$. Since both entities and relations are represented as vectors, TransE is one of the most efficient KGE algorithms, with $\\mathcal {O}(n d + m d)$ space complexity and $\\mathcal {O}(n_t d)$ time complexity ($n_t$ is the number of training triples).",
"RESCAL: RESCAL is capable of generating an expressive knowledge graph embedding space, due to its ability to capture complex patterns over multiple hops in the KG. RESCAL encodes relations as matrices and captures the interaction between entities and relations using a bi-linear scoring function. Though the use of a matrix to encode each relation yields improved expressivity, it also limits RESCAL’s ability to scale with large KGs. It has $\\mathcal {O}(n d + m d^2)$ space complexity and $\\mathcal {O}(n_t d^2)$ time complexity.",
"HolE: HoLE is a more efficient successor of RESCAL, addressing its space and time complexity issues, by encoding relations as vectors without sacrificing the expressivity of the model. By using circular correlation operation BIBREF21, it captures the pairwise interaction of entities as composable vectors. This optimization yields $\\mathcal {O} (n d + m d)$ space complexity and $\\mathcal {O}(n_t d \\log d)$ time complexity.",
"Visualizing KGEs. In order to visualize the generated KGE, a “mini\" KG from the NuScenes-mini dataset was created. Specifically, 10 scenes were selected (along with their sub-scenes) to generate the KG, and the TransE algorithm was used to learn the embeddings. When training the KGEs, we chose the dimension of the vectors to be 100. To visualize the embeddings in 2-dimensional (2D) space, the dimensions are reduced using the t-Distributed Stochastic Neighbor Embedding (t-SNE) BIBREF22 projection. Figure FIGREF11(a) shows the resulting embeddings of the NuScenes dataset. To denote interesting patterns that manifest in the embeddings, instances of Car (a FoI) and the events in which they participate are highlighted. In this image, events such as parked car, moving car, and stopped car are clustered around entities of type Car. This shows that the isParticipantOf relations defined in the KG are maintained within the KG embeddings.",
""
],
[
"Here, we deviate slightly from the prior work in evaluating KGE algorithms, which evaluate KGEs based downstream task performance. Instead, we focus on an evaluation that uses only metrics that quantify the intrinsic quality of KGEs BIBREF23: categorization measure, coherence measure, and semantic transition distance. Categorization measures how well instances of the same type cluster together. To quantify this quality, all vectors of the same type are averaged together and the cosine similarity is computed between the averaged vector and the typed class. The Coherence measure quantifies the proportion of neighboring entities that are of the same type; the evaluation framework proposes that, if a set of entities are typed by the class, those entities should form a cluster in the embedding space with the typed class as the centroid. Adapted from the word embedding literature, Semantic Transitional Distance captures the relational semantics of the KGE: if a triple $(h,r,t)$ is correctly represented in the embedding space, the transition distance between the vectors representing $(\\mathbf {h+r})$ should be close to $\\mathbf {t}$. This is quantified by computing the cosine similarity between $(\\mathbf {h+r})$ and $\\mathbf {t}$.",
"Results. Evaluation results are reported with respect to each algorithm and metric. Figure FIGREF13 shows the evaluation results of categorization measure, coherence measure, and semantic transitional distance|for each KGE algorithm. The NuScenes KG, generated from the NuScenes-trainval dataset, is large in terms of both the number of triples and number of entities (see Table TABREF9). Hence, RESCAL did not scale well to this dataset. For this reason, we only report the evaluation results for TransE and HolE. When considering the KGE algorithms, TransE's performance is consistently better across metrics, compared to HolE's performance. However, it is interesting to note that HolE significantly outperforms TransE for some classes/relations. When considering the evaluation metrics, it is evident that the categorization measure and semantic transitional distance are able to capture the quality of type semantics and relational semantics, respectively. The value of the coherence measure, however, is zero for HoLE in most cases and close to zero for TransE in some cases. In our experimental setting, the poor performance with respect to the coherence measure may suggest that it may not be a good metric for evaluating KGEs in the AD domain."
],
[
"We report preliminary results from our investigation into using KGEs for a use-case in the AD domain. More specifically, we apply KGEs for computing scene similarity. In this case, the goal is to find (sub-)scenes that are characteristically similar, using the learned KGEs. Given a set of scene pairs, we choose the pair with the highest cosine similarity as the most similar. Figure FIGREF15 shows an illustration of the two most similar sub-scenes, when the list of pairs include sub-scenes from different scenes. An interesting observation is that the black string of objects in sub-scene (a) are Barriers (a Static Object), and the orange string of objects in sub-scene (b) are Stopped Cars. This example suggests that the KGE-based approach could identify sub-scenes that share similar characteristics even though the sub-scenes are visually dissimilar."
],
[
"We presented an investigation of using KGEs for AD context understanding, along with an evaluation of the intrinsic quality of KGEs. The evaluation suggests that KGEs are specifically able to capture the semantic properties of a scene knowledge graph (e.g., isParticipantOf relation between objects and events). More generally, KGE algorithms are capable of translating semantic knowledge, such as type and relational semantics to KGEs. When considering the different KGE algorithms, we report that the transitional distance-based algorithm, TransE, shows consistent performance across multiple quantitative KGE-quality metrics. Our evaluation further suggests that some quality metrics currently in use, such as the coherence measure, may not be effective in measuring the quality of the type semantics from KGEs, in the AD domain. Finally, in applying the learned KGEs to a use-case of importance in the AD domain, we shed some light on the effectiveness of leveraging KGEs in capturing AD scene similarity."
],
[
"Recently, many efforts have been made towards building challenging question-answering (QA) datasets that, by design, require models to synthesize external commonsense knowledge and leverage more sophisticated reasoning mechanisms BIBREF24, BIBREF25, BIBREF26, BIBREF27, BIBREF28. Two directions of work that try to solve these tasks are: purely data-oriented and purely knowledge-oriented approaches. The data-oriented approaches generally propose to pre-train language models on large linguistic corpora, such that the model would implicitly acquire “commonsense” through its statistical observations. Indeed, large pre-trained language models have achieved promising performance on many commonsense reasoning benchmarks BIBREF29, BIBREF30, BIBREF31, BIBREF32. The main downsides of this approach are that models are difficult to interpret and that they lack mechanisms for incorporating explicit commonsense knowledge. Conversely, purely knowledge-oriented approaches combine structured knowledge bases and perform symbolic reasoning, on the basis of axiomatic principles. Such models enjoy the property of interpretability, but often lack the ability to estimate the statistical salience of an inference, based on real-world observations. Hybrid models are those that attempt to fuse these two approaches, by extracting knowledge from structured knowledge bases and using the resulting information to guide the learning paradigm of statistical estimators, such as deep neural network models.",
"Different ways of injecting knowledge into models have been introduced, such as attention-based gating mechanisms BIBREF33, key-value memory mechanisms BIBREF34, BIBREF35, extrinsic scoring functions BIBREF36, and graph convolution networks BIBREF37, BIBREF38. Our approach is to combine the powerful pre-trained language models with structured knowledge, and we extend previous approaches by taking a more fine-grained view of commonsense. The subtle differences across the various knowledge types have been discussed at length in AI by philosophers, computational linguists, and cognitive psychologists BIBREF39. At the high level, we can identify declarative commonsense, whose scope encompasses factual knowledge, e.g., `the sky is blue' and `Paris is in France'; taxonomic knowledge, e.g., `football players are athletes' and `cats are mammals'; relational knowledge, e.g., `the nose is part of the skull' and `handwriting requires a hand and a writing instrument'; procedural commonsense, which includes prescriptive knowledge, e.g., `one needs an oven before baking cakes' and `the electricity should be off while the switch is being repaired' BIBREF40; sentiment knowledge, e.g., `rushing to the hospital makes people worried' and `being in vacation makes people relaxed'; and metaphorical knowledge which includes idiomatic structures, e.g., `time flies' and `raining cats and dogs'. We believe that it is important to identify the most appropriate commonsense knowledge type required for specific tasks, in order to get better downstream performance. Once the knowledge type is identified, we can then select the appropriate knowledge base(s), the corresponding knowledge-extraction pipeline, and the suitable neural injection mechanisms.",
"In this work, we conduct a comparison study of different knowledge bases and knowledge-injection methods, on top of pre-trained neural language models; we evaluate model performance on a multiple-choice QA dataset, which explicitly requires commonsense reasoning. In particular, we used ConceptNet BIBREF6 and the recently-introduced ATOMIC BIBREF7 as our external knowledge resources, incorporating them in the neural computation pipeline using the Option Comparison Network (OCN) model mechanism BIBREF41. We evaluate our models on the CommonsenseQA BIBREF42 dataset; an example question from the CommonsenseQA task is shown in Table TABREF20. Our experimental results and analysis suggest that attention-based injection is preferable for knowledge-injection and that the degree of domain overlap, between knowledge-base and dataset, is vital to model success."
],
[
"CommonsenseQA is a multiple-choice QA dataset that specifically measure commonsense reasoning BIBREF42. This dataset is constructed based on ConceptNet (see section SECREF23 for more information about this knowledge base). Specifically, a source concept is first extracted from ConceptNet, along with 3 target concepts that are connected to the source concept, i.e., a sub-graph. Crowd-workers are then asked to generate questions, using the source concept, such that only one of the target concepts can correctly answer the question. Additionally, 2 more “distractor” concepts are selected by crowd-workers, so that each question is associated with 5 answer-options. In total, the dataset contains 12,247 questions. For CommonsenseQA, we evaluate models on the development-set only, since test-set answers are not publicly available."
],
[
"The first knowledge-base we consider for our experiments is ConceptNet BIBREF6. ConceptNet contains over 21 million edges and 8 million nodes (1.5 million nodes in the partition for the English vocabulary), from which one may generate triples of the form $(C1, r, C2)$, wherein the natural-language concepts $C1$ and $C2$ are associated by commonsense relation $r$, e.g., (dinner, AtLocation, restaurant). Thanks to its coverage, ConceptNet is one of the most popular semantic networks for commonsense. ATOMIC BIBREF7 is a knowledge-base that focuses on procedural knowledge. Triples are of the form (Event, r, {Effect$|$Persona$|$Mental-state}), where head and tail are short sentences or verb phrases and $r$ represents an if-then relation type: (X compliments Y, xIntent, X wants to be nice). Since the CommonsenseQA dataset is open-domain and requires general commonsense, we think these knowledge-bases are most appropriate for our investigation."
],
[
"The model class we select is that of the Bidirectional Encoder Representations with Transformer (BERT) model BIBREF29, as it has been applied to numerous QA tasks and has achieved very promising performance, particularly on the CommonsenseQA dataset. When utilizing BERT on multiple-choice QA tasks, the standard approach is to concatenate the question with each answer-option, in order to generate a list of tokens which is then fed into BERT encoder; a linear layer is added on top, in order to predict the answer. One aspect of this strategy is that each answer-option is encoded independently, which limits the model's ability to find correlations between answer-options and with respect to the original question context. To address this issue, the Option Comparison Network (OCN) BIBREF41 was introduced to explicitly model the pairwise answer-option interactions, making OCN better-suited for multiple-choice QA task structures. The OCN model uses BERT as its base encoder: the question/option encoding is produced by BERT and further processed in a Option Comparison Cell, before being fed into linear layer. The Option Comparison Cell is illustrated in the bottom right of figure FIGREF21. We re-implemented OCN while keeping BERT as its upstream encoder (we refer an interested reader to BIBREF41, BIBREF43 for more details)."
],
[
"ConceptNet. We identify ConceptNet relations that connect questions to the answer-options. The intuition is that these relation paths would provide explicit evidence that would help the model find the answer. Formally, given a question $Q$ and an answer-option $O$, we find all ConceptNet relations (C1, r, C2), such that $C1 \\in Q$ and $C2 \\in O$, or vice versa. This rule works well for single-word concepts. However, a large number of concepts in ConceptNet are actually phrases, where finding exactly matching phrases in $Q/O$ is more challenging. To fully utilize phrase-based relations, we relaxed the exact-match constraint to the following:",
"Here, the sequence $S$ represents $Q$ or $O$, depending on which sequence we try to match the concept $C$ to. Additionally, when the part-of-speech (POS) tag for a concept is available, we make sure it matches the POS tag of the corresponding word in $Q/O$. Table TABREF27 shows the extracted ConceptNet triples for the CommonsenseQA example in Table TABREF20. It is worth noting that we are able to extract the original ConceptNet sub-graph that was used to create the question, along with some extra triples. Although not perfect, the bold ConceptNet triple provides clues that could help the model resolve the correct answer.",
"ATOMIC. We observe that many questions in the CommonsenseQA task ask about which event is likely to occur, given a condition. Superficially, this particular question type seems well-suited for ATOMIC, whose focus is on procedural knowledge. Thus, we could frame our goal as evaluating whether ATOMIC can provide relevant knowledge to help answer these questions. However, one challenge of extracting knowledge from this resource is that heads and tails of knowledge triples in ATOMIC are short sentences or verb phrases, while rare words and person-references are reduced to blanks and PersonX/PersonY, respectively."
],
[
"Given previously-extracted knowledge triples, we need to integrate them with the OCN component of our model. Inspired by BIBREF33, we propose to use attention-based injection. For ConceptNet knowledge triples, we first convert concept-relation entities into tokens from our lexicon, in order to generate a pseudo-sentence. For example, “(book, AtLocation, library)” would be converted to “book at location library.” Next, we used the knowledge injection cell to fuse the commonsense knowledge into BERT's output, before feeding the fused output into the OCN cell. Specifically, in a knowledge-injection cell, a Bi-LSTM layer is used to encode these pseudo-sentences, before computing the attention with respect to BERT output, as illustrated in bottom left of figure FIGREF21."
],
[
"Pre-training large-capacity models (e.g., BERT, GPT BIBREF30, XLNet BIBREF31) on large corpora, then fine-tuning on more domain-specific information, has led to performance improvements on various tasks. Inspired by this, our goal in this section is to observe the effect of pre-training BERT on commonsense knowledge and refining the model on task-specific content from the CommonsenseQA dataset. Essentially, we would like to test if pre-training on our external knowledge resources can help the model acquire commonsense. For the ConceptNet pre-training procedure, pre-training BERT on pseudo-sentences formulated from ConceptNet knowledge triples does not provide much gain on performance. Instead, we trained BERT on the Open Mind Common Sense (OMCS) corpus BIBREF44, the originating corpus that was used to create ConceptNet. We extracted about 930K English sentences from OMCS and randomly masked out 15% of the tokens; we then fine-tuned BERT, using a masked language model objective, where the model's objective is to predict the masked tokens, as a probability distribution over the entire lexicon. Finally, we load this fine-tuned model into OCN framework proceed with the downstream CommonsenseQA task. As for pre-training on ATOMIC, we follow previous work's pre-processing steps to convert ATOMIC knowledge triples into sentences BIBREF45; we created special tokens for 9 types of relations as well as blanks. Next, we randomly masked out 15% of the tokens, only masking out tail-tokens; we used the same OMCS pre-training procedure."
],
[
"For all of our experiments, we run 3 trials with different random seeds and we report average scores tables TABREF30 and TABREF32. Evaluated on CommonsenseQA, ConceptNet knowledge-injection provides a significant performance boost (+2.8%), compared to the OCN baseline, suggesting that explicit links from question to answer-options help the model find the correct answer. Pre-training on OMCS also provides a small performance boost to the OCN baseline. Since both ConceptNet knowledge-injection and OMCS pre-training are helpful, we combine both approaches with OCN and we are able to achieve further improvement (+4.9%). Finally, to our surprise, OCN pre-trained on ATOMIC yields a significantly lower performance."
],
[
"To better understand when a model performs better or worse with knowledge-injection, we analyzed model predictions by question type. Since all questions in CommonsenseQA require commonsense reasoning, we classify questions based on the ConceptNet relation between the question concept and correct answer concept. The intuition is that the model needs to capture this relation in order to answer the question. The accuracies for each question type are shown in Table TABREF32. Note that the number of samples by question type is very imbalanced. Thus due to the limited space, we omitted the long tail of the distribution (about 7% of all samples). We can see that with ConceptNet relation-injection, all question types got performance boosts|for both the OCN model and OCN model that was pre-trained on OMCS|suggesting that external knowledge is indeed helpful for the task. In the case of OCN pre-trained on ATOMIC, although the overall performance is much lower than the OCN baseline, it is interesting to see that performance for the “Causes” type is not significantly affected. Moreover, performance for “CausesDesire” and “Desires” types actually got much better. As noted by BIBREF7, the “Causes” relation in ConceptNet is similar to “Effects” and “Reactions” in ATOMIC; and “CausesDesire” in ConceptNet is similar to “Wants” in ATOMIC. This result suggests that models with knowledge pre-training perform better on questions that fit the knowledge domain, but perform worse on others. In this case, pre-training on ATOMIC helps the model do better on questions that are similar to ATOMIC relations, even though overall performance is inferior. Finally, we noticed that questions of type “Antonym” appear to be the hardest ones. Many questions that fall into this category contain negations, and we hypothesize that the models still lack the ability to reason over negation sentences, suggesting another direction for future improvement."
],
[
"Based on our experimental results and error analysis, we see that external knowledge is only helpful when there is alignment between questions and knowledge-base types. Thus, it is crucial to identify the question type and apply the best-suited knowledge. In terms of knowledge-injection methods, attention-based injection seems to be the better choice for pre-trained language models such as BERT. Even when alignment between knowledge-base and dataset is sub-optimal, the performance would not degrade. On the other hand, pre-training on knowledge-bases would shift the language model's weight distribution toward its own domain, greatly. If the task domain does not fit knowledge-base well, model performance is likely to drop. When the domain of the knowledge-base aligns with that of the dataset perfectly, both knowledge-injection methods bring performance boosts and a combination of them could bring further gain.",
"We have presented a survey on two popular knowledge bases (ConceptNet and ATOMIC) and recent knowledge-injection methods (attention and pre-training), on the CommonsenseQA task. We believe it is worth conducting a more comprehensive study of datasets and knowledge-bases and putting more effort towards defining an auxiliary neural learning objectives, in a multi-task learning framework, that classifies the type of knowledge required, based on data characteristics. In parallel, we are also interested in building a global commonsense knowledge base by aggregating ConceptNet, ATOMIC, and potentially other resources like FrameNet BIBREF46 and MetaNet BIBREF47, on the basis of a shared-reference ontology (following the approaches described in BIBREF48 and BIBREF49): the goal would be to assess whether injecting knowledge structures from a semantically-cohesive lexical knowledge base of commonsense would guarantee stable model accuracy across datasets."
],
[
"We illustrated two projects on computational context understanding through neuro-symbolism. The first project (section SECREF3) concerned the use of knowledge graphs to learning an embedding space for characterising visual scenes, in the context of autonomous driving. The second application (section SECREF17) focused on the extraction and integration of knowledge, encoded in commonsense knowledge bases, for guiding the learning process of neural language models in question-answering tasks. Although diverse in scope and breadth, both projects adopt a hybrid approach to building AI systems, where deep neural networks are enhanced with knowledge graphs. For instance, in the first project we demonstrated that scenes that are visually different can be discovered as sharing similar semantic characteristics by using knowledge graph embeddings; in the second project we showed that a language model is more accurate when it includes specialized modules to evaluate questions and candidate answers on the basis of a common knowledge graph. In both cases, explainability emerges as a property of the mechanisms that we implemented, through this combination of data-driven algorithms with the relevant knowledge resources.",
"We began the chapter by alluding to the way in which humans leverage a complex array of cognitive processes, in order to understand the environment; we further stated that one of the greatest challenges in AI research is learning how to endow machines with similar sense-making capabilities. In these final remarks, it is important to emphasize again (see footnote #3) that the capability we describe here need only follow from satisfying the functional requirements of context understanding, rather than concerning ourselves with how those requirements are specifically implemented in humans versus machines. In other words, our hybrid AI approach stems from the complementary nature of perception and knowledge, but does not commit to the notion of replicating human cognition in the machine: as knowledge graphs can only capture a stripped-down representation of what we know, deep neural networks can only approximate how we perceive the world and learn from it. Certainly, human knowledge (encoded in machine-consumable format) abounds in the digital world, and our work shows that these knowledge bases can be used to instruct ML models and, ultimately, enhance AI systems."
]
],
"section_name": [
"Explainability through Context Understanding",
"Context Understanding through Neuro-symbolism",
"Applications of Neuro-symbolism ::: Application I: Learning a Knowledge Graph Embedding Space for Context Understanding in Automotive Driving Scenes ::: Introduction",
"Applications of Neuro-symbolism ::: Application I: Learning a Knowledge Graph Embedding Space for Context Understanding in Automotive Driving Scenes ::: Scene Knowledge Graphs",
"Applications of Neuro-symbolism ::: Application I: Learning a Knowledge Graph Embedding Space for Context Understanding in Automotive Driving Scenes ::: Knowledge Graph Embeddings",
"Applications of Neuro-symbolism ::: Application I: Learning a Knowledge Graph Embedding Space for Context Understanding in Automotive Driving Scenes ::: Intrinsic Evaluation",
"Applications of Neuro-symbolism ::: Application I: Learning a Knowledge Graph Embedding Space for Context Understanding in Automotive Driving Scenes ::: A use-case from the AD domain",
"Applications of Neuro-symbolism ::: Application I: Learning a Knowledge Graph Embedding Space for Context Understanding in Automotive Driving Scenes ::: Discussion",
"Applications of Neuro-symbolism ::: Application II: Neural Question-Answering using Commonsense Knowledge Bases ::: Introduction",
"Applications of Neuro-symbolism ::: Application II: Neural Question-Answering using Commonsense Knowledge Bases ::: Dataset",
"Applications of Neuro-symbolism ::: Application II: Neural Question-Answering using Commonsense Knowledge Bases ::: Knowledge bases",
"Applications of Neuro-symbolism ::: Application II: Neural Question-Answering using Commonsense Knowledge Bases ::: Model architecture",
"Applications of Neuro-symbolism ::: Application II: Neural Question-Answering using Commonsense Knowledge Bases ::: Knowledge elicitation",
"Applications of Neuro-symbolism ::: Application II: Neural Question-Answering using Commonsense Knowledge Bases ::: Knowledge injection",
"Applications of Neuro-symbolism ::: Application II: Neural Question-Answering using Commonsense Knowledge Bases ::: Knowledge pre-training",
"Applications of Neuro-symbolism ::: Application II: Neural Question-Answering using Commonsense Knowledge Bases ::: Results",
"Applications of Neuro-symbolism ::: Application II: Neural Question-Answering using Commonsense Knowledge Bases ::: Error Analysis",
"Applications of Neuro-symbolism ::: Application II: Neural Question-Answering using Commonsense Knowledge Bases ::: Discussion",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"43aa9c9aec71e4345b87b4c2827f95b401ceeb1a",
"7bb9f8680341f126321baa89a9994b6e2424f617"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"Applications of Neuro-symbolism ::: Application II: Neural Question-Answering using Commonsense Knowledge Bases ::: Knowledge elicitation",
"ConceptNet. We identify ConceptNet relations that connect questions to the answer-options. The intuition is that these relation paths would provide explicit evidence that would help the model find the answer. Formally, given a question $Q$ and an answer-option $O$, we find all ConceptNet relations (C1, r, C2), such that $C1 \\in Q$ and $C2 \\in O$, or vice versa. This rule works well for single-word concepts. However, a large number of concepts in ConceptNet are actually phrases, where finding exactly matching phrases in $Q/O$ is more challenging. To fully utilize phrase-based relations, we relaxed the exact-match constraint to the following:"
],
"extractive_spans": [],
"free_form_answer": "They find relations that connect questions to the answer-options.",
"highlighted_evidence": [
"Applications of Neuro-symbolism ::: Application II: Neural Question-Answering using Commonsense Knowledge Bases ::: Knowledge elicitation\nConceptNet. We identify ConceptNet relations that connect questions to the answer-options. The intuition is that these relation paths would provide explicit evidence that would help the model find the answer. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"4811adadc3da0f5bf4d794cc165e0ac19eb561d9",
"7d7c2db1affc0629a508f9bebc6fc8c1170bcea2"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 4. Results on CommonsenseQA; the asterisk (*) denotes results taken from leaderboard."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: Table 4. Results on CommonsenseQA; the asterisk (*) denotes results taken from leaderboard."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"17563eb8b02c813988842b37fae49ae0aa1aaede",
"7074d42cd1bb87c97f4d7099a1fab71b4ff80f33"
],
"answer": [
{
"evidence": [
"Applications of Neuro-symbolism ::: Application I: Learning a Knowledge Graph Embedding Space for Context Understanding in Automotive Driving Scenes ::: Introduction",
"Applications of Neuro-symbolism ::: Application II: Neural Question-Answering using Commonsense Knowledge Bases ::: Introduction"
],
"extractive_spans": [
"Learning a Knowledge Graph Embedding Space for Context Understanding in Automotive Driving Scenes",
"Neural Question-Answering using Commonsense Knowledge Bases"
],
"free_form_answer": "",
"highlighted_evidence": [
"Applications of Neuro-symbolism ::: Application I: Learning a Knowledge Graph Embedding Space for Context Understanding in Automotive Driving Scenes ::: Introduction",
"Applications of Neuro-symbolism ::: Application II: Neural Question-Answering using Commonsense Knowledge Bases ::: Introduction"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Applications of Neuro-symbolism ::: Application I: Learning a Knowledge Graph Embedding Space for Context Understanding in Automotive Driving Scenes ::: Introduction",
"Applications of Neuro-symbolism ::: Application II: Neural Question-Answering using Commonsense Knowledge Bases ::: Introduction"
],
"extractive_spans": [
"Application I: Learning a Knowledge Graph Embedding Space for Context Understanding in Automotive Driving Scenes",
"Application II: Neural Question-Answering using Commonsense Knowledge Bases"
],
"free_form_answer": "",
"highlighted_evidence": [
"Applications of Neuro-symbolism ::: Application I: Learning a Knowledge Graph Embedding Space for Context",
"Applications of Neuro-symbolism ::: Application II: Neural Question-Answering using Commonsense Knowledge Bases ::: Introduction"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"How do they interpret the model?",
"Do they compare their approach to data-driven only methods?",
"What are the two applications of neuro-symbolism?"
],
"question_id": [
"58259f2e22363aab20c448e5dd7b6f432556b32d",
"b9e0b1940805a5056f71c66d176cc87829e314d4",
"b54525a0057aa82b73773fa4dacfd115d8f86f1c"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1. Scene Ontology: (a) formal definition of a Scene, and (b) a subset of Features-of-Interests and events defined within a taxonomy.",
"Figure 2. 2D visualizations of KGEs of NuScenes instances generated from the TransE algorithm.",
"Figure 3. Evaluation results of the NuScenes dataset: (a) Categorization measure, (b) Coherence measure, and (c) Semantic transitional distance",
"Figure 4. Results of scene similarity: Most similar sub-scenes computed using KGEs trained on NuScenes KG",
"Table 2. An example from the CommonsenseQA dataset; the asterisk (*) denotes the correct answer.",
"Figure 5. Option Comparison Network with Knowledge Injection",
"Table 3. Extracted ConceptNet relations for sample shown in Table 2.",
"Table 4. Results on CommonsenseQA; the asterisk (*) denotes results taken from leaderboard.",
"Table 5. Accuracies for each CommonsenseQA question type: AtLoc. means AtLocation, Cau. means Causes, Cap. means CapableOf, Ant. means Antonym, H.Pre. means HasPrerequiste, H.Sub means HasSubevent, C.Des. means CausesDesire, and Des. means Desires. Numbers beside types denote the number of questions of that type."
],
"file": [
"5-Figure1-1.png",
"7-Figure2-1.png",
"8-Figure3-1.png",
"9-Figure4-1.png",
"10-Table2-1.png",
"11-Figure5-1.png",
"12-Table3-1.png",
"13-Table4-1.png",
"14-Table5-1.png"
]
} | [
"How do they interpret the model?"
] | [
[
"2003.04707-Applications of Neuro-symbolism ::: Application II: Neural Question-Answering using Commonsense Knowledge Bases ::: Knowledge elicitation-0"
]
] | [
"They find relations that connect questions to the answer-options."
] | 348 |
1605.05166 | Digital Stylometry: Linking Profiles Across Social Networks | There is an ever growing number of users with accounts on multiple social media and networking sites. Consequently, there is increasing interest in matching user accounts and profiles across different social networks in order to create aggregate profiles of users. In this paper, we present models for Digital Stylometry, which is a method for matching users through stylometry inspired techniques. We experimented with linguistic, temporal, and combined temporal-linguistic models for matching user accounts, using standard and novel techniques. Using publicly available data, our best model, a combined temporal-linguistic one, was able to correctly match the accounts of 31% of 5,612 distinct users across Twitter and Facebook. | {
"paragraphs": [
[
"Stylometry is defined as, \"the statistical analysis of variations in literary style between one writer or genre and another\". It is a centuries-old practice, dating back the early Renaissance. It is most often used to attribute authorship to disputed or anonymous documents. Stylometry techniques have also successfully been applied to other, non-linguistic fields, such as paintings and music. The main principles of stylometry were compiled and laid out by the philosopher Wincenty Lutosławski in 1890 in his work \"Principes de stylométrie\" BIBREF0 .",
"Today, there are millions of users with accounts and profiles on many different social media and networking sites. It is not uncommon for users to have multiple accounts on different social media and networking sites. With so many networking, emailing, and photo sharing sites on the Web, a user often accumulates an abundance of account profiles. There is an increasing focus from the academic and business worlds on aggregating user information across different sites, allowing for the development of more complete user profiles. There currently exist several businesses that focus on this task BIBREF1 , BIBREF2 , BIBREF3 . These businesses use the aggregate profiles for advertising, background checks or customer service related tasks. Moreover, profile matching across social networks, can assist the growing field of social media rumor detection BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , since many malicious rumors are spread on different social media platforms by the same people, using different accounts and usernames.",
"Motivated by traditional stylometry and the growing interest in matching user accounts across Internet services, we created models for Digital Stylometry, which fuses traditional stylometry techniques with big-data driven social informatics methods used commonly in analyzing social networks. Our models use linguistic and temporal activity patterns of users on different accounts to match accounts belonging to the same person. We evaluated our models on $11,224$ accounts belonging to $5,612$ distinct users on two of the largest social media networks, Twitter and Facebook. The only information that was used in our models were the time and the linguistic content of posts by the users. We intentionally did not use any other information, especially the potentially personally identifiable information that was explicitly provided by the user, such as the screen name, birthday or location. This is in accordance with traditional stylometry techniques, since people could misstate, omit, or lie about this information. Also, we wanted to show that there are implicit clues about the identities of users in the content (language) and context (time) of the users' interactions with social networks that can be used to link their accounts across different services.",
"Other than the obvious technical goal, the purpose of this paper is to shed light on the relative ease with which seemingly innocuous information can be used to track users across social networks, even when signing up on different services using completely different account and profile information (such as name and birthday). This paper is as much of a technical contribution, as it is a warning to users who increasingly share a large part of their private lives on these services.",
"The rest of this paper is structured as follows. In the next sections we will review related work on linking profiles, followed by a description of our data collection and annotation efforts. After that, we discuss the linguistic, temporal and combined temporal-linguistic models developed for linking user profiles. Finally, we discuss and summarize our findings and contributions and discuss possible paths for future work."
],
[
"There are several recent works that attempt to match profiles across different Internet services. Some of these works utilize private user data, while some, like ours, use publicly available data. An example of a work that uses private data is Balduzzi et al. BIBREF8 . They use data from the Friend Finder system (which includes some private data) provided by various social networks to link users across services. Though one can achieve a relatively high level of success by using private data to link user accounts, we are interested in using only publicly available data for this task. In fact, as mentioned earlier, we do not even consider publicly available information that could explicitly identify a user, such as names, birthdays and locations.",
"Several methods have been proposed for matching user profiles using public data BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 . These works differ from ours in two main aspects. First, in some of these works, the ground truth data is collected by assuming that all profiles that have the same screen name are from the same users BIBREF15 , BIBREF16 . This is not a valid assumption. In fact, it has been suggested that close to $20\\%$ of accounts with the same screen name in Twitter and Facebook are not matching BIBREF17 . Second, almost all of these works use features extracted from the user profiles BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 . Our work, on the other hand, is blind to the profile information and only utilizes users' activity patterns (linguistic and temporal) to match their accounts across different social networks. Using profile information to match accounts is contrary to the best practices of stylometry since it assumes and relies on the honesty, consistency and willingness of the users to explicitly share identifiable information about themselves (such as location)."
],
[
"For the purposes of this paper, we focused on matching accounts between two of the largest social networks: Twitter and Facebook. In order to proceed with our study, we needed a sizeable (few thousand) number of English speaking users with accounts on both Twitter and Facebook. We also needed to know the precise matching between the Twitter and Facebook accounts for our ground truth.",
"To that end, we crawled publicly available, English-language, Google Plus accounts using the Google Plus API and scraped links to the users' other social media profiles. (Note that one of the reasons why we used Twitter and Facebook is that they were two of the most common sites linked to on Google Plus). We used a third party social media site (i.e., Google Plus), one that was not used in our analysis to compile our ground truth in order to limit selection bias in our data collection.",
"We discarded all users who did not link to an account for both Twitter and Facebook and those whose accounts on either of these sites were not public. We then used the APIs of Twitter and Facebook to collect posts made by the users on these sites. We only collected the linguistic content and the date and time at the which the posts were made. For technical and privacy reasons, we did not collect any information from the profile of the users, such as the location, screen name, or birthday.",
"Our analysis focused on activity of users for one whole year, from February 1st, 2014 to February 1st, 2015. Since we can not reliably model the behaviour patterns of users with scarce data, users with less than 20 posts in that time period on either site were discarded. Overall, we collected a dataset of $5,612$ users with each having a Facebook and Twitter account for a total of $11,224$ accounts.",
"Figure 1 shows the distribution of the number of posts per user for Twitter and Facebook for our collected dataset. In the figure, the data for the number of posts has been divided into 500 bins. For the Twitter data, each bin corresponds to 80 tweets, while for the Facebook data, it corresponds to 10 posts. Table 1 shows some statistics about the data collected, including the average number of posts per user for each of the sites."
],
[
"We developed several linguistic, temporal and combined temporal-linguistic models for our task. These models take as input a user, $u$ , from one of the sites (i.e., Twitter or Facebook) and a list of $N$ users from the other service, where one of the $N$ users, $u\\prime $ , is the same as $u$ . The models then provide a ranking among candidate matches between $u$ and each of the $N$ users. We used two criteria to evaluate our models:",
"A baseline random choice ranker would have an accuracy of $1/N$ , and an average rank of $N/2$ (since $u\\prime $ may appear anywhere in the list of $N$ items)."
],
[
"A valuable source of information in matching user accounts, one used in traditional stylometry tasks, is the way in which people use language. A speaker or writer's choice of words depends on many factors, including the rules of grammar, message content and stylistic considerations. There is a great variety of possible ways to compare the language patterns of two people. However, first we need a method for modelling the language of a given user. Below we explain how this is done.",
"Most statistical language models do not attempt to explicitly model the complete language generation process, but rather seek a compact model that adequately explains the observed linguistic data. Probabilistic models of language assign probabilities to word sequences $w_1$ . . . $w_\\ell $ , and as such the likelihood of a corpus can be used to fit model parameters as well as characterize model performance.",
"N-gram language modelling BIBREF18 , BIBREF19 , BIBREF20 is an effective technique that treats words as samples drawn from a distribution conditioned on other words, usually the immediately preceding $n-1$ words, in order to capture strong local word dependencies. The probability of a sequence of $\\ell $ words, written compactly as $w_1^\\ell $ is $\\Pr (w_1^\\ell )$ and can be factored exactly as $\\Pr (w_1^\\ell ) = \\Pr (w_1) \\prod _{i=2}^\\ell \\Pr (w_i|w_1^{i-1})$ ",
"However, parameter estimation in this full model is intractable, as the number of possible word combinations grows exponentially with sequence length. N-gram models address this with the approximation $\\tilde{\\Pr }(w_i|w_{i-n+1}^{i-1}) \\approx \\Pr (w_i|w_1^{i-1})$ using only the preceding $n-1$ words for context. A bigram model ( $n=2$ ) uses the preceding word for context, while a unigram model ( $n=1$ ) does not use any context.",
"For this work, we used unigram models in Python, utilizing some components from NLTK BIBREF21 . Probability distributions were calculated using Witten-Bell smoothing BIBREF19 . Rather than assigning word $w_i$ the maximum likelihood probability estimate $p_i = \\frac{c_i}{N}$ , where $c_i$ is the number of observations of word $w_i$ and $N$ is the total number of observed tokens, Witten-Bell smoothing discounts the probability of observed words to $p_i^* = \\frac{c_i}{N+T}$ where $T$ is the total number of observed word types. The remaining $Z$ words in the vocabulary that are unobserved (i.e. where $c_i = 0$ ) are given by $p_i^* = \\frac{T}{Z(N+T)}$ .",
"We experimented with two methods for measuring the similarity between n-gram language models. In particular, we tried approaches based on KL-divergence and perplexity BIBREF22 . We also tried two methods that do not rely on n-gram models, cosine similarity of TF-IDF vectors BIBREF23 , as well as our own novel method, called the confusion model.",
"The performance of each method is shown in Table 2 . Note that all methods outperform the random baseline in both accuracy and average rank by a great margin. Below we explain each of these metrics.",
"The first metric used for measuring the distance between the language of two user accounts is the Kullback-Leibler (KL) divergence BIBREF22 between the unigram probability distribution of the corpus corresponding to the two accounts. The KL-divergence provides an asymmetric measure of dissimilarity between two probability distribution functions $p$ and $q$ and is given by: $KL(p||q) = \\int p(x)ln\\frac{p(x)}{q(x)}$ ",
"We can modify the equation to prove a symmetric distance between distributions: $KL_{2}(p||q) = KL(p||q)+KL(q||p)$ ",
"For this method, the similarity metric is the perplexity BIBREF22 of the unigram language model generated from one account, $p$ and evaluated on another account, $q$ . Perplexity is given as: $PP(p,q) = 2^{H(p,q)}$ ",
"where $H(p,q)$ is the cross-entropy BIBREF22 between distributions of the two accounts $p$ and $q$ . More similar models lead to smaller perplexity. As with KL-divergence, we can make perplexity symmetric: $PP_{2}(p,q) = PP(p,q)+PP(q,p)$ ",
"This method outperformed the KL-divergence method in terms of average rank but not accuracy (see Table 2 ).",
"Perhaps the relatively low accuracies of perplexity and KL-divergence measures should not be too surprising. These measures are most sensitive to the variations in frequencies of most common words. For instance, in its most straightforward implementation, the KL-divergence measure would be highly sensitive to the frequency of the word “the\". Although this problem might be mitigated by the removal of stop words and applying topic modelling to the texts, we believe that this issue is more nuanced than that.",
"Different social media (such as Twitter and Facebook) are used by people for different purposes, and thus Twitter and Facebook entries by the same person are likely to be thematically different. So it is likely that straightforward comparison of language models would be inefficient for this task.",
"One possible solution for this problem is to look at users' language models not in isolation, but in comparison to the languages models of everyone else. In other words, identify features of a particular language model that are characteristic to its corresponding user, and then use these features to estimate similarity between different accounts. This is a task that Term Frequency-Inverse Document Frequency, or TF-IDF, combined with cosine similarity, can manage.",
"TF-IDF is a method of converting text into numbers so that it can be represented meaningfully by a vector BIBREF23 . TF-IDF is the product of two statistics, TF or Term Frequency and IDF or Inverse Document Frequency. Term Frequency measures the number of times a term (word) occurs in a document. Since each document will be of different size, we need to normalize the document based on its size. We do this by dividing the Term Frequency by the total number of terms.",
"TF considers all terms as equally important, however, certain terms that occur too frequently should have little effect (for example, the term “the\"). And conversely, terms that occur less in a document can be more relevant. Therefore, in order to weigh down the effects of the terms that occur too frequently and weigh up the effects of less frequently occurring terms, an Inverse Document Frequency factor is incorporated which diminishes the weight of terms that occur very frequently in the document set and increases the weight of terms that occur rarely. Generally speaking, the Inverse Document Frequency is a measure of how much information a word provides, that is, whether the term is common or rare across all documents.",
"Using TF-IDF, we derive a vector from the corpus of each account. We measure the similarity between two accounts using cosine similarity: $Similarity(d1,d2) = \\frac{d1 \\cdot d2}{||d1||\\times ||d2||}$ ",
"Here, $d1 \\cdot d2$ is the dot product of two documents, and $||d1||\\times ||d2||$ is the product of the magnitude of the two documents. Using TD-IDF and cosine similarity, we achieved significantly better results than the last two methods, with an accuracy of $0.21$ and average rank of 999.",
"TF-IDF can be thought of as a heuristic measure of the extent to which different words are characteristic of a user. We came up with a new, theoretically motivated measure of “being characteristic\" for words. We considered the following setup :",
"The whole corpus of the $11,224$ Twitter and Facebook accounts was treated as one long string;",
"For each token in the string, we know the user who produced it. Imagine that we removed this information and are now making a guess as to who the user was. This will give us a probability distribution over all users;",
"Now imagine that we are making a number of the following samples: randomly selecting a word from the string, taking the true user, $TU$ for this word and a guessed user, $GU$ from correspondent probability distribution. Intuitively, the more often a particular pair, $TU=U_{1}, GU=U_{2}$ appear together, the stronger is the similarity between $U_{1}$ and $U_{2}$ ;",
"We then use mutual information to measure the strength of association. In this case, it will be the mutual information BIBREF22 between random variables, $TU=U_{1}$ and $GU=U_{2}$ . This mutual information turns out to be proportional to the probabilities of $U_{1}$ and $U_{2}$ in the dataset, which is undesirable for a similarity measure. To correct for this, we divide it by the probabilities of $U_{1}$ and $U_{2}$ ;",
"We call this model the confusion model, as it evaluated the probability that $U_{1}$ will be confused for $U_{2}$ on the basis of a single word. The expression for the similarity value according to the model is $S\\times log(S)$ , where $S$ is: $S=\\sum _{w} p(w)p(U_{1}|w)p(U_{2}|w)$ ",
"Note that if $U_{1}=U_{2}$ , the words contributing most to the sum will be ordered by their “degree of being characteristic\". The values, $p(w)$ and $p(u|w)$ have to be estimated from the corpus. To do that, we assumed that the corpus was produced using the following auxiliary model:",
"For each token, a user is selected from a set of users by multinomial distribution;",
"A word is selected from a multinomial distribution of words for this user to produce the token.",
"We used Dirichlet distributions BIBREF24 as priors over multinomials. This method outperforms all other methods with an accuracy of $0.27$ and average rank of 859."
],
[
"Another valuable source of information in matching user accounts, is the activity patterns of users. A measure of activity is the time and the intensity at which users utilize a social network or media site. All public social networks, including publicly available Twitter and Facebook data, make this information available. Previous research has shown temporal information (and other contextual information, such as spatial information) to be correlated with the linguistic activities of people BIBREF25 , BIBREF26 .",
"We extracted the following discrete temporal features from our corpus: month (12 bins), day of month (31 bins), day of week (7 bins) and hour (24 bins). We chose these features to capture fine to coarse-level temporal patterns of user activity. For example, commuting to work is a recurring pattern linked to a time of day, while paying bills is more closely tied to the day of the month, and vacations are more closely tied to the month.",
"We treated each of these bins as a word, so that we could use the same methods used in the last section to measure the similarity between the temporal activity patterns of pairs of accounts (this will also help greatly for creating the combined model, explained in the next section). In other word, the 12 bins in month were set to $w_1$ . . . $w_{12}$ , the 31 bins in day of month to $w_{13}$ . . . $w_{43}$ , the 7 bins in day of week to $w_{44}$ . . . $w_{50}$ , and the 24 bins in time were set to $w_{51}$ . . . $w_{74}$ . Thus, we had a corpus of 74 words.",
"For example, a post on Friday, August 5th at 2 AM would be translated to $\\lbrace w_8,w_{17},w_{48},w_{53}\\rbrace $ , corresponding to August, 5th, Friday, 2 AM respectively. Since we are only using unigram models, the order of words does not matter. As with the language models described in the last section, all of the probability distributions were calculated using Witten-Bell smoothing. We used the same four methods as in the last section to create our temporal models.",
"Table 3 shows the performance of each of these models. Although the performance of the temporal models were not as strong as the linguistic ones, they all vastly outperformed the baseline. Also, note that here as with the linguistic models, the confusion model greatly outperformed the other models."
],
[
"Finally, we created a combined temporal-linguistic model. Since both the linguistic and the temporal models were built using the same framework, it was fairly simple to combine the two models. The combined model was created by merging the linguistic and temporal corpora and vocabularies. (Recall that we treated temporal features as words). We then experimented with the same four methods as in the last two sections to create our combined models.",
"Table 4 shows the performance of each of these models. Across the board, the combined models outperformed their corresponding linguistic and temporal models, though the difference with the linguistic models were not as great. These results suggest that at some level the temporal and the linguistic \"styles\" of users provide non-overlapping cues about the identity of said users. Also, note that as with the linguistic and temporal models, our combined confusion model outperformed the other combined models.",
"Another way to evaluate the performance of the different combined models is through the rank-statistics plot. This is shown in Figure 2 . The figure shows the distribution of the ranks of the $5,612$ users for different combined models. The x-axis is the rank percentile (divided into bins of $5\\%$ ), the y-axis is the percentage of the users that fall in each bin. For example, for the confusion model, $69\\%$ (3880) of the $5,612$ users were correctly linked between Twitter and Facebook when looking at the top $5\\%$ (281) of the predictions by the model. From the figure, you can clearly see that the confusion model is superior to the other models, with TF-IDF a close second. You can also see from the figure that the rank plot for the random baseline is a horizontal line, with each rank percentile bin having $5\\%$ of the users ( $5\\%$ because the rank percentiles were divided into bins of $5\\%$ )."
],
[
"Matching profiles across social networks is a hard task for humans. It is a task on par with detecting plagiarism, something a non-trained person (or sometimes even a trained person) cannot easily accomplish. (Hence the need for the development of the field of stylometry in early Renaissance.) Be that as it may, we wanted to evaluate our model against humans to make sure that it is indeed outperforming them.",
"We designed an experiment to compare the performance of human judges to our best model, the temporal-linguistic confusion model. The task had to be simple enough so that human judges could attempt it with ease. For example, it would have been ludicrous to ask the judges to sort $11,224$ accounts into $5,612$ matching pairs.",
"Thus, we randomly selected 100 accounts from distinct users from our collection of $11,224$ accounts. A unique list of 10 candidate accounts was created for each of the 100 accounts. Each list contained the correct matching account mixed in with 9 other randomly selected accounts. The judges were then presented with the 100 accounts one at a time and asked to pick the correct matching account from the list of 10 candidate accounts. For simplicity, we did not ask the judges to do any ranking other than picking the one account that they thought matched the original account. We then measured the accuracy of the judges based on how many of the 100 accounts they correctly matched. We had our model do the exact same task with the same dataset. A random baseline model would have a one in ten chance of getting the correct answer, giving it an accuracy of $0.10$ .",
"We had a total of 3 English speaking human judges from Amazon Mechanical Turk (which is an tool for crowd-sourcing of human annotation tasks) . For each task, the judges were shown the link to one of the 100 account, and its 10 corresponding candidate account links. The judges were allowed to explore each of the accounts as much as they wanted to make their decision (since all these accounts were public, there were no privacy concerns).",
"Table 5 shows the performance of each of the three human judges, our model and the random baseline. Since the task is so much simpler than pairing $11,224$ accounts, our combined confusion model had a much greater accuracy than reported in the last section. With an accuracy of $0.86$ , our model vastly outperformed even the best human judge, at $0.69$ . Overall, our model beat the average human performance by $0.26$ ( $0.86$ to $0.60$ respectively) which is a $43\\%$ relative (and $26\\%$ absolute) improvement."
],
[
"Motivated by the growing interest in matching user account across different social media and networking sites, in this paper we presented models for Digital Stylometry, which is a method for matching users through stylometry inspired techniques. We used temporal and linguistic patterns of users to do the matching.",
"We experimented with linguistic, temporal, and combined temporal-linguistic models using standard and novel techniques. The methods based on our novel confusion model outperformed the more standard ones in all cases. We showed that both temporal and linguistic information are useful for matching users, with the best temporal model performing with an accuracy of $.10$ and the best linguistic model performing with an accuracy of $0.27$ . Even though the linguistic models vastly outperformed the temporal models, when combined the temporal-linguistic models outperformed both with an accuracy of $0.31$ . The improvement in the performance of the combined models suggests that although temporal information is dwarfed by linguistic information, in terms of its contribution to digital stylometry, it nonetheless provides non-overlapping information with the linguistic data.",
"Our models were evaluated on $5,612$ users with a total of $11,224$ accounts on Twitter and Facebook combined. In contrast to other works in this area, we did not use any profile information in our matching models. The only information that was used in our models were the time and the linguistic content of posts by the users. This is in accordance with traditional stylometry techniques (since people could lie or misstate this information). Also, we wanted to show that there are implicit clues about the identity of users in the content (language) and context (time) of the users' interactions with social networks that can be used to link their accounts across different services.",
"In addition to the technical contributions (such as our confusion model), we hope that this paper is able to shed light on the relative ease with which seemingly innocuous information can be used to track users across social networks, even when signing up on different services using completely different account and profile information. In the future, we hope to extend this work to other social network sites, and to incorporate more sophisticated techniques, such as topic modelling and opinion mining, into our models."
]
],
"section_name": [
"Introduction",
"Related Work",
"Data Collection and Datasets",
"Models",
"Linguistic Models",
"Temporal Models",
"Combined Models",
"Evaluation Against Humans",
"Discussion and Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"17afde77c24d8fc689f8d2ff8304eb3b2d91cc46",
"5ee73fd0c6d68e3a4fa98bf8db02c24145db6b50"
],
"answer": [
{
"evidence": [
"Motivated by traditional stylometry and the growing interest in matching user accounts across Internet services, we created models for Digital Stylometry, which fuses traditional stylometry techniques with big-data driven social informatics methods used commonly in analyzing social networks. Our models use linguistic and temporal activity patterns of users on different accounts to match accounts belonging to the same person. We evaluated our models on $11,224$ accounts belonging to $5,612$ distinct users on two of the largest social media networks, Twitter and Facebook. The only information that was used in our models were the time and the linguistic content of posts by the users. We intentionally did not use any other information, especially the potentially personally identifiable information that was explicitly provided by the user, such as the screen name, birthday or location. This is in accordance with traditional stylometry techniques, since people could misstate, omit, or lie about this information. Also, we wanted to show that there are implicit clues about the identities of users in the content (language) and context (time) of the users' interactions with social networks that can be used to link their accounts across different services."
],
"extractive_spans": [],
"free_form_answer": "No profile elements",
"highlighted_evidence": [
"The only information that was used in our models were the time and the linguistic content of posts by the users. We intentionally did not use any other information, especially the potentially personally identifiable information that was explicitly provided by the user, such as the screen name, birthday or location."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Motivated by traditional stylometry and the growing interest in matching user accounts across Internet services, we created models for Digital Stylometry, which fuses traditional stylometry techniques with big-data driven social informatics methods used commonly in analyzing social networks. Our models use linguistic and temporal activity patterns of users on different accounts to match accounts belonging to the same person. We evaluated our models on $11,224$ accounts belonging to $5,612$ distinct users on two of the largest social media networks, Twitter and Facebook. The only information that was used in our models were the time and the linguistic content of posts by the users. We intentionally did not use any other information, especially the potentially personally identifiable information that was explicitly provided by the user, such as the screen name, birthday or location. This is in accordance with traditional stylometry techniques, since people could misstate, omit, or lie about this information. Also, we wanted to show that there are implicit clues about the identities of users in the content (language) and context (time) of the users' interactions with social networks that can be used to link their accounts across different services."
],
"extractive_spans": [
"time and the linguistic content of posts by the users"
],
"free_form_answer": "",
"highlighted_evidence": [
"The only information that was used in our models were the time and the linguistic content of posts by the users."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe",
"3f36bf0444f095984404b6a31a9210ea510697ab"
]
},
{
"annotation_id": [
"c564a16d4517b0ca026f207875f16f2ab6587548",
"ea9bb61707caf43841a268e8043014ac0610fbaa"
],
"answer": [
{
"evidence": [
"Other than the obvious technical goal, the purpose of this paper is to shed light on the relative ease with which seemingly innocuous information can be used to track users across social networks, even when signing up on different services using completely different account and profile information (such as name and birthday). This paper is as much of a technical contribution, as it is a warning to users who increasingly share a large part of their private lives on these services."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
" the purpose of this paper is to shed light on the relative ease with which seemingly innocuous information can be used to track users across social networks",
"This paper is as much of a technical contribution, as it is a warning to users who increasingly share a large part of their private lives on these services."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"Today, there are millions of users with accounts and profiles on many different social media and networking sites. It is not uncommon for users to have multiple accounts on different social media and networking sites. With so many networking, emailing, and photo sharing sites on the Web, a user often accumulates an abundance of account profiles. There is an increasing focus from the academic and business worlds on aggregating user information across different sites, allowing for the development of more complete user profiles. There currently exist several businesses that focus on this task BIBREF1 , BIBREF2 , BIBREF3 . These businesses use the aggregate profiles for advertising, background checks or customer service related tasks. Moreover, profile matching across social networks, can assist the growing field of social media rumor detection BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , since many malicious rumors are spread on different social media platforms by the same people, using different accounts and usernames.",
"Other than the obvious technical goal, the purpose of this paper is to shed light on the relative ease with which seemingly innocuous information can be used to track users across social networks, even when signing up on different services using completely different account and profile information (such as name and birthday). This paper is as much of a technical contribution, as it is a warning to users who increasingly share a large part of their private lives on these services."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"There currently exist several businesses that focus on this task BIBREF1 , BIBREF2 , BIBREF3 . These businesses use the aggregate profiles for advertising, background checks or customer service related tasks.",
"Other than the obvious technical goal, the purpose of this paper is to shed light on the relative ease with which seemingly innocuous information can be used to track users across social networks, even when signing up on different services using completely different account and profile information (such as name and birthday). This paper is as much of a technical contribution, as it is a warning to users who increasingly share a large part of their private lives on these services."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"101dbdd2108b3e676061cb693826f0959b47891b",
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe"
]
},
{
"annotation_id": [
"78305fc2bb08272a42086d95d4dfc3be6f37d6ee",
"e86da11ba98483560f5cecd341eb9baf71fae9a5"
],
"answer": [
{
"evidence": [
"For the purposes of this paper, we focused on matching accounts between two of the largest social networks: Twitter and Facebook. In order to proceed with our study, we needed a sizeable (few thousand) number of English speaking users with accounts on both Twitter and Facebook. We also needed to know the precise matching between the Twitter and Facebook accounts for our ground truth.",
"To that end, we crawled publicly available, English-language, Google Plus accounts using the Google Plus API and scraped links to the users' other social media profiles. (Note that one of the reasons why we used Twitter and Facebook is that they were two of the most common sites linked to on Google Plus). We used a third party social media site (i.e., Google Plus), one that was not used in our analysis to compile our ground truth in order to limit selection bias in our data collection.",
"We discarded all users who did not link to an account for both Twitter and Facebook and those whose accounts on either of these sites were not public. We then used the APIs of Twitter and Facebook to collect posts made by the users on these sites. We only collected the linguistic content and the date and time at the which the posts were made. For technical and privacy reasons, we did not collect any information from the profile of the users, such as the location, screen name, or birthday."
],
"extractive_spans": [
"We used a third party social media site (i.e., Google Plus), one that was not used in our analysis to compile our ground truth",
"We discarded all users who did not link to an account for both Twitter and Facebook"
],
"free_form_answer": "",
"highlighted_evidence": [
"For the purposes of this paper, we focused on matching accounts between two of the largest social networks: Twitter and Facebook. In order to proceed with our study, we needed a sizeable (few thousand) number of English speaking users with accounts on both Twitter and Facebook. We also needed to know the precise matching between the Twitter and Facebook accounts for our ground truth.\n\nTo that end, we crawled publicly available, English-language, Google Plus accounts using the Google Plus API and scraped links to the users' other social media profiles. (Note that one of the reasons why we used Twitter and Facebook is that they were two of the most common sites linked to on Google Plus). We used a third party social media site (i.e., Google Plus), one that was not used in our analysis to compile our ground truth in order to limit selection bias in our data collection.\n\nWe discarded all users who did not link to an account for both Twitter and Facebook and those whose accounts on either of these sites were not public. We then used the APIs of Twitter and Facebook to collect posts made by the users on these sites."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To that end, we crawled publicly available, English-language, Google Plus accounts using the Google Plus API and scraped links to the users' other social media profiles. (Note that one of the reasons why we used Twitter and Facebook is that they were two of the most common sites linked to on Google Plus). We used a third party social media site (i.e., Google Plus), one that was not used in our analysis to compile our ground truth in order to limit selection bias in our data collection."
],
"extractive_spans": [
"We used a third party social media site (i.e., Google Plus)"
],
"free_form_answer": "",
"highlighted_evidence": [
"We used a third party social media site (i.e., Google Plus), one that was not used in our analysis to compile our ground truth in order to limit selection bias in our data collection."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"18f4d5a2eb93a969d55361267e74aa0c4f6f82fe",
"3f36bf0444f095984404b6a31a9210ea510697ab"
]
}
],
"nlp_background": [
"two",
"infinity",
"infinity"
],
"paper_read": [
"somewhat",
"no",
"no"
],
"question": [
"what elements of each profile did they use?",
"Does this paper discuss the potential these techniques have for invading user privacy?",
"How is the gold standard defined?"
],
"question_id": [
"f264612db9096caf938bd8ee4085848143b34f81",
"da0a2195bbf6736119ff32493898d2aadffcbcb8",
"f5513f9314b9d7b41518f98c6bc6d42b8555258d"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"social",
"social",
"social"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Fig. 1: Distribution of number of posts per user for Twitter and Facebook, from our collected dataset.",
"Table 2: Performance of different linguistic models, tested on 5,612 users (11,224 accounts), sorted by accuracy. Best results are shown bold.",
"Table 3: Performance of different temporal models, tested on 5,612 users (11,224 accounts), sorted by accuracy. Best results are shown bold.",
"Table 4: Performance of different combined models, tested on 5,612 users (11,224 accounts), sorted by accuracy. Best results are shown bold.",
"Fig. 2: Rank percentiles of different combined temporal-linguistic models.",
"Table 5: Performance of the three human judges and our best model, the temporal-linguistic confusion model."
],
"file": [
"5-Figure1-1.png",
"7-Table2-1.png",
"10-Table3-1.png",
"11-Table4-1.png",
"12-Figure2-1.png",
"13-Table5-1.png"
]
} | [
"what elements of each profile did they use?"
] | [
[
"1605.05166-Introduction-2"
]
] | [
"No profile elements"
] | 349 |
1607.00167 | SentiBubbles: Topic Modeling and Sentiment Visualization of Entity-centric Tweets | Social Media users tend to mention entities when reacting to news events. The main purpose of this work is to create entity-centric aggregations of tweets on a daily basis. By applying topic modeling and sentiment analysis, we create data visualization insights about current events and people's reactions to those events from an entity-centric perspective.
"paragraphs": [
[
"Entities play a central role in the interplay between social media and online news BIBREF0 . Everyday millions of tweets are generated about local and global news, including people reactions and opinions regarding the events displayed on those news stories. Trending personalities, organizations, companies or geographic locations are building blocks of news stories and their comments. We propose to extract entities from tweets and their associated context in order to understand what is being said on Twitter about those entities and consequently to create a picture of people reactions to recent events.",
"With this in mind and using text mining techniques, this work explores and evaluates ways to characterize given entities by finding: (a) the main terms that define that entity and (b) the sentiment associated with it. To accomplish these goals we use topic modeling BIBREF1 to extract topics and relevant terms and phrases of daily entity-tweets aggregations, as well as, sentiment analysis BIBREF2 to extract polarity of frequent subjective terms associated with the entities. Since public opinion is, in most cases, not constant through time, this analysis is performed on a daily basis. Finally we create a data visualization of topics and sentiment that aims to display these two dimensions in an unified and intelligible way.",
"The combination of Topic Modeling and Sentiment Analysis has been attempted before: one example is a model called TSM - Topic-Sentiment Mixture Model BIBREF3 that can be applied to any Weblog to determine a correlation between topic and sentiment. Another similar model has been proposed proposed BIBREF4 in which the topic extraction is achieved using LDA, similarly to the model that will be presented. Our work distinguishes from previous work by relying on daily entity-centric aggregations of tweets to create a meta-document which will be used as input for topic modeling and sentiment analysis."
],
[
"The main goal of the proposed system is to obtain a characterization of a certain entity regarding both mentioned topics and sentiment throughout time, i.e. obtain a classification for each entity/day combination."
],
[
"Figure 1 depicts an overview of the data mining process pipeline applied in this work. To collect and process raw Twitter data, we use an online reputation monitoring platform BIBREF5 which can be used by researchers interested in tracking entities on the web. It collects tweets from a pre-defined sample of users and applies named entity disambiguation BIBREF6 . In this particular scenario, we use tweets from January 2014 to December 2015. In order to extract tweets related to an entity, two main characteristics must be defined: its canonical name, that should clearly identify it (e.g. “Cristiano Ronaldo\") and a set of keywords that most likely refer to that particular entity when mentioned in a sentence (e.g.“Ronaldo\", “CR7\"). Entity related data is provided from a knowledge base of Portuguese entities. These can then be used to retrieve tweets from that entity, by selecting the ones that contain one or more of these keywords."
],
[
"Before actually analyzing the text in the tweets, we apply the following operations:",
"If any tweet has less than 40 characters it is discarded. These tweets are considered too small to have any meaningful content;",
"Remove all hyperlinks and special characters and convert all alphabetic characters to lower case;",
"Keywords used to find a particular entity are removed from tweets associated to it. This is done because these words do not contribute to either topic or sentiment;",
"A set of portuguese and english stopwords are removed - these contain very common and not meaningful words such as “the\" or “a\";",
"Every word with less than three characters is removed, except some whitelisted words that can actually be meaningful (e.g. “PSD\" may refer to a portuguese political party);",
"These steps serve the purpose of sanitizing and improving the text, as well as eliminating some words that may undermine the results of the remaining steps. The remaining words are then stored, organized by entity and day, e.g. all of the words in tweets related to Cristiano Ronaldo on the 10th of July, 2015."
],
[
"Topic extraction is achieved using LDA, BIBREF1 which can determine the topics in a set of documents (a corpus) and a document-topic distribution. Since we create each document in the corpus containing every word used in tweets related to an entity, during one day, we can retrieve the most relevant topics about an entity on a daily basis. From each of those topics we select the most related words in order to identify it. The system supports three different approaches with LDA, yielding varying results: (a) creating a single model for all entities (i.e. a single corpus), (b) creating a model for each group of entities that fit in a similar category (e.g. sports, politics) and (c) creating a single model for each entity."
],
[
"A word-level sentiment analysis was made, using Sentilex-PT BIBREF7 - a sentiment lexicon for the portuguese language, which can be used to determine the sentiment polarity of each word, i.e. a value of -1 for negative words, 0 for neutral words and 1 for positive words. A visualization system was created that displays the most mentioned words for each entity/day and their respective polarity using correspondingly colored and sized circles, which are called SentiBubbles."
],
[
"The user interface allows the user to input an entity and a time period he wants to learn about, displaying four sections. In the first one, the most frequent terms used that day are shown inside circles. These circles have two properties: size and color. Size is defined by the term's frequency and the color by it's polarity, with green being positive, red negative and blue neutral. Afterwards, it displays some example tweets with the words contained in the circles highlighted with their respective sentiment color. The user may click a circle to display tweets containing that word. A trendline is also created, displaying in a chart the number of tweets per day, throughout the two years analyzed. Finally, the main topics identified are shown, displaying the identifying set of words for each topic."
]
],
"section_name": [
"Introduction",
"Methodology",
"Tweets Collection",
"Tweets Pre-processing",
"Topic Modeling",
"Sentiment Analysis",
"Visualization"
]
} | {
"answers": [
{
"annotation_id": [
"2fcc6e42f3e4b56dbba7f8426bae3fde3dc1c735",
"ee5104d9d87ff5755f3996fd2814d336355393a1"
],
"answer": [
{
"evidence": [
"Figure 1 depicts an overview of the data mining process pipeline applied in this work. To collect and process raw Twitter data, we use an online reputation monitoring platform BIBREF5 which can be used by researchers interested in tracking entities on the web. It collects tweets from a pre-defined sample of users and applies named entity disambiguation BIBREF6 . In this particular scenario, we use tweets from January 2014 to December 2015. In order to extract tweets related to an entity, two main characteristics must be defined: its canonical name, that should clearly identify it (e.g. “Cristiano Ronaldo\") and a set of keywords that most likely refer to that particular entity when mentioned in a sentence (e.g.“Ronaldo\", “CR7\"). Entity related data is provided from a knowledge base of Portuguese entities. These can then be used to retrieve tweets from that entity, by selecting the ones that contain one or more of these keywords."
],
"extractive_spans": [
"from January 2014 to December 2015"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this particular scenario, we use tweets from January 2014 to December 2015."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Figure 1 depicts an overview of the data mining process pipeline applied in this work. To collect and process raw Twitter data, we use an online reputation monitoring platform BIBREF5 which can be used by researchers interested in tracking entities on the web. It collects tweets from a pre-defined sample of users and applies named entity disambiguation BIBREF6 . In this particular scenario, we use tweets from January 2014 to December 2015. In order to extract tweets related to an entity, two main characteristics must be defined: its canonical name, that should clearly identify it (e.g. “Cristiano Ronaldo\") and a set of keywords that most likely refer to that particular entity when mentioned in a sentence (e.g.“Ronaldo\", “CR7\"). Entity related data is provided from a knowledge base of Portuguese entities. These can then be used to retrieve tweets from that entity, by selecting the ones that contain one or more of these keywords."
],
"extractive_spans": [
"January 2014 to December 2015"
],
"free_form_answer": "",
"highlighted_evidence": [
" In this particular scenario, we use tweets from January 2014 to December 2015. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"2c29b2066591ca1732e114780824dd8672e6ab0d",
"2db55d7db8c147f32094da08fe0010d2655134ef"
],
"answer": [
{
"evidence": [
"The combination of Topic Modeling and Sentiment Analysis has been attempted before: one example is a model called TSM - Topic-Sentiment Mixture Model BIBREF3 that can be applied to any Weblog to determine a correlation between topic and sentiment. Another similar model has been proposed proposed BIBREF4 in which the topic extraction is achieved using LDA, similarly to the model that will be presented. Our work distinguishes from previous work by relying on daily entity-centric aggregations of tweets to create a meta-document which will be used as input for topic modeling and sentiment analysis.",
"A word-level sentiment analysis was made, using Sentilex-PT BIBREF7 - a sentiment lexicon for the portuguese language, which can be used to determine the sentiment polarity of each word, i.e. a value of -1 for negative words, 0 for neutral words and 1 for positive words. A visualization system was created that displays the most mentioned words for each entity/day and their respective polarity using correspondingly colored and sized circles, which are called SentiBubbles."
],
"extractive_spans": [
"A word-level sentiment analysis was made, using Sentilex-PT BIBREF7 - a sentiment lexicon for the portuguese language, which can be used to determine the sentiment polarity of each word, i.e. a value of -1 for negative words, 0 for neutral words and 1 for positive words"
],
"free_form_answer": "",
"highlighted_evidence": [
"Sentiment Analysis\nA word-level sentiment analysis was made, using Sentilex-PT BIBREF7 - a sentiment lexicon for the portuguese language, which can be used to determine the sentiment polarity of each word, i.e. a value of -1 for negative words, 0 for neutral words and 1 for positive words. A visualization system was created that displays the most mentioned words for each entity/day and their respective polarity using correspondingly colored and sized circles, which are called SentiBubbles."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"A word-level sentiment analysis was made, using Sentilex-PT BIBREF7 - a sentiment lexicon for the portuguese language, which can be used to determine the sentiment polarity of each word, i.e. a value of -1 for negative words, 0 for neutral words and 1 for positive words. A visualization system was created that displays the most mentioned words for each entity/day and their respective polarity using correspondingly colored and sized circles, which are called SentiBubbles."
],
"extractive_spans": [],
"free_form_answer": "Lexicon based word-level SA.",
"highlighted_evidence": [
"A word-level sentiment analysis was made, using Sentilex-PT BIBREF7 - a sentiment lexicon for the portuguese language, which can be used to determine the sentiment polarity of each word, i.e. a value of -1 for negative words, 0 for neutral words and 1 for positive words"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"17cdc13b6e5264488baeec0b085ad27c5c2f0b27",
"89ba41e8f8b04fc49bf12a5fc9bb4260bbc46702"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"57f17f618c06fde0f63999c565c66bfb7b3bf851",
"afdcc7e21ccfc007bbf796cfb2634957a652ac87"
],
"answer": [
{
"evidence": [
"Figure 1 depicts an overview of the data mining process pipeline applied in this work. To collect and process raw Twitter data, we use an online reputation monitoring platform BIBREF5 which can be used by researchers interested in tracking entities on the web. It collects tweets from a pre-defined sample of users and applies named entity disambiguation BIBREF6 . In this particular scenario, we use tweets from January 2014 to December 2015. In order to extract tweets related to an entity, two main characteristics must be defined: its canonical name, that should clearly identify it (e.g. “Cristiano Ronaldo\") and a set of keywords that most likely refer to that particular entity when mentioned in a sentence (e.g.“Ronaldo\", “CR7\"). Entity related data is provided from a knowledge base of Portuguese entities. These can then be used to retrieve tweets from that entity, by selecting the ones that contain one or more of these keywords.",
"A set of portuguese and english stopwords are removed - these contain very common and not meaningful words such as “the\" or “a\";",
"A word-level sentiment analysis was made, using Sentilex-PT BIBREF7 - a sentiment lexicon for the portuguese language, which can be used to determine the sentiment polarity of each word, i.e. a value of -1 for negative words, 0 for neutral words and 1 for positive words. A visualization system was created that displays the most mentioned words for each entity/day and their respective polarity using correspondingly colored and sized circles, which are called SentiBubbles."
],
"extractive_spans": [
"Portuguese "
],
"free_form_answer": "",
"highlighted_evidence": [
"Entity related data is provided from a knowledge base of Portuguese entities. ",
"A set of portuguese and english stopwords are removed - these contain very common and not meaningful words such as “the\" or “a\";",
"A word-level sentiment analysis was made, using Sentilex-PT BIBREF7 - a sentiment lexicon for the portuguese language, which can be used to determine the sentiment polarity of each word, i.e. a value of -1 for negative words, 0 for neutral words and 1 for positive words. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Figure 1 depicts an overview of the data mining process pipeline applied in this work. To collect and process raw Twitter data, we use an online reputation monitoring platform BIBREF5 which can be used by researchers interested in tracking entities on the web. It collects tweets from a pre-defined sample of users and applies named entity disambiguation BIBREF6 . In this particular scenario, we use tweets from January 2014 to December 2015. In order to extract tweets related to an entity, two main characteristics must be defined: its canonical name, that should clearly identify it (e.g. “Cristiano Ronaldo\") and a set of keywords that most likely refer to that particular entity when mentioned in a sentence (e.g.“Ronaldo\", “CR7\"). Entity related data is provided from a knowledge base of Portuguese entities. These can then be used to retrieve tweets from that entity, by selecting the ones that contain one or more of these keywords.",
"A set of portuguese and english stopwords are removed - these contain very common and not meaningful words such as “the\" or “a\";",
"Every word with less than three characters is removed, except some whitelisted words that can actually be meaningful (e.g. “PSD\" may refer to a portuguese political party);"
],
"extractive_spans": [
"portuguese and english"
],
"free_form_answer": "",
"highlighted_evidence": [
"Entity related data is provided from a knowledge base of Portuguese entities.",
"A set of portuguese and english stopwords are removed - these contain very common and not meaningful words such as “the\" or “a\";\n\nEvery word with less than three characters is removed, except some whitelisted words that can actually be meaningful (e.g. “PSD\" may refer to a portuguese political party);"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"What is the timeframe of the current events?",
"What model was used for sentiment analysis?",
"How many tweets did they look at?",
"What language are the tweets in?"
],
"question_id": [
"d97843afec733410d2c580b4ec98ebca5abf2631",
"813a8156f9ed8ead53dda60ef54601f6ca8076e9",
"dd807195d10c492da2b0da8b2c56b8f7b75db20e",
"aa287673534fc05d8126c8e3486ca28821827034"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: The system pipeline"
],
"file": [
"2-Figure1-1.png"
]
} | [
"What model was used for sentiment analysis?"
] | [
[
"1607.00167-Sentiment Analysis-0",
"1607.00167-Introduction-2"
]
] | [
"Lexicon based word-level SA."
] | 350 |
1605.05156 | Tweet Acts: A Speech Act Classifier for Twitter | Speech acts are a way to conceptualize speech as action. This holds true for communication on any platform, including social media platforms such as Twitter. In this paper, we explored speech act recognition on Twitter by treating it as a multi-class classification problem. We created a taxonomy of six speech acts for Twitter and proposed a set of semantic and syntactic features. We trained and tested a logistic regression classifier using a data set of manually labelled tweets. Our method achieved a state-of-the-art performance with an average F1 score of more than $0.70$. We also explored classifiers with three different granularities (Twitter-wide, type-specific and topic-specific) in order to find the right balance between generalization and overfitting for our task. | {
"paragraphs": [
[
"In recent years, the micro-blogging platform Twitter has become a major social media platform with hundreds of millions of users. People turn to Twitter for a variety of purposes, from everyday chatter to reading about breaking news. The volume plus the public nature of Twitter (less than 10% of Twitter accounts are private BIBREF0 ) have made Twitter a great source of data for social and behavioural studies. These studies often require an understanding of what people are tweeting about. Though this can be coded manually, in order to take advantage of the volume of tweets available automatic analytic methods have to be used. There has been extensive work done on computational methods for analysing the linguistic content of tweets. However, there has been very little work done on classifying the pragmatics of tweets. Pragmatics looks beyond the literal meaning of an utterance and considers how context and intention contribute to meaning. A major element of pragmatics is the intended communicative act of an utterance, or what the utterance was meant to achieve. It is essential to study pragmatics in any linguistic system because at the core of linguistic analysis is studying what language is used for or what we do with language. Linguistic communication and meaning can not truly be studied without studying pragmatics. Proposed by Austin BIBREF1 and refined by Searle BIBREF2 , speech act theory can be used to study pragmatics. Amongst other things, the theory provides a formalized taxonomy BIBREF3 of a set of communicative acts, more commonly known as speech acts.",
"There has been extensive research done on speech act (also known as dialogue act) classification in computational linguistics, e.g., BIBREF4 . Unfortunately, these methods do not map well to Twitter, given the noisy and unconventional nature of the language used on the platform. In this work, we created a supervised speech act classifier for Twitter, using a manually annotated dataset of a few thousand tweets, in order to be better understand the meaning and intention behind tweets and uncover the rich interactions between the users of Twitter. Knowing the speech acts behind a tweet can help improve analysis of tweets and give us a better understanding of the state of mind of the users. Moreover, ws we have shown in our previous works BIBREF5 , BIBREF6 , speech act classification is essential for detection of rumors on Twitter. Finally, knowing the distribution of speech acts of tweets about a particular topic can reveal a lot about the general attitude of users about that topic (e.g., are they confused and are asking a lot of questions? Are they outraged and demanding action? Etc)."
],
[
"Speech act recognition is a multi-class classification problem. As with any other supervised classification problem, a large labelled dataset is needed. In order to create such a dataset we first created a taxonomy of speech acts for Twitter by identifying and defining a set of commonly occurring speech acts. Next, we manually annotated a large collection of tweets using our taxonomy. Our primary task was to use the expertly annotated dataset to analyse and select various syntactic and semantic features derived from tweets that are predictive of their corresponding speech acts. Using our labelled dataset and robust features we trained standard, off-the-shelf classifiers (such as SVMs, Naive Bayes, etc) for our speech act recognition task.",
"Using Searle's speech act taxonomy BIBREF3 , we established a list of six speech act categories that are commonly seen on Twitter: Assertion, Recommendation Expression, Question, Request, and Miscellaneous. Table TABREF1 shows an example tweet for each of these categories."
],
[
"Given the diversity of topics talked about on Twitter, we wanted to explore topic and type dependent speech act classifiers. We used Zhao et al.'s BIBREF7 definitions for topic and type. A topic is a subject discussed in one or more tweets (e.g., Boston Marathon bombings, Red Sox, etc). The type characterizes the nature of the topic, these are: Entity-oriented, Event-oriented topics, and Long-standing topics (topics about subjects that are commonly discussed).",
"We selected two topics for each of the three topic types described in the last section for a total of six topics (see Figure FIGREF2 for list of topics). We collected a few thousand tweets from the Twitter public API for each of these topics using topic-specific queries (e.g., #fergusonriots, #redsox, etc). We then asked three undergraduate annotators to independently annotate each of the tweets with one of the speech act categories described earlier. The kappa for the annotators was INLINEFORM0 . For training, we used the label that the majority of annotators agreed upon (7,563 total tweets).",
"The distribution of speech acts for each of the six topics and three types is shown in Figure FIGREF2 . There is much greater similarity between the distribution of speech acts of topics of the same type (e.g, Ashton Kutcher and Red Sox) compared to topics of different types. Though each topic type seems to have its own distinct distribution, Entity and Event types have much closer resemblance to each other than Long-standing. Assertions and expressions dominate in Entity and Event types with questions beings a distant third, while in Long-standing, recommendations are much more dominant with assertions being less so. This agrees with Zhao et al.'s BIBREF7 findings that tweets about Long-standings topics tend to be more opinionated which would result in more recommendations and expressions and fewer assertions.",
"The great variance across types and the small variance within types suggests that a type-specific classifier might be the correct granularity for Twitter speech act classification (with topic-specific being too narrow and Twitter-wide being too general). We will explore this in greater detail in the next sections of this paper."
],
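A minimal sketch of the label-aggregation step described above, assuming three independent annotations per tweet and keeping only tweets where at least two annotators agree; this is an illustrative reconstruction, not the authors' code, and raw_data is a hypothetical list of (tweet, [label1, label2, label3]) pairs.

from collections import Counter

def majority_label(annotations):
    # annotations: the three speech act labels assigned to one tweet
    label, count = Counter(annotations).most_common(1)[0]
    return label if count >= 2 else None

# keep only tweets whose annotators produced a majority label
# labelled = [(tweet, majority_label(anns)) for tweet, anns in raw_data]
# labelled = [(t, y) for t, y in labelled if y is not None]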
[
"We studied many features before settling on the features below. Our features can be divided into two general categories: Semantic and Syntactic. Some of these features were motivated by various works on speech act classification, while others are novel features. Overall we selected 3313 binary features, composed of 1647 semantic and 1666 syntactic features."
],
[
"Opinion Words: We used the \"Harvard General Inquirer\" lexicon BIBREF8 , which is a dataset used commonly in sentiment classification tasks, to identify 2442 strong, negative and positive opinion words (such as robust, terrible, untrustworthy, etc). The intuition here is that these opinion words tend to signal certain speech acts such as expressions and recommendations. One binary feature indicates whether any of these words appear in a tweet.",
"Vulgar Words: Similar to opinion words, vulgar words can either signal great emotions or an informality mostly seen in expressions than any other kind of speech act (least seen in assertions). We used an online collection of vulgar words to collect a total of 349 vulgar words. A binary feature indicates the appearance or lack thereof of any of these words.",
"Emoticons: Emoticons have become ubiquitous in online communication and so cannot be ignored. Like vulgar words, emoticons can also signal emotions or informality. We used an online collection of text-based emoticons to collect a total of 362 emoticons. A binary feature indicates the appearance or lack thereof of any of these emoticons.",
"Speech Act Verbs: There are certain verbs (such as ask, demand, promise, report, etc) that typically signal certain speech acts. Wierzbicka BIBREF9 has compiled a total of 229 English speech act verbs divided into 37 groups. Since this is a collection of verbs, it is crucially important to only consider the verbs in a tweet and not any other word class (since some of these words can appear in multiple part-of-speech categories). In order to do this, we used Owoputi et al.'s BIBREF10 Twitter part-of-speech tagger to identify all the verbs in a tweet, which were then stemmed using Porter Stemming BIBREF11 . The stemmed verbs were then compared to the 229 speech act verbs (which were also stemmed using Porter Stemming). Thus, we have 229 binary features coding the appearance or lack thereof of each of these verbs.",
"N-grams: In addition to the verbs mentioned, there are certain phrases and non-verb words that can signal certain speech acts. For example, the phrase \"I think\" signals an expression, the phrase \"could you please\" signals a request and the phrase \"is it true\" signals a question. Similarly, the non-verb word \"should\" can signal a recommendation and \"why\" can signal a question.",
"These words and phrases are called n-grams (an n-gram is a contiguous sequence of n words). Given the relatively short sentences on Twitter, we decided to only consider unigram, bigram and trigram phrases. We generated a list of all of the unigrams, bigrams and trigrams that appear at least five times in our tweets for a total of 6,738 n-grams. From that list we selected a total of 1,415 n-grams that were most predictive of the speech act of their corresponding tweets but did not contain topic-specific terms (such as Boston, Red Sox, etc). There is a binary feature for each of these sub-trees indicating their appearance."
],
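A minimal sketch of how the semantic features above could be turned into binary indicators; it is illustrative only, and the lexicons are tiny placeholders for the Harvard General Inquirer opinion words, the 349 vulgar words, the 362 emoticons, Wierzbicka's 229 speech act verbs, and the 1,415 selected n-grams. The verb tokens are assumed to come from a Twitter part-of-speech tagger, as in the paper.

from nltk.stem.porter import PorterStemmer
from nltk.util import ngrams

OPINION_WORDS = {"robust", "terrible", "untrustworthy"}       # placeholder lexicon
VULGAR_WORDS = {"darn"}                                       # placeholder lexicon
EMOTICONS = {":)", ":(", ":D"}                                # placeholder lexicon
SPEECH_ACT_VERBS = {"ask", "demand", "promise", "report"}     # placeholder for the 229 verbs
SELECTED_NGRAMS = {("i", "think"), ("could", "you", "please"), ("why",)}

stemmer = PorterStemmer()
STEMMED_SA_VERBS = {stemmer.stem(v) for v in SPEECH_ACT_VERBS}

def semantic_features(tokens, verb_tokens):
    # tokens: lower-cased tweet tokens; verb_tokens: the tokens tagged as verbs
    token_set = set(tokens)
    feats = {
        "has_opinion_word": int(bool(token_set & OPINION_WORDS)),
        "has_vulgar_word": int(bool(token_set & VULGAR_WORDS)),
        "has_emoticon": int(bool(token_set & EMOTICONS)),
    }
    # one binary feature per (stemmed) speech act verb
    stemmed_verbs = {stemmer.stem(v) for v in verb_tokens}
    for verb in STEMMED_SA_VERBS:
        feats["sa_verb=" + verb] = int(verb in stemmed_verbs)
    # one binary feature per selected unigram/bigram/trigram
    grams = set()
    for n in (1, 2, 3):
        grams.update(ngrams(tokens, n))
    for g in SELECTED_NGRAMS:
        feats["ngram=" + " ".join(g)] = int(g in grams)
    return feats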
[
"Punctuations: Certain punctuations can be predictive of the speech act in a tweet. Specifically, the punctuation ? can signal a question or request while ! can signal an expression or recommendation. We have two binary features indicating the appearance or lack thereof of these symbols.",
"Twitter-specific Characters: There are certain Twitter-specific characters that can signal speech acts. These characters are #, @, and RT.The position of these characters is also important to consider since Twitter-specific characters used in the initial position of a tweet is more predictive than in other positions. Therefore, we have three additional binary features indicating whether these symbols appear in the initial position.",
"Abbreviations: Abbreviations are seen with great frequency in online communication. The use of abbreviations (such as b4 for before, jk for just kidding and irl for in real life) can signal informal speech which in turn can signal certain speech acts such as expression. We collected 944 such abbreviations from an online dictionary and Crystal's book on language used on the internet BIBREF12 . We have a binary future indicating the presence of any of the 944 abbreviations.",
"Dependency Sub-trees: Much can be gained from the inclusion of sophisticated syntactic features such as dependency sub-trees in our speech act classifier. We used Kong et al.'s BIBREF13 Twitter dependency parser for English (called the TweeboParser) to generate dependency trees for our tweets. Dependency trees capture the relationship between words in a sentence. Each node in a dependency tree is a word with edges between words capturing the relationship between the words (a word either modifies or is modified by other words). In contrast to other syntactic trees such as constituency trees, there is a one-to-one correspondence between words in a sentence and the nodes in the tree (so there are only as many nodes as there are words). Figure FIGREF8 shows the dependency tree of an example tweet.",
"We extracted sub-trees of length one and two (the length refers to the number of edges) from each dependency tree. Overall we collected 5,484 sub-trees that appeared at least five times. We then used a filtering process identical to the one used for n-grams, resulting in 1,655 sub-trees. There is a binary feature for each of these sub-trees indicating their appearance.",
"Part-of-speech: Finally, we used the part-of-speech tags generated by the dependency tree parser to identify the use of adjectives and interjections (such as yikes, dang, etc). Interjections are mostly used to convey emotion and thus can signal expressions. Similarly adjectives can signal expressions or recommendations. We have two binary features indicating the usage of these two parts-of-speech."
],
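A matching sketch for the simpler syntactic features described above (punctuation, Twitter-specific characters and their initial position, and abbreviations); it is illustrative only, the abbreviation set is a placeholder for the 944 collected abbreviations, and the dependency sub-tree and part-of-speech features are omitted because they require a dependency parse (TweeboParser in the paper).

ABBREVIATIONS = {"b4", "jk", "irl"}   # placeholder for the 944 abbreviations

def syntactic_features(tweet, tokens):
    # tweet: raw tweet text; tokens: its tokenized form
    first = tokens[0] if tokens else ""
    return {
        "has_question_mark": int("?" in tweet),
        "has_exclamation_mark": int("!" in tweet),
        "has_hashtag": int("#" in tweet),
        "has_mention": int("@" in tweet),
        "has_retweet": int("RT" in tokens),
        "initial_hashtag": int(first.startswith("#")),
        "initial_mention": int(first.startswith("@")),
        "initial_retweet": int(first == "RT"),
        "has_abbreviation": int(bool({t.lower() for t in tokens} & ABBREVIATIONS)),
    }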
[
"We trained four different classifiers on our 3,313 binary features using the following methods: naive bayes (NB), decision tree (DT), logistic regression (LR), SVM, and a baseline max classifier BL. We trained classifiers across three granularities: Twitter-wide, Type-specific, and Topic-specific. All of our classifiers are evaluated using 20-fold cross validation. Table TABREF9 shows the performance of our five classifiers trained and evaluated on all of the data. We report the F1 score for each class. As shown in Table TABREF9 , the logistic regression was the performing classifier with a weighted average F1 score of INLINEFORM0 . Thus we picked logistic regression as our classier and the rest of the results reported will be for LR only. Table TABREF10 shows the average performance of the LR classifier for Twitter-wide, type and topic specific classifiers.",
"The topic-specific classifiers' average performance was better than that of the type-specific classifiers ( INLINEFORM0 and INLINEFORM1 respectively) which was in turn marginally better than the performance of the Twitter-wide classifier ( INLINEFORM2 ). This confirms our earlier hypothesis that the more granular type and topic specific classifiers would be superior to a more general Twitter-wide classifier.",
"Next, we wanted to measure the contributions of our semantic and syntactic features. To do so, we trained two versions of our Twitter-wide logistic regression classifier, one using only semantic features and the other using syntactic features. As shown in Table TABREF11 , the semantic and syntactic classifiers' performance was fairly similar, both being on average significantly worse than the combined classifier. The combined classifier outperformed the semantic and syntactic classifiers on all other categories, which strongly suggests that both feature categories contribute to the classification of speech acts.",
"Finally, we compared the performance of our classifier (called TweetAct) to a logistic regression classifier trained on features proposed by, as far as we know, the only other supervised Twitter speech act classifier by Zhang et al. (called Zhang). Table TABREF12 shows the results. Not only did our classifier outperform the Zhang classifier for every class, both the semantic and syntactic classifiers (see Table TABREF11 ) also generally outperformed the Zhang classifier."
],
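A minimal scikit-learn sketch of the training and evaluation set-up described above (off-the-shelf classifiers over the binary features, 20-fold cross-validation, per-class F1); this is not the authors' code, and the feature matrix X and label vector y are assumed to have been built with feature extractors such as the ones sketched earlier.

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import classification_report, f1_score

# placeholder label strings for the six speech act categories
CLASSES = ["assertion", "recommendation", "expression",
           "question", "request", "miscellaneous"]

def evaluate(X, y, n_folds=20):
    # X: n_tweets x 3,313 binary feature matrix; y: speech act labels
    clf = LogisticRegression(max_iter=1000)
    preds = cross_val_predict(clf, X, y, cv=n_folds)
    print(classification_report(y, preds, labels=CLASSES))
    return f1_score(y, preds, average="weighted")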
[
"In this paper, we presented a supervised speech act classifier for Twitter. We treated speech act classification on Twitter as a multi-class classification problem and came up with a taxonomy of speech acts on Twitter with six distinct classes. We then proposed a set of semantic and syntactic features for supervised Twitter speech act classification. Using these features we were able to achieve state-of-the-art performance for Twitter speech act classification, with an average F1 score of INLINEFORM0 . Speech act classification has many applications; for instance we have used our classifier to detect rumors on Twitter in a companion paper BIBREF14 ."
]
],
"section_name": [
"Introduction",
"Problem Statement",
"Data Collection and Datasets",
"Features",
"Semantic Features",
"Syntactic Features",
"Supervised Speech Act Classifier",
"Conclusions and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"6817fd5a23e6fe7235ae2cddfed7e8b1c6dc1149",
"d396f8ba689f567527244647e543f3bc6b3ae45f"
],
"answer": [
{
"evidence": [
"Semantic Features",
"Opinion Words: We used the \"Harvard General Inquirer\" lexicon BIBREF8 , which is a dataset used commonly in sentiment classification tasks, to identify 2442 strong, negative and positive opinion words (such as robust, terrible, untrustworthy, etc). The intuition here is that these opinion words tend to signal certain speech acts such as expressions and recommendations. One binary feature indicates whether any of these words appear in a tweet.",
"Vulgar Words: Similar to opinion words, vulgar words can either signal great emotions or an informality mostly seen in expressions than any other kind of speech act (least seen in assertions). We used an online collection of vulgar words to collect a total of 349 vulgar words. A binary feature indicates the appearance or lack thereof of any of these words.",
"Emoticons: Emoticons have become ubiquitous in online communication and so cannot be ignored. Like vulgar words, emoticons can also signal emotions or informality. We used an online collection of text-based emoticons to collect a total of 362 emoticons. A binary feature indicates the appearance or lack thereof of any of these emoticons.",
"Speech Act Verbs: There are certain verbs (such as ask, demand, promise, report, etc) that typically signal certain speech acts. Wierzbicka BIBREF9 has compiled a total of 229 English speech act verbs divided into 37 groups. Since this is a collection of verbs, it is crucially important to only consider the verbs in a tweet and not any other word class (since some of these words can appear in multiple part-of-speech categories). In order to do this, we used Owoputi et al.'s BIBREF10 Twitter part-of-speech tagger to identify all the verbs in a tweet, which were then stemmed using Porter Stemming BIBREF11 . The stemmed verbs were then compared to the 229 speech act verbs (which were also stemmed using Porter Stemming). Thus, we have 229 binary features coding the appearance or lack thereof of each of these verbs.",
"N-grams: In addition to the verbs mentioned, there are certain phrases and non-verb words that can signal certain speech acts. For example, the phrase \"I think\" signals an expression, the phrase \"could you please\" signals a request and the phrase \"is it true\" signals a question. Similarly, the non-verb word \"should\" can signal a recommendation and \"why\" can signal a question.",
"Punctuations: Certain punctuations can be predictive of the speech act in a tweet. Specifically, the punctuation ? can signal a question or request while ! can signal an expression or recommendation. We have two binary features indicating the appearance or lack thereof of these symbols.",
"Twitter-specific Characters: There are certain Twitter-specific characters that can signal speech acts. These characters are #, @, and RT.The position of these characters is also important to consider since Twitter-specific characters used in the initial position of a tweet is more predictive than in other positions. Therefore, we have three additional binary features indicating whether these symbols appear in the initial position.",
"Abbreviations: Abbreviations are seen with great frequency in online communication. The use of abbreviations (such as b4 for before, jk for just kidding and irl for in real life) can signal informal speech which in turn can signal certain speech acts such as expression. We collected 944 such abbreviations from an online dictionary and Crystal's book on language used on the internet BIBREF12 . We have a binary future indicating the presence of any of the 944 abbreviations.",
"Dependency Sub-trees: Much can be gained from the inclusion of sophisticated syntactic features such as dependency sub-trees in our speech act classifier. We used Kong et al.'s BIBREF13 Twitter dependency parser for English (called the TweeboParser) to generate dependency trees for our tweets. Dependency trees capture the relationship between words in a sentence. Each node in a dependency tree is a word with edges between words capturing the relationship between the words (a word either modifies or is modified by other words). In contrast to other syntactic trees such as constituency trees, there is a one-to-one correspondence between words in a sentence and the nodes in the tree (so there are only as many nodes as there are words). Figure FIGREF8 shows the dependency tree of an example tweet.",
"Part-of-speech: Finally, we used the part-of-speech tags generated by the dependency tree parser to identify the use of adjectives and interjections (such as yikes, dang, etc). Interjections are mostly used to convey emotion and thus can signal expressions. Similarly adjectives can signal expressions or recommendations. We have two binary features indicating the usage of these two parts-of-speech."
],
"extractive_spans": [
"Opinion Words",
"Vulgar Words",
"Emoticons",
"Speech Act Verbs",
"N-grams",
"Punctuations",
"Twitter-specific Characters",
"Abbreviations",
"Dependency Sub-trees",
"Part-of-speech"
],
"free_form_answer": "",
"highlighted_evidence": [
"Semantic Features\nOpinion Words: We used the \"Harvard General Inquirer\" lexicon BIBREF8 , which is a dataset used commonly in sentiment classification tasks, to identify 2442 strong, negative and positive opinion words (such as robust, terrible, untrustworthy, etc).",
"Vulgar Words: Similar to opinion words, vulgar words can either signal great emotions or an informality mostly seen in expressions than any other kind of speech act (least seen in assertions).",
"Emoticons: Emoticons have become ubiquitous in online communication and so cannot be ignored. Like vulgar words, emoticons can also signal emotions or informality.",
"Speech Act Verbs: There are certain verbs (such as ask, demand, promise, report, etc) that typically signal certain speech acts.",
"N-grams: In addition to the verbs mentioned, there are certain phrases and non-verb words that can signal certain speech acts.",
"Punctuations: Certain punctuations can be predictive of the speech act in a tweet.",
"Twitter-specific Characters: There are certain Twitter-specific characters that can signal speech acts.",
"Abbreviations: Abbreviations are seen with great frequency in online communication.",
"Dependency Sub-trees: Much can be gained from the inclusion of sophisticated syntactic features such as dependency sub-trees in our speech act classifier.",
"Part-of-speech: Finally, we used the part-of-speech tags generated by the dependency tree parser to identify the use of adjectives and interjections (such as yikes, dang, etc)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Semantic Features",
"Opinion Words: We used the \"Harvard General Inquirer\" lexicon BIBREF8 , which is a dataset used commonly in sentiment classification tasks, to identify 2442 strong, negative and positive opinion words (such as robust, terrible, untrustworthy, etc). The intuition here is that these opinion words tend to signal certain speech acts such as expressions and recommendations. One binary feature indicates whether any of these words appear in a tweet.",
"Vulgar Words: Similar to opinion words, vulgar words can either signal great emotions or an informality mostly seen in expressions than any other kind of speech act (least seen in assertions). We used an online collection of vulgar words to collect a total of 349 vulgar words. A binary feature indicates the appearance or lack thereof of any of these words.",
"Emoticons: Emoticons have become ubiquitous in online communication and so cannot be ignored. Like vulgar words, emoticons can also signal emotions or informality. We used an online collection of text-based emoticons to collect a total of 362 emoticons. A binary feature indicates the appearance or lack thereof of any of these emoticons.",
"Speech Act Verbs: There are certain verbs (such as ask, demand, promise, report, etc) that typically signal certain speech acts. Wierzbicka BIBREF9 has compiled a total of 229 English speech act verbs divided into 37 groups. Since this is a collection of verbs, it is crucially important to only consider the verbs in a tweet and not any other word class (since some of these words can appear in multiple part-of-speech categories). In order to do this, we used Owoputi et al.'s BIBREF10 Twitter part-of-speech tagger to identify all the verbs in a tweet, which were then stemmed using Porter Stemming BIBREF11 . The stemmed verbs were then compared to the 229 speech act verbs (which were also stemmed using Porter Stemming). Thus, we have 229 binary features coding the appearance or lack thereof of each of these verbs.",
"N-grams: In addition to the verbs mentioned, there are certain phrases and non-verb words that can signal certain speech acts. For example, the phrase \"I think\" signals an expression, the phrase \"could you please\" signals a request and the phrase \"is it true\" signals a question. Similarly, the non-verb word \"should\" can signal a recommendation and \"why\" can signal a question.",
"These words and phrases are called n-grams (an n-gram is a contiguous sequence of n words). Given the relatively short sentences on Twitter, we decided to only consider unigram, bigram and trigram phrases. We generated a list of all of the unigrams, bigrams and trigrams that appear at least five times in our tweets for a total of 6,738 n-grams. From that list we selected a total of 1,415 n-grams that were most predictive of the speech act of their corresponding tweets but did not contain topic-specific terms (such as Boston, Red Sox, etc). There is a binary feature for each of these sub-trees indicating their appearance.",
"Syntactic Features",
"Punctuations: Certain punctuations can be predictive of the speech act in a tweet. Specifically, the punctuation ? can signal a question or request while ! can signal an expression or recommendation. We have two binary features indicating the appearance or lack thereof of these symbols.",
"Twitter-specific Characters: There are certain Twitter-specific characters that can signal speech acts. These characters are #, @, and RT.The position of these characters is also important to consider since Twitter-specific characters used in the initial position of a tweet is more predictive than in other positions. Therefore, we have three additional binary features indicating whether these symbols appear in the initial position.",
"Abbreviations: Abbreviations are seen with great frequency in online communication. The use of abbreviations (such as b4 for before, jk for just kidding and irl for in real life) can signal informal speech which in turn can signal certain speech acts such as expression. We collected 944 such abbreviations from an online dictionary and Crystal's book on language used on the internet BIBREF12 . We have a binary future indicating the presence of any of the 944 abbreviations.",
"Dependency Sub-trees: Much can be gained from the inclusion of sophisticated syntactic features such as dependency sub-trees in our speech act classifier. We used Kong et al.'s BIBREF13 Twitter dependency parser for English (called the TweeboParser) to generate dependency trees for our tweets. Dependency trees capture the relationship between words in a sentence. Each node in a dependency tree is a word with edges between words capturing the relationship between the words (a word either modifies or is modified by other words). In contrast to other syntactic trees such as constituency trees, there is a one-to-one correspondence between words in a sentence and the nodes in the tree (so there are only as many nodes as there are words). Figure FIGREF8 shows the dependency tree of an example tweet.",
"We extracted sub-trees of length one and two (the length refers to the number of edges) from each dependency tree. Overall we collected 5,484 sub-trees that appeared at least five times. We then used a filtering process identical to the one used for n-grams, resulting in 1,655 sub-trees. There is a binary feature for each of these sub-trees indicating their appearance.",
"Part-of-speech: Finally, we used the part-of-speech tags generated by the dependency tree parser to identify the use of adjectives and interjections (such as yikes, dang, etc). Interjections are mostly used to convey emotion and thus can signal expressions. Similarly adjectives can signal expressions or recommendations. We have two binary features indicating the usage of these two parts-of-speech."
],
"extractive_spans": [],
"free_form_answer": "Semantic Features : Opinion Words, Vulgar Words, Emoticons, Speech Act Verbs, N-grams.\nSyntactic Features: Punctuations, Twitter-specific Characters, Abbreviations, Dependency Sub-trees, Part-of-speech.",
"highlighted_evidence": [
"Semantic Features\nOpinion Words: We used the \"Harvard General Inquirer\" lexicon BIBREF8 , which is a dataset used commonly in sentiment classification tasks, to identify 2442 strong, negative and positive opinion words (such as robust, terrible, untrustworthy, etc). The intuition here is that these opinion words tend to signal certain speech acts such as expressions and recommendations. One binary feature indicates whether any of these words appear in a tweet.\n\nVulgar Words: Similar to opinion words, vulgar words can either signal great emotions or an informality mostly seen in expressions than any other kind of speech act (least seen in assertions). We used an online collection of vulgar words to collect a total of 349 vulgar words. A binary feature indicates the appearance or lack thereof of any of these words.\n\nEmoticons: Emoticons have become ubiquitous in online communication and so cannot be ignored. Like vulgar words, emoticons can also signal emotions or informality. We used an online collection of text-based emoticons to collect a total of 362 emoticons. A binary feature indicates the appearance or lack thereof of any of these emoticons.\n\nSpeech Act Verbs: There are certain verbs (such as ask, demand, promise, report, etc) that typically signal certain speech acts. Wierzbicka BIBREF9 has compiled a total of 229 English speech act verbs divided into 37 groups. Since this is a collection of verbs, it is crucially important to only consider the verbs in a tweet and not any other word class (since some of these words can appear in multiple part-of-speech categories). In order to do this, we used Owoputi et al.'s BIBREF10 Twitter part-of-speech tagger to identify all the verbs in a tweet, which were then stemmed using Porter Stemming BIBREF11 . The stemmed verbs were then compared to the 229 speech act verbs (which were also stemmed using Porter Stemming). Thus, we have 229 binary features coding the appearance or lack thereof of each of these verbs.\n\nN-grams: In addition to the verbs mentioned, there are certain phrases and non-verb words that can signal certain speech acts. For example, the phrase \"I think\" signals an expression, the phrase \"could you please\" signals a request and the phrase \"is it true\" signals a question. Similarly, the non-verb word \"should\" can signal a recommendation and \"why\" can signal a question.\n\nThese words and phrases are called n-grams (an n-gram is a contiguous sequence of n words). Given the relatively short sentences on Twitter, we decided to only consider unigram, bigram and trigram phrases. We generated a list of all of the unigrams, bigrams and trigrams that appear at least five times in our tweets for a total of 6,738 n-grams. From that list we selected a total of 1,415 n-grams that were most predictive of the speech act of their corresponding tweets but did not contain topic-specific terms (such as Boston, Red Sox, etc). There is a binary feature for each of these sub-trees indicating their appearance.\n\nSyntactic Features\nPunctuations: Certain punctuations can be predictive of the speech act in a tweet. Specifically, the punctuation ? can signal a question or request while ! can signal an expression or recommendation. We have two binary features indicating the appearance or lack thereof of these symbols.\n\nTwitter-specific Characters: There are certain Twitter-specific characters that can signal speech acts. 
These characters are #, @, and RT.The position of these characters is also important to consider since Twitter-specific characters used in the initial position of a tweet is more predictive than in other positions. Therefore, we have three additional binary features indicating whether these symbols appear in the initial position.\n\nAbbreviations: Abbreviations are seen with great frequency in online communication. The use of abbreviations (such as b4 for before, jk for just kidding and irl for in real life) can signal informal speech which in turn can signal certain speech acts such as expression. We collected 944 such abbreviations from an online dictionary and Crystal's book on language used on the internet BIBREF12 . We have a binary future indicating the presence of any of the 944 abbreviations.\n\nDependency Sub-trees: Much can be gained from the inclusion of sophisticated syntactic features such as dependency sub-trees in our speech act classifier. We used Kong et al.'s BIBREF13 Twitter dependency parser for English (called the TweeboParser) to generate dependency trees for our tweets. Dependency trees capture the relationship between words in a sentence. Each node in a dependency tree is a word with edges between words capturing the relationship between the words (a word either modifies or is modified by other words). In contrast to other syntactic trees such as constituency trees, there is a one-to-one correspondence between words in a sentence and the nodes in the tree (so there are only as many nodes as there are words). Figure FIGREF8 shows the dependency tree of an example tweet.\n\nWe extracted sub-trees of length one and two (the length refers to the number of edges) from each dependency tree. Overall we collected 5,484 sub-trees that appeared at least five times. We then used a filtering process identical to the one used for n-grams, resulting in 1,655 sub-trees. There is a binary feature for each of these sub-trees indicating their appearance.\n\nPart-of-speech: Finally, we used the part-of-speech tags generated by the dependency tree parser to identify the use of adjectives and interjections (such as yikes, dang, etc). Interjections are mostly used to convey emotion and thus can signal expressions. Similarly adjectives can signal expressions or recommendations. We have two binary features indicating the usage of these two parts-of-speech."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"e6287b61b7c2e973d0b641d7d6f7ef7fa30b0671",
"fc4bde3453569cc7aaf2446e7f77f928a0d8c032"
],
"answer": [
{
"evidence": [
"Using Searle's speech act taxonomy BIBREF3 , we established a list of six speech act categories that are commonly seen on Twitter: Assertion, Recommendation Expression, Question, Request, and Miscellaneous. Table TABREF1 shows an example tweet for each of these categories."
],
"extractive_spans": [
"Assertion",
"Recommendation ",
"Expression",
"Question",
"Request",
"Miscellaneous"
],
"free_form_answer": "",
"highlighted_evidence": [
"Using Searle's speech act taxonomy BIBREF3 , we established a list of six speech act categories that are commonly seen on Twitter: Assertion, Recommendation Expression, Question, Request, and Miscellaneous."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Using Searle's speech act taxonomy BIBREF3 , we established a list of six speech act categories that are commonly seen on Twitter: Assertion, Recommendation Expression, Question, Request, and Miscellaneous. Table TABREF1 shows an example tweet for each of these categories."
],
"extractive_spans": [
"Assertion, Recommendation Expression, Question, Request, and Miscellaneous"
],
"free_form_answer": "",
"highlighted_evidence": [
"Using Searle's speech act taxonomy BIBREF3 , we established a list of six speech act categories that are commonly seen on Twitter: Assertion, Recommendation Expression, Question, Request, and Miscellaneous."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"1b89e57c1ba58f21392ca842171fcf0813c9b9cc",
"e0f6ca2ccb0fa2849a7154cf65f04b9fec9dbd58"
],
"answer": [
{
"evidence": [
"We trained four different classifiers on our 3,313 binary features using the following methods: naive bayes (NB), decision tree (DT), logistic regression (LR), SVM, and a baseline max classifier BL. We trained classifiers across three granularities: Twitter-wide, Type-specific, and Topic-specific. All of our classifiers are evaluated using 20-fold cross validation. Table TABREF9 shows the performance of our five classifiers trained and evaluated on all of the data. We report the F1 score for each class. As shown in Table TABREF9 , the logistic regression was the performing classifier with a weighted average F1 score of INLINEFORM0 . Thus we picked logistic regression as our classier and the rest of the results reported will be for LR only. Table TABREF10 shows the average performance of the LR classifier for Twitter-wide, type and topic specific classifiers."
],
"extractive_spans": [
"logistic regression"
],
"free_form_answer": "",
"highlighted_evidence": [
"We trained four different classifiers on our 3,313 binary features using the following methods: naive bayes (NB), decision tree (DT), logistic regression (LR), SVM, and a baseline max classifier BL. We trained classifiers across three granularities: Twitter-wide, Type-specific, and Topic-specific. ",
" As shown in Table TABREF9 , the logistic regression was the performing classifier with a weighted average F1 score of INLINEFORM0 . Thus we picked logistic regression as our classier and the rest of the results reported will be for LR only."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The topic-specific classifiers' average performance was better than that of the type-specific classifiers ( INLINEFORM0 and INLINEFORM1 respectively) which was in turn marginally better than the performance of the Twitter-wide classifier ( INLINEFORM2 ). This confirms our earlier hypothesis that the more granular type and topic specific classifiers would be superior to a more general Twitter-wide classifier."
],
"extractive_spans": [
"topic-specific classifier"
],
"free_form_answer": "",
"highlighted_evidence": [
"The topic-specific classifiers' average performance was better than that of the type-specific classifiers ( INLINEFORM0 and INLINEFORM1 respectively) which was in turn marginally better than the performance of the Twitter-wide classifier ( INLINEFORM2 ). "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"908c331bc0e112e1f131690c6febba4fa83c1b86",
"96991a37d98463b6f053a037d9a93397c89512ad"
],
"answer": [
{
"evidence": [
"We selected two topics for each of the three topic types described in the last section for a total of six topics (see Figure FIGREF2 for list of topics). We collected a few thousand tweets from the Twitter public API for each of these topics using topic-specific queries (e.g., #fergusonriots, #redsox, etc). We then asked three undergraduate annotators to independently annotate each of the tweets with one of the speech act categories described earlier. The kappa for the annotators was INLINEFORM0 . For training, we used the label that the majority of annotators agreed upon (7,563 total tweets)."
],
"extractive_spans": [
"7,563"
],
"free_form_answer": "",
"highlighted_evidence": [
"For training, we used the label that the majority of annotators agreed upon (7,563 total tweets)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Given the diversity of topics talked about on Twitter, we wanted to explore topic and type dependent speech act classifiers. We used Zhao et al.'s BIBREF7 definitions for topic and type. A topic is a subject discussed in one or more tweets (e.g., Boston Marathon bombings, Red Sox, etc). The type characterizes the nature of the topic, these are: Entity-oriented, Event-oriented topics, and Long-standing topics (topics about subjects that are commonly discussed).",
"We selected two topics for each of the three topic types described in the last section for a total of six topics (see Figure FIGREF2 for list of topics). We collected a few thousand tweets from the Twitter public API for each of these topics using topic-specific queries (e.g., #fergusonriots, #redsox, etc). We then asked three undergraduate annotators to independently annotate each of the tweets with one of the speech act categories described earlier. The kappa for the annotators was INLINEFORM0 . For training, we used the label that the majority of annotators agreed upon (7,563 total tweets)."
],
"extractive_spans": [
"7,563"
],
"free_form_answer": "",
"highlighted_evidence": [
"Given the diversity of topics talked about on Twitter, we wanted to explore topic and type dependent speech act classifiers",
"We selected two topics for each of the three topic types described in the last section for a total of six topics (see Figure FIGREF2 for list of topics). We collected a few thousand tweets from the Twitter public API for each of these topics using topic-specific queries (e.g., #fergusonriots, #redsox, etc). We then asked three undergraduate annotators to independently annotate each of the tweets with one of the speech act categories described earlier. The kappa for the annotators was INLINEFORM0 . For training, we used the label that the majority of annotators agreed upon (7,563 total tweets)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"18512128bfb6ce6355ade7bc7d28aa4928c1f37b",
"cfb8855213f9db7624305362dc1e2b6e46b500c6"
],
"answer": [
{
"evidence": [
"We selected two topics for each of the three topic types described in the last section for a total of six topics (see Figure FIGREF2 for list of topics). We collected a few thousand tweets from the Twitter public API for each of these topics using topic-specific queries (e.g., #fergusonriots, #redsox, etc). We then asked three undergraduate annotators to independently annotate each of the tweets with one of the speech act categories described earlier. The kappa for the annotators was INLINEFORM0 . For training, we used the label that the majority of annotators agreed upon (7,563 total tweets)."
],
"extractive_spans": [
"three"
],
"free_form_answer": "",
"highlighted_evidence": [
"We then asked three undergraduate annotators to independently annotate each of the tweets with one of the speech act categories described earlier."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We selected two topics for each of the three topic types described in the last section for a total of six topics (see Figure FIGREF2 for list of topics). We collected a few thousand tweets from the Twitter public API for each of these topics using topic-specific queries (e.g., #fergusonriots, #redsox, etc). We then asked three undergraduate annotators to independently annotate each of the tweets with one of the speech act categories described earlier. The kappa for the annotators was INLINEFORM0 . For training, we used the label that the majority of annotators agreed upon (7,563 total tweets)."
],
"extractive_spans": [
"three"
],
"free_form_answer": "",
"highlighted_evidence": [
"We collected a few thousand tweets from the Twitter public API for each of these topics using topic-specific queries (e.g., #fergusonriots, #redsox, etc). We then asked three undergraduate annotators to independently annotate each of the tweets with one of the speech act categories described earlier."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"3ba39c1962c0669b3b540c59e7f5a8a02b14a71a",
"8b795409feaa8d435740c53e08727774ca90c485"
],
"answer": [
{
"evidence": [
"We selected two topics for each of the three topic types described in the last section for a total of six topics (see Figure FIGREF2 for list of topics). We collected a few thousand tweets from the Twitter public API for each of these topics using topic-specific queries (e.g., #fergusonriots, #redsox, etc). We then asked three undergraduate annotators to independently annotate each of the tweets with one of the speech act categories described earlier. The kappa for the annotators was INLINEFORM0 . For training, we used the label that the majority of annotators agreed upon (7,563 total tweets)."
],
"extractive_spans": [
"three undergraduate annotators "
],
"free_form_answer": "",
"highlighted_evidence": [
"We then asked three undergraduate annotators to independently annotate each of the tweets with one of the speech act categories described earlier."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We selected two topics for each of the three topic types described in the last section for a total of six topics (see Figure FIGREF2 for list of topics). We collected a few thousand tweets from the Twitter public API for each of these topics using topic-specific queries (e.g., #fergusonriots, #redsox, etc). We then asked three undergraduate annotators to independently annotate each of the tweets with one of the speech act categories described earlier. The kappa for the annotators was INLINEFORM0 . For training, we used the label that the majority of annotators agreed upon (7,563 total tweets)."
],
"extractive_spans": [
"three undergraduate annotators"
],
"free_form_answer": "",
"highlighted_evidence": [
"We collected a few thousand tweets from the Twitter public API for each of these topics using topic-specific queries (e.g., #fergusonriots, #redsox, etc). We then asked three undergraduate annotators to independently annotate each of the tweets with one of the speech act categories described earlier. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"1d5281a278427c98073012a9bee6654f64e1f97e",
"66f4b9ec8a8320907374a11db7d36e9f591555c2"
],
"answer": [
{
"evidence": [
"We studied many features before settling on the features below. Our features can be divided into two general categories: Semantic and Syntactic. Some of these features were motivated by various works on speech act classification, while others are novel features. Overall we selected 3313 binary features, composed of 1647 semantic and 1666 syntactic features.",
"Opinion Words: We used the \"Harvard General Inquirer\" lexicon BIBREF8 , which is a dataset used commonly in sentiment classification tasks, to identify 2442 strong, negative and positive opinion words (such as robust, terrible, untrustworthy, etc). The intuition here is that these opinion words tend to signal certain speech acts such as expressions and recommendations. One binary feature indicates whether any of these words appear in a tweet.",
"Vulgar Words: Similar to opinion words, vulgar words can either signal great emotions or an informality mostly seen in expressions than any other kind of speech act (least seen in assertions). We used an online collection of vulgar words to collect a total of 349 vulgar words. A binary feature indicates the appearance or lack thereof of any of these words.",
"Emoticons: Emoticons have become ubiquitous in online communication and so cannot be ignored. Like vulgar words, emoticons can also signal emotions or informality. We used an online collection of text-based emoticons to collect a total of 362 emoticons. A binary feature indicates the appearance or lack thereof of any of these emoticons.",
"Speech Act Verbs: There are certain verbs (such as ask, demand, promise, report, etc) that typically signal certain speech acts. Wierzbicka BIBREF9 has compiled a total of 229 English speech act verbs divided into 37 groups. Since this is a collection of verbs, it is crucially important to only consider the verbs in a tweet and not any other word class (since some of these words can appear in multiple part-of-speech categories). In order to do this, we used Owoputi et al.'s BIBREF10 Twitter part-of-speech tagger to identify all the verbs in a tweet, which were then stemmed using Porter Stemming BIBREF11 . The stemmed verbs were then compared to the 229 speech act verbs (which were also stemmed using Porter Stemming). Thus, we have 229 binary features coding the appearance or lack thereof of each of these verbs.",
"N-grams: In addition to the verbs mentioned, there are certain phrases and non-verb words that can signal certain speech acts. For example, the phrase \"I think\" signals an expression, the phrase \"could you please\" signals a request and the phrase \"is it true\" signals a question. Similarly, the non-verb word \"should\" can signal a recommendation and \"why\" can signal a question."
],
"extractive_spans": [
"Opinion Words",
"Vulgar Words",
"Emoticons",
"Speech Act Verbs",
"N-grams"
],
"free_form_answer": "",
"highlighted_evidence": [
"We studied many features before settling on the features below. Our features can be divided into two general categories: Semantic and Syntactic",
"Opinion Words: We used the \"Harvard General Inquirer\" lexicon BIBREF8 , which is a dataset used commonly in sentiment classification tasks, to identify 2442 strong, negative and positive opinion words (such as robust, terrible, untrustworthy, etc). The intuition here is that these opinion words tend to signal certain speech acts such as expressions and recommendations. One binary feature indicates whether any of these words appear in a tweet.",
"Vulgar Words: Similar to opinion words, vulgar words can either signal great emotions or an informality mostly seen in expressions than any other kind of speech act (least seen in assertions). We used an online collection of vulgar words to collect a total of 349 vulgar words. A binary feature indicates the appearance or lack thereof of any of these words.",
"Emoticons: Emoticons have become ubiquitous in online communication and so cannot be ignored. Like vulgar words, emoticons can also signal emotions or informality. We used an online collection of text-based emoticons to collect a total of 362 emoticons. A binary feature indicates the appearance or lack thereof of any of these emoticons.",
"Speech Act Verbs: There are certain verbs (such as ask, demand, promise, report, etc) that typically signal certain speech acts. Wierzbicka BIBREF9 has compiled a total of 229 English speech act verbs divided into 37 groups. Since this is a collection of verbs, it is crucially important to only consider the verbs in a tweet and not any other word class (since some of these words can appear in multiple part-of-speech categories). In order to do this, we used Owoputi et al.'s BIBREF10 Twitter part-of-speech tagger to identify all the verbs in a tweet, which were then stemmed using Porter Stemming BIBREF11 . The stemmed verbs were then compared to the 229 speech act verbs (which were also stemmed using Porter Stemming). Thus, we have 229 binary features coding the appearance or lack thereof of each of these verbs.",
"N-grams: In addition to the verbs mentioned, there are certain phrases and non-verb words that can signal certain speech acts. For example, the phrase \"I think\" signals an expression, the phrase \"could you please\" signals a request and the phrase \"is it true\" signals a question. Similarly, the non-verb word \"should\" can signal a recommendation and \"why\" can signal a question."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Semantic Features",
"Opinion Words: We used the \"Harvard General Inquirer\" lexicon BIBREF8 , which is a dataset used commonly in sentiment classification tasks, to identify 2442 strong, negative and positive opinion words (such as robust, terrible, untrustworthy, etc). The intuition here is that these opinion words tend to signal certain speech acts such as expressions and recommendations. One binary feature indicates whether any of these words appear in a tweet.",
"Vulgar Words: Similar to opinion words, vulgar words can either signal great emotions or an informality mostly seen in expressions than any other kind of speech act (least seen in assertions). We used an online collection of vulgar words to collect a total of 349 vulgar words. A binary feature indicates the appearance or lack thereof of any of these words.",
"Emoticons: Emoticons have become ubiquitous in online communication and so cannot be ignored. Like vulgar words, emoticons can also signal emotions or informality. We used an online collection of text-based emoticons to collect a total of 362 emoticons. A binary feature indicates the appearance or lack thereof of any of these emoticons.",
"Speech Act Verbs: There are certain verbs (such as ask, demand, promise, report, etc) that typically signal certain speech acts. Wierzbicka BIBREF9 has compiled a total of 229 English speech act verbs divided into 37 groups. Since this is a collection of verbs, it is crucially important to only consider the verbs in a tweet and not any other word class (since some of these words can appear in multiple part-of-speech categories). In order to do this, we used Owoputi et al.'s BIBREF10 Twitter part-of-speech tagger to identify all the verbs in a tweet, which were then stemmed using Porter Stemming BIBREF11 . The stemmed verbs were then compared to the 229 speech act verbs (which were also stemmed using Porter Stemming). Thus, we have 229 binary features coding the appearance or lack thereof of each of these verbs.",
"N-grams: In addition to the verbs mentioned, there are certain phrases and non-verb words that can signal certain speech acts. For example, the phrase \"I think\" signals an expression, the phrase \"could you please\" signals a request and the phrase \"is it true\" signals a question. Similarly, the non-verb word \"should\" can signal a recommendation and \"why\" can signal a question.",
"These words and phrases are called n-grams (an n-gram is a contiguous sequence of n words). Given the relatively short sentences on Twitter, we decided to only consider unigram, bigram and trigram phrases. We generated a list of all of the unigrams, bigrams and trigrams that appear at least five times in our tweets for a total of 6,738 n-grams. From that list we selected a total of 1,415 n-grams that were most predictive of the speech act of their corresponding tweets but did not contain topic-specific terms (such as Boston, Red Sox, etc). There is a binary feature for each of these sub-trees indicating their appearance."
],
"extractive_spans": [],
"free_form_answer": "Binary features indicating opinion words, vulgar words, emoticons, speech act verbs and unigram, bigram and trigram that appear at least five times in the dataset",
"highlighted_evidence": [
"Semantic Features\nOpinion Words: We used the \"Harvard General Inquirer\" lexicon BIBREF8 , which is a dataset used commonly in sentiment classification tasks, to identify 2442 strong, negative and positive opinion words (such as robust, terrible, untrustworthy, etc).",
"One binary feature indicates whether any of these words appear in a tweet.",
"Vulgar Words: Similar to opinion words, vulgar words can either signal great emotions or an informality mostly seen in expressions than any other kind of speech act (least seen in assertions). ",
"A binary feature indicates the appearance or lack thereof of any of these words.",
"Emoticons: Emoticons have become ubiquitous in online communication and so cannot be ignored. ",
"A binary feature indicates the appearance or lack thereof of any of these emoticons.",
"Speech Act Verbs: There are certain verbs (such as ask, demand, promise, report, etc) that typically signal certain speech acts. ",
"Thus, we have 229 binary features coding the appearance or lack thereof of each of these verbs.",
"N-grams: In addition to the verbs mentioned, there are certain phrases and non-verb words that can signal certain speech acts.",
"Given the relatively short sentences on Twitter, we decided to only consider unigram, bigram and trigram phrases. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"5c7063bc210ae7564d94e24e0f447a010baf58c2",
"c20b1ccbc0cb0e4fb5ac31e35f8b7fbc41c7a576"
],
"answer": [
{
"evidence": [
"We studied many features before settling on the features below. Our features can be divided into two general categories: Semantic and Syntactic. Some of these features were motivated by various works on speech act classification, while others are novel features. Overall we selected 3313 binary features, composed of 1647 semantic and 1666 syntactic features.",
"Syntactic Features",
"Punctuations: Certain punctuations can be predictive of the speech act in a tweet. Specifically, the punctuation ? can signal a question or request while ! can signal an expression or recommendation. We have two binary features indicating the appearance or lack thereof of these symbols.",
"Twitter-specific Characters: There are certain Twitter-specific characters that can signal speech acts. These characters are #, @, and RT.The position of these characters is also important to consider since Twitter-specific characters used in the initial position of a tweet is more predictive than in other positions. Therefore, we have three additional binary features indicating whether these symbols appear in the initial position.",
"Abbreviations: Abbreviations are seen with great frequency in online communication. The use of abbreviations (such as b4 for before, jk for just kidding and irl for in real life) can signal informal speech which in turn can signal certain speech acts such as expression. We collected 944 such abbreviations from an online dictionary and Crystal's book on language used on the internet BIBREF12 . We have a binary future indicating the presence of any of the 944 abbreviations.",
"Dependency Sub-trees: Much can be gained from the inclusion of sophisticated syntactic features such as dependency sub-trees in our speech act classifier. We used Kong et al.'s BIBREF13 Twitter dependency parser for English (called the TweeboParser) to generate dependency trees for our tweets. Dependency trees capture the relationship between words in a sentence. Each node in a dependency tree is a word with edges between words capturing the relationship between the words (a word either modifies or is modified by other words). In contrast to other syntactic trees such as constituency trees, there is a one-to-one correspondence between words in a sentence and the nodes in the tree (so there are only as many nodes as there are words). Figure FIGREF8 shows the dependency tree of an example tweet.",
"Part-of-speech: Finally, we used the part-of-speech tags generated by the dependency tree parser to identify the use of adjectives and interjections (such as yikes, dang, etc). Interjections are mostly used to convey emotion and thus can signal expressions. Similarly adjectives can signal expressions or recommendations. We have two binary features indicating the usage of these two parts-of-speech."
],
"extractive_spans": [
"Punctuations",
"Twitter-specific Characters",
"Abbreviations",
"Dependency Sub-trees",
"Part-of-speech"
],
"free_form_answer": "",
"highlighted_evidence": [
"We studied many features before settling on the features below. Our features can be divided into two general categories: Semantic and Syntactic.",
"Syntactic Features\nPunctuations: Certain punctuations can be predictive of the speech act in a tweet. Specifically, the punctuation ? can signal a question or request while ! can signal an expression or recommendation. We have two binary features indicating the appearance or lack thereof of these symbols.",
"Twitter-specific Characters: There are certain Twitter-specific characters that can signal speech acts. These characters are #, @, and RT.The position of these characters is also important to consider since Twitter-specific characters used in the initial position of a tweet is more predictive than in other positions. Therefore, we have three additional binary features indicating whether these symbols appear in the initial position.",
"Abbreviations: Abbreviations are seen with great frequency in online communication. The use of abbreviations (such as b4 for before, jk for just kidding and irl for in real life) can signal informal speech which in turn can signal certain speech acts such as expression. We collected 944 such abbreviations from an online dictionary and Crystal's book on language used on the internet BIBREF12 . We have a binary future indicating the presence of any of the 944 abbreviations.",
"Dependency Sub-trees: Much can be gained from the inclusion of sophisticated syntactic features such as dependency sub-trees in our speech act classifier. We used Kong et al.'s BIBREF13 Twitter dependency parser for English (called the TweeboParser) to generate dependency trees for our tweets. Dependency trees capture the relationship between words in a sentence. Each node in a dependency tree is a word with edges between words capturing the relationship between the words (a word either modifies or is modified by other words). In contrast to other syntactic trees such as constituency trees, there is a one-to-one correspondence between words in a sentence and the nodes in the tree (so there are only as many nodes as there are words). Figure FIGREF8 shows the dependency tree of an example tweet.",
"Part-of-speech: Finally, we used the part-of-speech tags generated by the dependency tree parser to identify the use of adjectives and interjections (such as yikes, dang, etc). Interjections are mostly used to convey emotion and thus can signal expressions. Similarly adjectives can signal expressions or recommendations. We have two binary features indicating the usage of these two parts-of-speech."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Syntactic Features",
"Punctuations: Certain punctuations can be predictive of the speech act in a tweet. Specifically, the punctuation ? can signal a question or request while ! can signal an expression or recommendation. We have two binary features indicating the appearance or lack thereof of these symbols.",
"Twitter-specific Characters: There are certain Twitter-specific characters that can signal speech acts. These characters are #, @, and RT.The position of these characters is also important to consider since Twitter-specific characters used in the initial position of a tweet is more predictive than in other positions. Therefore, we have three additional binary features indicating whether these symbols appear in the initial position.",
"Abbreviations: Abbreviations are seen with great frequency in online communication. The use of abbreviations (such as b4 for before, jk for just kidding and irl for in real life) can signal informal speech which in turn can signal certain speech acts such as expression. We collected 944 such abbreviations from an online dictionary and Crystal's book on language used on the internet BIBREF12 . We have a binary future indicating the presence of any of the 944 abbreviations.",
"Dependency Sub-trees: Much can be gained from the inclusion of sophisticated syntactic features such as dependency sub-trees in our speech act classifier. We used Kong et al.'s BIBREF13 Twitter dependency parser for English (called the TweeboParser) to generate dependency trees for our tweets. Dependency trees capture the relationship between words in a sentence. Each node in a dependency tree is a word with edges between words capturing the relationship between the words (a word either modifies or is modified by other words). In contrast to other syntactic trees such as constituency trees, there is a one-to-one correspondence between words in a sentence and the nodes in the tree (so there are only as many nodes as there are words). Figure FIGREF8 shows the dependency tree of an example tweet.",
"We extracted sub-trees of length one and two (the length refers to the number of edges) from each dependency tree. Overall we collected 5,484 sub-trees that appeared at least five times. We then used a filtering process identical to the one used for n-grams, resulting in 1,655 sub-trees. There is a binary feature for each of these sub-trees indicating their appearance.",
"These words and phrases are called n-grams (an n-gram is a contiguous sequence of n words). Given the relatively short sentences on Twitter, we decided to only consider unigram, bigram and trigram phrases. We generated a list of all of the unigrams, bigrams and trigrams that appear at least five times in our tweets for a total of 6,738 n-grams. From that list we selected a total of 1,415 n-grams that were most predictive of the speech act of their corresponding tweets but did not contain topic-specific terms (such as Boston, Red Sox, etc). There is a binary feature for each of these sub-trees indicating their appearance.",
"Part-of-speech: Finally, we used the part-of-speech tags generated by the dependency tree parser to identify the use of adjectives and interjections (such as yikes, dang, etc). Interjections are mostly used to convey emotion and thus can signal expressions. Similarly adjectives can signal expressions or recommendations. We have two binary features indicating the usage of these two parts-of-speech."
],
"extractive_spans": [],
"free_form_answer": "Binary features indicating appeance of punctuations, twitter-specific characters - @, #, and RT, abbreviations, length one and two sub-trees extracted from dependency sub-tree and Part-of-speech - adjectives and interjections.",
"highlighted_evidence": [
"Syntactic Features\nPunctuations: Certain punctuations can be predictive of the speech act in a tweet. ",
"We have two binary features indicating the appearance or lack thereof of these symbols.",
"Twitter-specific Characters: There are certain Twitter-specific characters that can signal speech acts. These characters are #, @, and RT.T",
"Therefore, we have three additional binary features indicating whether these symbols appear in the initial position.",
"Abbreviations: Abbreviations are seen with great frequency in online communication. ",
"We have a binary future indicating the presence of any of the 944 abbreviations.",
"Dependency Sub-trees: Much can be gained from the inclusion of sophisticated syntactic features such as dependency sub-trees in our speech act classifier.",
"We extracted sub-trees of length one and two (the length refers to the number of edges) from each dependency tree. Overall we collected 5,484 sub-trees that appeared at least five times. ",
"There is a binary feature for each of these sub-trees indicating their appearance.",
"Part-of-speech: Finally, we used the part-of-speech tags generated by the dependency tree parser to identify the use of adjectives and interjections (such as yikes, dang, etc). ",
"We have two binary features indicating the usage of these two parts-of-speech."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"3b0255580ed4f00435fb5d27cb68e60e7a6313f4",
"c3c9c5f9c6175a8ac8082644cf79a8dde0744131"
],
"answer": [
{
"evidence": [
"Given the diversity of topics talked about on Twitter, we wanted to explore topic and type dependent speech act classifiers. We used Zhao et al.'s BIBREF7 definitions for topic and type. A topic is a subject discussed in one or more tweets (e.g., Boston Marathon bombings, Red Sox, etc). The type characterizes the nature of the topic, these are: Entity-oriented, Event-oriented topics, and Long-standing topics (topics about subjects that are commonly discussed).",
"We selected two topics for each of the three topic types described in the last section for a total of six topics (see Figure FIGREF2 for list of topics). We collected a few thousand tweets from the Twitter public API for each of these topics using topic-specific queries (e.g., #fergusonriots, #redsox, etc). We then asked three undergraduate annotators to independently annotate each of the tweets with one of the speech act categories described earlier. The kappa for the annotators was INLINEFORM0 . For training, we used the label that the majority of annotators agreed upon (7,563 total tweets).",
"We studied many features before settling on the features below. Our features can be divided into two general categories: Semantic and Syntactic. Some of these features were motivated by various works on speech act classification, while others are novel features. Overall we selected 3313 binary features, composed of 1647 semantic and 1666 syntactic features.",
"Opinion Words: We used the \"Harvard General Inquirer\" lexicon BIBREF8 , which is a dataset used commonly in sentiment classification tasks, to identify 2442 strong, negative and positive opinion words (such as robust, terrible, untrustworthy, etc). The intuition here is that these opinion words tend to signal certain speech acts such as expressions and recommendations. One binary feature indicates whether any of these words appear in a tweet.",
"Vulgar Words: Similar to opinion words, vulgar words can either signal great emotions or an informality mostly seen in expressions than any other kind of speech act (least seen in assertions). We used an online collection of vulgar words to collect a total of 349 vulgar words. A binary feature indicates the appearance or lack thereof of any of these words.",
"Emoticons: Emoticons have become ubiquitous in online communication and so cannot be ignored. Like vulgar words, emoticons can also signal emotions or informality. We used an online collection of text-based emoticons to collect a total of 362 emoticons. A binary feature indicates the appearance or lack thereof of any of these emoticons.",
"Speech Act Verbs: There are certain verbs (such as ask, demand, promise, report, etc) that typically signal certain speech acts. Wierzbicka BIBREF9 has compiled a total of 229 English speech act verbs divided into 37 groups. Since this is a collection of verbs, it is crucially important to only consider the verbs in a tweet and not any other word class (since some of these words can appear in multiple part-of-speech categories). In order to do this, we used Owoputi et al.'s BIBREF10 Twitter part-of-speech tagger to identify all the verbs in a tweet, which were then stemmed using Porter Stemming BIBREF11 . The stemmed verbs were then compared to the 229 speech act verbs (which were also stemmed using Porter Stemming). Thus, we have 229 binary features coding the appearance or lack thereof of each of these verbs."
],
"extractive_spans": [],
"free_form_answer": "A dataset they annotated, \"Harvard General Inquirer\" Lexicon for opinion words, a collection of vulgar words, an online collection of text-based emoticons, and Wierzbicka's collection of English speech act verbs",
"highlighted_evidence": [
"Given the diversity of topics talked about on Twitter, we wanted to explore topic and type dependent speech act classifiers.",
"We selected two topics for each of the three topic types described in the last section for a total of six topics (see Figure FIGREF2 for list of topics). We collected a few thousand tweets from the Twitter public API for each of these topics using topic-specific queries (e.g., #fergusonriots, #redsox, etc). We then asked three undergraduate annotators to independently annotate each of the tweets with one of the speech act categories described earlier. The kappa for the annotators was INLINEFORM0 . For training, we used the label that the majority of annotators agreed upon (7,563 total tweets).",
"We studied many features before settling on the features below. Our features can be divided into two general categories: Semantic and Syntactic.",
"Opinion Words: We used the \"Harvard General Inquirer\" lexicon BIBREF8 , which is a dataset used commonly in sentiment classification tasks, to identify 2442 strong, negative and positive opinion words (such as robust, terrible, untrustworthy, etc). ",
"Vulgar Words: Similar to opinion words, vulgar words can either signal great emotions or an informality mostly seen in expressions than any other kind of speech act (least seen in assertions). We used an online collection of vulgar words to collect a total of 349 vulgar words.",
"Emoticons: Emoticons have become ubiquitous in online communication and so cannot be ignored. Like vulgar words, emoticons can also signal emotions or informality. We used an online collection of text-based emoticons to collect a total of 362 emoticons.",
"Speech Act Verbs: There are certain verbs (such as ask, demand, promise, report, etc) that typically signal certain speech acts. Wierzbicka BIBREF9 has compiled a total of 229 English speech act verbs divided into 37 groups"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We selected two topics for each of the three topic types described in the last section for a total of six topics (see Figure FIGREF2 for list of topics). We collected a few thousand tweets from the Twitter public API for each of these topics using topic-specific queries (e.g., #fergusonriots, #redsox, etc). We then asked three undergraduate annotators to independently annotate each of the tweets with one of the speech act categories described earlier. The kappa for the annotators was INLINEFORM0 . For training, we used the label that the majority of annotators agreed upon (7,563 total tweets)."
],
"extractive_spans": [],
"free_form_answer": "Twitter data",
"highlighted_evidence": [
" We collected a few thousand tweets from the Twitter public API for each of these topics using topic-specific queries (e.g., #fergusonriots, #redsox, etc). "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"five",
"five",
"",
"",
"",
"",
"",
"",
""
],
"paper_read": [
"no",
"no",
"",
"",
"",
"",
"",
"",
""
],
"question": [
"What syntactic and semantic features are proposed?",
"Which six speech acts are included in the taxonomy?",
"what classifier had better performance?",
"how many tweets were labeled?",
"how many annotators were there?",
"who labelled the tweets?",
"what are the proposed semantic features?",
"what syntactic features are proposed?",
"what datasets were used?"
],
"question_id": [
"415014a5bcd83df52c9307ad16fab1f03d80f705",
"b79c85fa84712d3028cb5be2af873c634e51140e",
"dc473819b196c0ea922773e173a6b283fa778791",
"9207f19e65422bdf28f20e270ede6c725a38e5f9",
"8ddf78dbdc6ac964a7102ae84df18582841f2e3c",
"079e654c97508c521c07ab4d24cdaaede5602c61",
"7efbd9adbc403de4be6b1fb1999dd5bed9d6262c",
"95bbd91badbfe979899cca6655afc945ea8a6926",
"76ae794ced3b5ae565f361451813f2f3bc85b214"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"twitter",
"twitter",
"",
"",
"",
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"",
"",
"",
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: Distribution of speech acts for all six topics and three types.",
"Table 1: Example tweets for each speech act type.",
"Figure 2: The dependency tree and the part of speech tags of a sample tweet.",
"Table 2: F1 scores for each speech act category. The best scores for each category are highlighted.",
"Table 3: F1 scores for Twitter-wide, type-specific and topicspecific classifiers.)",
"Table 4: F1 scores for each speech act category for semantic and syntactic features."
],
"file": [
"3-Figure1-1.png",
"3-Table1-1.png",
"4-Figure2-1.png",
"5-Table2-1.png",
"5-Table3-1.png",
"5-Table4-1.png"
]
} | [
"What syntactic and semantic features are proposed?",
"what are the proposed semantic features?",
"what syntactic features are proposed?",
"what datasets were used?"
] | [
[
"1605.05156-Semantic Features-2",
"1605.05156-Semantic Features-1",
"1605.05156-Syntactic Features-2",
"1605.05156-Syntactic Features-0",
"1605.05156-Semantic Features-0",
"1605.05156-Semantic Features-4",
"1605.05156-Syntactic Features-5",
"1605.05156-Semantic Features-5",
"1605.05156-Semantic Features-3",
"1605.05156-Syntactic Features-4",
"1605.05156-Syntactic Features-3",
"1605.05156-Syntactic Features-1"
],
[
"1605.05156-Semantic Features-2",
"1605.05156-Semantic Features-1",
"1605.05156-Semantic Features-0",
"1605.05156-Features-0",
"1605.05156-Semantic Features-4",
"1605.05156-Semantic Features-3",
"1605.05156-Semantic Features-5"
],
[
"1605.05156-Syntactic Features-2",
"1605.05156-Syntactic Features-0",
"1605.05156-Semantic Features-5",
"1605.05156-Features-0",
"1605.05156-Syntactic Features-5",
"1605.05156-Syntactic Features-4",
"1605.05156-Syntactic Features-3",
"1605.05156-Syntactic Features-1"
],
[
"1605.05156-Data Collection and Datasets-1",
"1605.05156-Semantic Features-2",
"1605.05156-Semantic Features-1",
"1605.05156-Data Collection and Datasets-0",
"1605.05156-Semantic Features-0",
"1605.05156-Features-0",
"1605.05156-Semantic Features-3"
]
] | [
"Semantic Features : Opinion Words, Vulgar Words, Emoticons, Speech Act Verbs, N-grams.\nSyntactic Features: Punctuations, Twitter-specific Characters, Abbreviations, Dependency Sub-trees, Part-of-speech.",
"Binary features indicating opinion words, vulgar words, emoticons, speech act verbs and unigram, bigram and trigram that appear at least five times in the dataset",
"Binary features indicating appeance of punctuations, twitter-specific characters - @, #, and RT, abbreviations, length one and two sub-trees extracted from dependency sub-tree and Part-of-speech - adjectives and interjections.",
"Twitter data"
] | 352 |
1804.05306 | Transcribing Lyrics From Commercial Song Audio: The First Step Towards Singing Content Processing | Spoken content processing (such as retrieval and browsing) is maturing, but the singing content is still almost completely left out. Songs are human voice carrying plenty of semantic information just as speech, and may be considered as a special type of speech with highly flexible prosody. The various problems in song audio, for example the significantly changing phone duration over highly flexible pitch contours, make the recognition of lyrics from song audio much more difficult. This paper reports an initial attempt towards this goal. We collected music-removed version of English songs directly from commercial singing content. The best results were obtained by TDNN-LSTM with data augmentation with 3-fold speed perturbation plus some special approaches. The WER achieved (73.90%) was significantly lower than the baseline (96.21%), but still relatively high. | {
"paragraphs": [
[
"The exploding multimedia content over the Internet, has created a new world of spoken content processing, for example the retrieval BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , browsing BIBREF5 , summarization BIBREF0 , BIBREF5 , BIBREF6 , BIBREF7 , and comprehension BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 of spoken content. On the other hand, we may realize there still exists a huge part of multimedia content not yet taken care of, i.e., the singing content or those with audio including songs. Songs are human voice carrying plenty of semantic information just as speech. It will be highly desired if the huge quantities of singing content can be similarly retrieved, browsed, summarized or comprehended by machine based on the lyrics just as speech. For example, it is highly desired if song retrieval can be achieved based on the lyrics in addition.",
"Singing voice can be considered as a special type of speech with highly flexible and artistically designed prosody: the rhythm as artistically designed duration, pause and energy patterns, the melody as artistically designed pitch contours with much wider range, the lyrics as artistically authored sentences to be uttered by the singer. So transcribing lyrics from song audio is an extended version of automatic speech recognition (ASR) taking into account these differences.",
"On the other hand, singing voice and speech differ widely in both acoustic and linguistic characteristics. Singing signals are often accompanied with some extra music and harmony, which are noisy for recognition. The highly flexible pitch contours with much wider range BIBREF12 , BIBREF13 , the significantly changing phone durations in songs, including the prolonged vowels BIBREF14 , BIBREF15 over smoothly varying pitch contours, create much more problems not existing in speech. The falsetto in singing voice may be an extra type of human voice not present in normal speech. Regarding linguistic characteristics BIBREF16 , BIBREF17 , word repetition and meaningless words (e.g.oh) frequently appear in the artistically authored lyrics in singing voice.",
"Applying ASR technologies to singing voice has been studied for long. However, not too much work has been reported, probably because the recognition accuracy remained to be relatively low compared to the experiences for speech. But such low accuracy is actually natural considering the various difficulties caused by the significant differences between singing voice and speech. An extra major problem is probably the lack of singing voice database, which pushed the researchers to collect their own closed datasets BIBREF12 , BIBREF15 , BIBREF17 , which made it difficult to compare results from different works.",
"Having the language model learned from a data set of lyrics is definitely helpful BIBREF15 , BIBREF17 . Hosoya et al. BIBREF16 achieved this with finite state automaton. Sasou et al. BIBREF12 actually prepared a language model for each song. In order to cope with the acoustic characteristics of singing voice, Sasou et al. BIBREF12 , BIBREF14 proposed AR-HMM to take care of the high-pitched sounds and prolonged vowels, while recently Kawai et al. BIBREF15 handled the prolonged vowels by extending the vowel parts in the lexicon, both achieving good improvement. Adaptation from models trained with speech was attractive, and various approaches were compared by Mesaros el al. BIBREF18 .",
"In this paper, we wish our work can be compatible to more available singing content, therefore in the initial effort we collected about five hours of music-removed version of English songs directly from commercial singing content on YouTube. The descriptive term \"music-removed\" implies the background music have been removed somehow. Because many very impressive works were based on Japanese songs BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , the comparison is difficult. We analyzed various approaches with HMM, deep learning with data augmentation, and acoustic adaptation on fragment, song, singer, and genre levels, primarily based on fMLLR BIBREF19 . We also trained the language model with a corpus of lyrics, and modify the pronunciation lexicon and increase the transition probability of HMM for prolonged vowels. Initial results are reported."
],
[
"To make our work easier and compatible to more available singing content, we collected 130 music-removed (or vocal-only) English songs from www.youtube.com so as to consider only the vocal line.The music-removing processes are conducted by the video owners, containing the original vocal recordings by the singers and vocal elements for remix purpose. ",
"After initial test by speech recognition system trained with LibriSpeech BIBREF20 , we dropped 20 songs, with WERs exceeding 95%. The remaining 110 pieces of music-removed version of commercial English popular songs were produced by 15 male singers, 28 female singers and 19 groups. The term group means by more than one person. No any further preprocessing was performed on the data, so the data preserves many characteristics of the vocal extracted from commercial polyphonic music, such as harmony, scat, and silent parts. Some pieces also contain overlapping verses and residual background music, and some frequency components may be truncated. Below this database is called vocal data here.",
"These songs were manually segmented into fragments with duration ranging from 10 to 35 sec primarily at the end of the verses. Then we randomly divided the vocal data by the singer and split it into training and testing sets. We got a total of 640 fragments in the training set and 97 fragments in the testing set. The singers in the two sets do not overlap. The details of the vocal data are listed in Table. TABREF3 .",
"Because music genre may affect the singing style and the audio, for example, hiphop has some rap parts, and rock has some shouting vocal, we obtained five frequently observed genre labels of the vocal data from wikipedia BIBREF21 : pop, electronic, rock, hiphop, and R&B/soul. The details are also listed in Table. TABREF3 . Note that a song may belong to multiple genres.",
"To train initial models for speech for adaptation to singing voice, we used 100 hrs of English clean speech data of LibriSpeech."
],
[
"In addition to the data set from LibriSpeech (803M words, 40M sentences), we collected 574k pieces of lyrics text (totally 129.8M words) from lyrics.wikia.com, a lyric website, and the lyrics were normalized by removing punctuation marks and unnecessary words (like ’[CHORUS]’). Also, those lyrics for songs within our vocal data were removed from the data set."
],
[
"Fig. FIGREF5 shows the overall structure based on Kaldi BIBREF22 for training the acoustic models used in this work. The right-most block is the vocal data, and the series of blocks on the left are the feature extraction processes over the vocal data. Features I, II, III, IV represent four different versions of features used here. For example, Feature IV was derived from splicing Feature III with 4 left-context and 4 right-context frames, and Feature III was obtained by performing fMLLR transformation over Feature II, while Feature I has been mean and variance normalized, etc.",
"The series of second right boxes are forced alignment processes performed over the various versions of features of the vocal data. The results are denoted as Alignment a, b, c, d, e. For example, Alignment a is the forced alignment results obtained by aligning Feature I of the vocal data with the LibriSpeech SAT triphone model (denoted as Model A at the top middle).",
"The series of blocks in the middle of Fig. FIGREF5 are the different versions of trained acoustic models. For example, model B is a monophone model trained with Feature I of the vocal data based on alignment a. Model C is very similar, except based on alignment b which is obtained with Model B, etc. Another four sets of Models E, F, G, H are below. For example Model E includes models E-1, 2, 3, 4, Models F,G and H include F-1,2 , G-1,2,3, and H-1,2,3.",
"We take Model E-4 with fragment-level adaptation within model E as the example. Here every fragment of song (10-35 sec long) was used to train a distinct fragment-level fMLLR matrix, with which Feature III was obtained. Using all these fragment-level fMLLR features, a single Model E-4 was trained with Alignment d. Similarly for Models E-1, 2, 3 on genre, singer and song levels. The fragment-level Model E-4 turned out to be the best in model E in the experiments."
],
[
"The deep learning models (Models F,G,H) are based on alignment e, produced by the best GMM-HMM model. Models F-1,2 are respectively for regular DNN and multi-target, LibriSpeech phonemes and vocal data phonemes taken as two targets. The latter tried to adapt the speech model to the vocal model, with the first several layers shared, while the final layers separated.",
"Data augmentation with speed perturbation BIBREF23 was implemented in Models G, H to increase the quantity of training data and deal with the problem of changing singing rates. For 3-fold, two copies of extra training data were obtained by modifying the audio speed by 0.9 and 1.1. For 5-fold, the speed factors were empirically obtained as 0.9, 0.95, 1.05, 1.1. 1-fold means the original training data.",
"Models G-1,2,3 used projected LSTM (LSTMP) BIBREF24 with 40 dimension MFCCs and 50 dimension i-vectors with output delay of 50ms. BLSTMs were used at 1-fold, 3-fold and 5-fold.",
"Models H-1,2,3 used TDNN-LSTM BIBREF25 , also at 1-fold, 3-fold and 5-fold, with the same features as Model G."
],
[
"Consider the many errors caused by the frequently appearing prolonged vowels in song audio, we considered two approaches below.",
"The previously proposed approach BIBREF15 was adopted here as shown by the example in Fig. FIGREF10 (a). For the word “apple”, each vowel within the word ( but not the consonants) can be either repeated or not, so for a word with INLINEFORM0 vowels, INLINEFORM1 pronunciations become possible. In the experiments below, we only did it for words with INLINEFORM2 .",
"This is also shown in Fig. FIGREF10 . Assume an vowel HMM have INLINEFORM0 states (including an end state). Let the original self-looped probability of state INLINEFORM1 is denoted INLINEFORM2 and the probability of transition to the next state is INLINEFORM3 . We increased the self-looped transition probabilities by replacing INLINEFORM4 by INLINEFORM5 . This was also done for vowel HMMs only but not for consonants."
],
[
"We analyzed the perplexity and out-of-vocabulary(OOV) rate of the two language models (trained with LibriSpeech and Lyrics respectively) tested on the transcriptions of the testing set of vocal data. Both models are 3-gram, pruned with SRILM with the same threshold. LM trained with lyrics was found to have a significantly lower perplexity(123.92 vs 502.06) and a much lower OOV rate (0.55% vs 1.56%).",
"Fig. FIGREF12 depicts the histogram for pitch distribution for speech and different genders of vocal. We can see the pitch values of vocal are significantly higher with a much wider range, and female singers produce slightly higher pitch values than male singers and groups."
],
[
"The primary recognition results are listed in Table. TABREF14 . Word error rate (WER) is taken as the major performance measure, while phone error rate (PER) is also listed as references. Rows (1)(2) on the top are for the language model trained with LibriSpeech data, while rows (3)-(16) for the language model trained with lyrics corpus. In addition, in rows (4)-(16) the lexicon was extended with possible repetition of vowels as explained in subsection UID8 . Rows (1)-(8) are for GMM-HMM only, while rows (9)-(16) with DNNs, BLSTMs and TDNN-LSTMs.",
"Row(1) is for Model A in Fig. FIGREF5 taken as the baseline, which was trained on LibriSpeech data with SAT, together with the language model also trained with LibriSpeech. The extremely high WER (96.21%) indicated the wide mismatch between speech and song audio, and the high difficulties in transcribing song audio. This is taken as the baseline of this work. After going through the series of Alignments a, b, c, d and training the series of Models B, C, D, we finally obtained the best GMM-HMM model, Model E-4 in Model E with fMLLR on the fragment level, as explained in section SECREF3 and shown in Fig. FIGREF5 . As shown in row(2) of Table. TABREF14 , with the same LibriSpeech LM, Model E-4 reduced WER to 88.26%, and brought an absolute improvement of 7.95% (rows (2) vs. (1)), which shows the achievements by the series of GMM-HMM alone. When we replaced the LibriSpeech language model with Lyrics language model but with the same Model E-4, we obtained an WER of 80.40% or an absolute improvement of 7.86% (rows (3) vs. (2)). This shows the achievement by the Lyrics language model alone.",
"We then substituted the normal lexicon with the extended one (with vowels repeated or not as described in subsection UID8 ), while using exactly the same model E-4, the WER of 77.08% in row (7) indicated the extended lexicon alone brought an absolute improvement of 3.32% (rows (7) vs. (3)). Furthermore, the increased self-looped transition probability ( INLINEFORM0 ) in subsection UID9 for vowel HMMs also brought an 0.46% improvement when applied on top of the extended lexicon (rows (8) vs. (7)). The results show that prolonged vowels did cause problems in recognition, and the proposed approaches did help.",
"Rows (4)(5)(6) for Models B, C, D show the incremental improvements when training the acoustic models with a series of improved alignments a, b, c, which led to the Model E-4 in row (7). Some preliminary tests with p-norm DNN with varying parameters were then performed. The best results for the moment were obtained with 4 hidden layers, 600 and 150 hidden units for p-norm nonlinearity BIBREF26 . The result in rows (9) shows absolute improvements of 1.52% (row (9) for Model F-1 vs. row (7)) for regular DNN. Rows(10) is for Models F-1 DNN (multi-target).",
"Rows (11)(12)(13) show the results of BLSTMs with different factors of data augmentation described in SECREF6 . Models G-1,2,3 used three layers with 400 hidden states and 100 units for recurrent and projection layer, however, since the amount of training data were different, the number of training epoches were 15, 7 and 5 respectively. Data augmentation brought much improvement of 5.62% (rows (12) v.s.(11)), while 3-fold BLSTM outperformed 5-fold by 1.03%. Trend for Model H (rows (14)(15)(16)) is the same as Model G, 3-fold turned out to be the best. Row (15) of Model TDNN-LSTM achieved the lowest WER(%) of 73.90%, with architecture INLINEFORM0 , while INLINEFORM1 and INLINEFORM2 denotes that the size of TDNN layer was INLINEFORM3 and the size of hidden units of forward LSTM was INLINEFORM4 . The WER achieved here are relatively high, indicating the difficulties and the need for further research."
],
[
"In Fig. FIGREF5 Model E includes different models obtained with fMLLR over different levels, Models E-1,2,3,4. But in Table. TABREF14 only Model E-4 is listed. Complete results for Models E-1,2,3,4 are listed in Table. TABREF19 , all for Lyrics Language Model with extended lexicon. Row (4) here is for Model E-4, or fMLLR over fragment level, exactly row (7) of Table. TABREF14 . Rows (1)(2)(3) are the same as row (5) here, except over levels of genre, singer and song. We see fragment level is the best, probably because fragment(10-35 sec long) is the smallest unit and the acoustic characteristic of signals within a fragment is almost uniform (same genre, same singer and the same song)."
],
[
"From the data, we found errors frequently occurred under some specific circumstances, such as high-pitched voice, widely varying phone duration, overlapping verses (multiple people sing simultaneously), and residual background music.",
"Figure FIGREF15 shows a sample recognition results obtained with Model E-4 as in row(7) of Table. TABREF14 , showing the error caused by high-pitched voice and overlapping verses. At first, the model successfully decoded the words, \"what doesn't kill you makes\", but afterward the pitch went high and a lower pitch harmony was added, the recognition results then went totally wrong."
],
[
"In this paper we report some initial results of transcribing lyrics from commercial song audio using different sets of acoustic models, adaptation approaches, language models and lexicons. Techniques for special characteristics of song audio were considered. The achieved WER was relatively high compared to experiences in speech recognition. However, considering the much more difficult problems in song audio and the wide difference between speech and singing voice, the results here may serve as good references for future work to be continued."
]
],
"section_name": [
"Introduction",
"Acoustic Corpus",
"Linguistic Corpus",
"Recognition Approaches and System Structure",
"DNN, BLSTM and TDNN-LSTM",
"Special Approaches for Prolonged Vowels",
"Data Analysis",
"Recognition Results",
"Different Levels of fMLLR Adaptation",
"Error Analysis",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"19240a2f30d9fe3be28a4c512222820af3fda419",
"20eef34bd5379ab896d0fd373afda6afa13f061a"
],
"answer": [
{
"evidence": [
"Row(1) is for Model A in Fig. FIGREF5 taken as the baseline, which was trained on LibriSpeech data with SAT, together with the language model also trained with LibriSpeech. The extremely high WER (96.21%) indicated the wide mismatch between speech and song audio, and the high difficulties in transcribing song audio. This is taken as the baseline of this work. After going through the series of Alignments a, b, c, d and training the series of Models B, C, D, we finally obtained the best GMM-HMM model, Model E-4 in Model E with fMLLR on the fragment level, as explained in section SECREF3 and shown in Fig. FIGREF5 . As shown in row(2) of Table. TABREF14 , with the same LibriSpeech LM, Model E-4 reduced WER to 88.26%, and brought an absolute improvement of 7.95% (rows (2) vs. (1)), which shows the achievements by the series of GMM-HMM alone. When we replaced the LibriSpeech language model with Lyrics language model but with the same Model E-4, we obtained an WER of 80.40% or an absolute improvement of 7.86% (rows (3) vs. (2)). This shows the achievement by the Lyrics language model alone."
],
"extractive_spans": [],
"free_form_answer": "Model A, which was trained on LibriSpeech data with SAT, together with the language model also trained with LibriSpeech.",
"highlighted_evidence": [
"Row(1) is for Model A in Fig. FIGREF5 taken as the baseline, which was trained on LibriSpeech data with SAT, together with the language model also trained with LibriSpeech. ",
"Model A in Fig. FIGREF5 taken as the baseline, which was trained on LibriSpeech data with SAT, together with the language model also trained with LibriSpeech"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Row(1) is for Model A in Fig. FIGREF5 taken as the baseline, which was trained on LibriSpeech data with SAT, together with the language model also trained with LibriSpeech. The extremely high WER (96.21%) indicated the wide mismatch between speech and song audio, and the high difficulties in transcribing song audio. This is taken as the baseline of this work. After going through the series of Alignments a, b, c, d and training the series of Models B, C, D, we finally obtained the best GMM-HMM model, Model E-4 in Model E with fMLLR on the fragment level, as explained in section SECREF3 and shown in Fig. FIGREF5 . As shown in row(2) of Table. TABREF14 , with the same LibriSpeech LM, Model E-4 reduced WER to 88.26%, and brought an absolute improvement of 7.95% (rows (2) vs. (1)), which shows the achievements by the series of GMM-HMM alone. When we replaced the LibriSpeech language model with Lyrics language model but with the same Model E-4, we obtained an WER of 80.40% or an absolute improvement of 7.86% (rows (3) vs. (2)). This shows the achievement by the Lyrics language model alone."
],
"extractive_spans": [],
"free_form_answer": "a model trained on LibriSpeech data with SAT and a with a LM also trained with LibriSpeech",
"highlighted_evidence": [
"Row(1) is for Model A in Fig. FIGREF5 taken as the baseline, which was trained on LibriSpeech data with SAT, together with the language model also trained with LibriSpeech. The extremely high WER (96.21%) indicated the wide mismatch between speech and song audio, and the high difficulties in transcribing song audio. This is taken as the baseline of this work. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"276327d8175297dc7eda58bad837596a044d76c9",
"d6d7506a3505160537578ff2dd31d11b651ce30c"
],
"answer": [
{
"evidence": [
"To make our work easier and compatible to more available singing content, we collected 130 music-removed (or vocal-only) English songs from www.youtube.com so as to consider only the vocal line.The music-removing processes are conducted by the video owners, containing the original vocal recordings by the singers and vocal elements for remix purpose.",
"After initial test by speech recognition system trained with LibriSpeech BIBREF20 , we dropped 20 songs, with WERs exceeding 95%. The remaining 110 pieces of music-removed version of commercial English popular songs were produced by 15 male singers, 28 female singers and 19 groups. The term group means by more than one person. No any further preprocessing was performed on the data, so the data preserves many characteristics of the vocal extracted from commercial polyphonic music, such as harmony, scat, and silent parts. Some pieces also contain overlapping verses and residual background music, and some frequency components may be truncated. Below this database is called vocal data here."
],
"extractive_spans": [
"110 pieces of music-removed version of commercial English popular songs"
],
"free_form_answer": "",
"highlighted_evidence": [
"To make our work easier and compatible to more available singing content, we collected 130 music-removed (or vocal-only) English songs from www.youtube.com so as to consider only the vocal line.The music-removing processes are conducted by the video owners, containing the original vocal recordings by the singers and vocal elements for remix purpose.\n\nAfter initial test by speech recognition system trained with LibriSpeech BIBREF20 , we dropped 20 songs, with WERs exceeding 95%. The remaining 110 pieces of music-removed version of commercial English popular songs were produced by 15 male singers, 28 female singers and 19 groups. The term group means by more than one person. No any further preprocessing was performed on the data, so the data preserves many characteristics of the vocal extracted from commercial polyphonic music, such as harmony, scat, and silent parts. Some pieces also contain overlapping verses and residual background music, and some frequency components may be truncated."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To make our work easier and compatible to more available singing content, we collected 130 music-removed (or vocal-only) English songs from www.youtube.com so as to consider only the vocal line.The music-removing processes are conducted by the video owners, containing the original vocal recordings by the singers and vocal elements for remix purpose."
],
"extractive_spans": [
"130 "
],
"free_form_answer": "",
"highlighted_evidence": [
"To make our work easier and compatible to more available singing content, we collected 130 music-removed (or vocal-only) English songs from www.youtube.com so as to consider only the vocal line."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"",
""
],
"paper_read": [
"",
""
],
"question": [
"What was the baseline?",
"How many songs were collected?"
],
"question_id": [
"2a9c7243744b42f1e9fed9ff2ab17c6f156b1ba4",
"f8f64da7172e72e684f0e024a19411b43629ff55"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
""
],
"topic_background": [
"",
""
]
} | {
"caption": [
"Table 1. Information of training and testing sets in vocal data. The lengths are all measured in minutes.",
"Fig. 1. The overall structure for training the acoustic models.",
"Fig. 2. Approaches for prolonged vowels: (a) extended lexicon (vowels can be repeated or not), (b) increased self-loop transition probabilities (transition probabilities to the next state reduced by r).",
"Table 2. Word error rate (WER) and phone error rate (PER) over the test set of vocal data.",
"Fig. 3. Histogram of pitch distribution.",
"Fig. 4. Sample recognition errors produced by Model E-4 : fragment-level in row(7) of Table.2.",
"Table 3. Model E : GMM-HMM with fMLLR over different levels."
],
"file": [
"2-Table1-1.png",
"2-Figure1-1.png",
"3-Figure2-1.png",
"3-Table2-1.png",
"3-Figure3-1.png",
"4-Figure4-1.png",
"4-Table3-1.png"
]
} | [
"What was the baseline?"
] | [
[
"1804.05306-Recognition Results-1"
]
] | [
"a model trained on LibriSpeech data with SAT and a with a LM also trained with LibriSpeech"
] | 353 |
1802.02614 | Enhance word representation for out-of-vocabulary on Ubuntu dialogue corpus | Ubuntu dialogue corpus is the largest public available dialogue corpus to make it feasible to build end-to-end deep neural network models directly from the conversation data. One challenge of Ubuntu dialogue corpus is the large number of out-of-vocabulary words. In this paper we proposed a method which combines the general pre-trained word embedding vectors with those generated on the task-specific training set to address this issue. We integrated character embedding into Chen et al's Enhanced LSTM method (ESIM) and used it to evaluate the effectiveness of our proposed method. For the task of next utterance selection, the proposed method has demonstrated a significant performance improvement against original ESIM and the new model has achieved state-of-the-art results on both Ubuntu dialogue corpus and Douban conversation corpus. In addition, we investigated the performance impact of end-of-utterance and end-of-turn token tags. | {
"paragraphs": [
[
"The ability for a machine to converse with human in a natural and coherent manner is one of challenging goals in AI and natural language understanding. One problem in chat-oriented human-machine dialog system is to reply a message within conversation contexts. Existing methods can be divided into two categories: retrieval-based methods BIBREF0 , BIBREF1 , BIBREF2 and generation based methods BIBREF3 . The former is to rank a list of candidates and select a good response. For the latter, encoder-decoder framework BIBREF3 or statistical translation method BIBREF4 are usually used to generate a response. It is not easy to main the fluency of the generated texts.",
"Ubuntu dialogue corpus BIBREF5 is the public largest unstructured multi-turns dialogue corpus which consists of about one-million two-person conversations. The size of the corpus makes it attractive for the exploration of deep neural network modeling in the context of dialogue systems. Most deep neural networks use word embedding as the first layer. They either use fixed pre-trained word embedding vectors generated on a large text corpus or learn word embedding for the specific task. The former is lack of flexibility of domain adaptation. The latter requires a very large training corpus and significantly increases model training time. Word out-of-vocabulary issue occurs for both cases. Ubuntu dialogue corpus also contains many technical words (e.g. “ctrl+alt+f1\", “/dev/sdb1\"). The ubuntu corpus (V2) contains 823057 unique tokens whereas only 22% tokens occur in the pre-built GloVe word vectors. Although character-level representation which models sub-word morphologies can alleviate this problem to some extent BIBREF6 , BIBREF7 , BIBREF8 , character-level representation still have limitations: learn only morphological and orthographic similarity, other than semantic similarity (e.g. `car' and `bmw') and it cannot be applied to Asian languages (e.g. Chinese characters).",
"In this paper, we generate word embedding vectors on the training corpus based on word2vec BIBREF9 . Then we propose an algorithm to combine the generated one with the pre-trained word embedding vectors on a large general text corpus based on vector concatenation. The new word representation maintains information learned from both general text corpus and task-domain. The nice property of the algorithm is simplicity and little extra computational cost will be added. It can address word out-of-vocabulary issue effectively. This method can be applied to most NLP deep neural network models and is language-independent. We integrated our methods with ESIM(baseline model) BIBREF10 . The experimental results have shown that the proposed method has significantly improved the performance of original ESIM model and obtained state-of-the-art results on both Ubuntu Dialogue Corpus and Douban Conversation Corpus BIBREF11 . On Ubuntu Dialogue Corpus (V2), the improvement to the previous best baseline model (single) on INLINEFORM0 is 3.8% and our ensemble model on INLINEFORM1 is 75.9%. On Douban Conversation Corpus, the improvement to the previous best model (single) on INLINEFORM2 is 3.6%.",
"Our contributions in this paper are summarized below:",
"The rest paper is organized as follows. In Section SECREF2 , we review the related work. In Section SECREF3 we provide an overview of ESIM (baseline) model and describe our methods to address out-of-vocabulary issues. In Section SECREF4 , we conduct extensive experiments to show the effectiveness of the proposed method. Finally we conclude with remarks and summarize our findings and outline future research directions."
],
[
"Character-level representation has been widely used in information retrieval, tagging, language modeling and question answering. BIBREF12 represented a word based on character trigram in convolution neural network for web-search ranking. BIBREF7 represented a word by the sum of the vector representation of character n-gram. Santos et al BIBREF13 , BIBREF14 and BIBREF8 used convolution neural network to generate character-level representation (embedding) of a word. The former combined both word-level and character-level representation for part-of-speech and name entity tagging tasks while the latter used only character-level representation for language modeling. BIBREF15 employed a deep bidirectional GRU network to learn character-level representation and then concatenated word-level and character-level representation vectors together. BIBREF16 used a fine-grained gating mechanism to combine the word-level and character-level representation for reading comprehension. Character-level representation can help address out-of-vocabulary issue to some extent for western languages, which is mainly used to capture character ngram similarity.",
"The other work related to enrich word representation is to combine the pre-built embedding produced by GloVe and word2vec with structured knowledge from semantic network ConceptNet BIBREF17 and merge them into a common representation BIBREF18 . The method obtained very good performance on word-similarity evaluations. But it is not very clear how useful the method is for other tasks such as question answering. Furthermore, this method does not directly address out-of-vocabulary issue.",
"Next utterance selection is related to response selection from a set of candidates. This task is similar to ranking in search, answer selection in question answering and classification in natural language inference. That is, given a context and response pair, assign a decision score BIBREF19 . BIBREF1 formalized short-text conversations as a search problem where rankSVM was used to select response. The model used the last utterance (a single-turn message) for response selection. On Ubuntu dialogue corpus, BIBREF5 proposed Long Short-Term Memory(LSTM) BIBREF20 siamese-style neural architecture to embed both context and response into vectors and response were selected based on the similarity of embedded vectors. BIBREF21 built an ensemble of convolution neural network (CNN) BIBREF22 and Bi-directional LSTM. BIBREF19 employed a deep neural network structure BIBREF23 where CNN was applied to extract features after bi-directional LSTM layer. BIBREF24 treated each turn in multi-turn context as an unit and joined word sequence view and utterance sequence view together by deep neural networks. BIBREF11 explicitly used multi-turn structural info on Ubuntu dialogue corpus to propose a sequential matching method: match each utterance and response first on both word and sub-sequence levels and then aggregate the matching information by recurrent neural network.",
"The latest developments have shown that attention and matching aggregation are effective in NLP tasks such as question/answering and natural language inference. BIBREF25 introduced context-to-query and query-to-context attentions mechanisms and employed bi-directional LSTM network to capture the interactions among the context words conditioned on the query. BIBREF26 compared a word in one sentence and the corresponding attended word in the other sentence and aggregated the comparison vectors by summation. BIBREF10 enhanced local inference information by the vector difference and element-wise product between the word in premise an the attended word in hypothesis and aggregated local matching information by LSTM neural network and obtained the state-of-the-art results on the Stanford Natural Language Inference (SNLI) Corpus. BIBREF27 introduced several local matching mechanisms before aggregation, other than only word-by-word matching."
],
[
"In this section, we first review ESIM model BIBREF10 and introduce our modifications and extensions. Then we introduce a string matching algorithm for out-of-vocabulary words."
],
[
"In our notation, given a context with multi-turns INLINEFORM0 with length INLINEFORM1 and a response INLINEFORM2 with length INLINEFORM3 where INLINEFORM4 and INLINEFORM5 is the INLINEFORM6 th and INLINEFORM7 th word in context and response, respectively. For next utterance selection, the response is selected based on estimating a conditional probability INLINEFORM8 which represents the confidence of selecting INLINEFORM9 from the context INLINEFORM10 . Figure FIGREF6 shows high-level overview of our model and its details will be explained in the following sections.",
"Word Representation Layer. Each word in context and response is mapped into INLINEFORM0 -dimensional vector space. We construct this vector space with word-embedding and character-composed embedding. The character-composed embedding, which is newly introduced here and was not part of the original forumulation of ESIM, is generated by concatenating the final state vector of the forward and backward direction of bi-directional LSTM (BiLSTM). Finally, we concatenate word embedding and character-composed embedding as word representation.",
"Context Representation Layer. As in base model, context and response embedding vector sequences are fed into BiLSTM. Here BiLSTM learns to represent word and its local sequence context. We concatenate the hidden states at each time step for both directions as local context-aware new word representation, denoted by INLINEFORM0 and INLINEFORM1 for context and response, respectively. DISPLAYFORM0 ",
" where INLINEFORM0 is word vector representation from the previous layer.",
"Attention Matching Layer. As in ESIM model, the co-attention matrix INLINEFORM0 where INLINEFORM1 . INLINEFORM2 computes the similarity of hidden states between context and response. For each word in context, we find the most relevant response word by computing the attended response vector in Equation EQREF8 . The similar operation is used to compute attended context vector in Equation . DISPLAYFORM0 ",
" After the above attended vectors are calculated, vector difference and element-wise product are used to enrich the interaction information further between context and response as shown in Equation EQREF9 and . DISPLAYFORM0 ",
" where the difference and element-wise product are concatenated with the original vectors.",
"Matching Aggregation Layer. As in ESIM model, BiLSTM is used to aggregate response-aware context representation as well as context-aware response representation. The high-level formula is given by DISPLAYFORM0 ",
"Pooling Layer. As in ESIM model, we use max pooling. Instead of using average pooling in the original ESIM model, we combine max pooling and final state vectors (concatenation of both forward and backward one) to form the final fixed vector, which is calculated as follows: DISPLAYFORM0 ",
"Prediction Layer. We feed INLINEFORM0 in Equation into a 2-layer fully-connected feed-forward neural network with ReLu activation. In the last layer the sigmoid function is used. We minimize binary cross-entropy loss for training."
],
[
"Many pre-trained word embedding vectors on general large text-corpus are available. For domain-specific tasks, out-of-vocabulary may become an issue. Here we propose algorithm SECREF12 to combine pre-trained word vectors with those word2vec BIBREF9 generated on the training set. Here the pre-trainined word vectors can be from known methods such as GloVe BIBREF28 , word2vec BIBREF9 and FastText BIBREF7 .",
"[H] InputInputOutputOutput A dictionary with word embedding vectors of dimension INLINEFORM0 for INLINEFORM1 . INLINEFORM2 ",
" INLINEFORM0 INLINEFORM1 INLINEFORM2 INLINEFORM3 res INLINEFORM4 INLINEFORM5 INLINEFORM6 INLINEFORM7 res INLINEFORM8 res INLINEFORM9 Return res Combine pre-trained word embedding with those generated on training set.",
"where INLINEFORM0 is vector concatenation operator. The remaining words which are in INLINEFORM1 and are not in the above output dictionary are initialized with zero vectors. The above algorithm not only alleviates out-of-vocabulary issue but also enriches word embedding representation."
],
[
"We evaluate our model on the public Ubuntu Dialogue Corpus V2 BIBREF29 since this corpus is designed for response selection study of multi turns human-computer conversations. The corpus is constructed from Ubuntu IRC chat logs. The training set consists of 1 million INLINEFORM0 triples where the original context and corresponding response are labeled as positive and negative response are selected randomly on the dataset. On both validation and test sets, each context contains one positive response and 9 negative responses. Some statistics of this corpus are presented in Table TABREF15 .",
"Douban conversation corpus BIBREF11 which are constructed from Douban group (a popular social networking service in China) is also used in experiments. Response candidates on the test set are collected by Lucene retrieval model, other than negative sampling without human judgment on Ubuntu Dialogue Corpus. That is, the last turn of each Douban dialogue with additional keywords extracted from the context on the test set was used as query to retrieve 10 response candidates from the Lucene index set (Details are referred to section 4 in BIBREF11 ). For the performance measurement on test set, we ignored samples with all negative responses or all positive responses. As a result, 6,670 context-response pairs were left on the test set. Some statistics of Douban conversation corpus are shown below:"
],
[
"Our model was implemented based on Tensorflow BIBREF30 . ADAM optimization algorithm BIBREF31 was used for training. The initial learning rate was set to 0.001 and exponentially decayed during the training . The batch size was 128. The number of hidden units of biLSTM for character-level embedding was set to 40. We used 200 hidden units for both context representation layers and matching aggregation layers. In the prediction layer, the number of hidden units with ReLu activation was set to 256. We did not use dropout and regularization.",
"Word embedding matrix was initialized with pre-trained 300-dimensional GloVe vectors BIBREF28 . For character-level embedding, we used one hot encoding with 69 characters (68 ASCII characters plus one unknown character). Both word embedding and character embedding matrix were fixed during the training. After algorithm SECREF12 was applied, the remaining out-of-vocabulary words were initialized as zero vectors. We used Stanford PTBTokenizer BIBREF32 on the Ubuntu corpus. The same hyper-parameter settings are applied to both Ubuntu Dialogue and Douban conversation corpus. For the ensemble model, we use the average prediction output of models with different runs. On both corpuses, the dimension of word2vec vectors generated on the training set is 100."
],
[
"Since the output scores are used for ranking candidates, we use Recall@k (recall at position k in 10 candidates, denotes as R@1, R@2 below), P@1 (precision at position 1), MAP(mean average precision) BIBREF33 , MRR (Mean Reciprocal Rank) BIBREF34 to measure the model performance. Table TABREF23 and Table TABREF24 show the performance comparison of our model and others on Ubuntu Dialogue Corpus V2 and Douban conversation corpus, respectively.",
"On Douban conversation corpus, FastText BIBREF7 pre-trained Chinese embedding vectors are used in ESIM + enhanced word vector whereas word2vec generated on training set is used in baseline model (ESIM). It can be seen from table TABREF23 that character embedding enhances the performance of original ESIM. Enhanced Word representation in algorithm SECREF12 improves the performance further and has shown that the proposed method is effective. Most models (RNN, CNN, LSTM, BiLSTM, Dual-Encoder) which encode the whole context (or response) into compact vectors before matching do not perform well. INLINEFORM0 directly models sequential structure of multi utterances in context and achieves good performance whereas ESIM implicitly makes use of end-of-utterance(__eou__) and end-of-turn (__eot__) token tags as shown in subsection SECREF41 ."
],
[
"In this section we evaluated word representation with the following cases on Ubuntu Dialogue corpus and compared them with that in algorithm SECREF12 .",
"Used the fixed pre-trained GloVe vectors .",
"Word embedding were initialized by GloVe vectors and then updated during the training.",
"Generated word2vec embeddings on the training set BIBREF9 and updated them during the training (dropout).",
"Used the pre-built ConceptNet NumberBatch BIBREF39 .",
"Used the fixed pre-built FastText vectors where word vectors for out-of-vocabulary words were computed based on built model.",
"Enhanced word representation in algorithm SECREF12 .",
"We used gensim to generate word2vec embeddings of dim 100.",
"It can be observed that tuning word embedding vectors during the training obtained the worse performance. The ensemble of word embedding from ConceptNet NumberBatch did not perform well since it still suffers from out-of-vocabulary issues. In order to get insights into the performance improvement of WP5, we show word coverage on Ubuntu Dialogue Corpus.",
"__eou__ and __eot__ are missing from pre-trained GloVe vectors. But this two tokens play an important role in the model performance shown in subsection SECREF41 . For word2vec generated on the training set, the unique token coverage is low. Due to the limited size of training corpus, the word2vec representation power could be degraded to some extent. WP5 combines advantages of both generality and domain adaptation."
],
[
"In order to check whether the effectiveness of enhanced word representation in algorithm SECREF12 depends on the specific model and datasets, we represent a doc (context, response or query) as the simple average of word vectors. Cosine similarity is used to rank the responses. The performances of the simple model on the test sets are shown in Figure FIGREF40 .",
"where WikiQA BIBREF40 is an open-domain question answering dataset from Microsoft research. The results on the enhanced vectors are better on the above three datasets. This indicates that enhanced vectors may fuse the domain-specific info into pre-built vectors for a better representation."
],
[
"There are two special token tags (__eou__ and __eot__) on ubuntu dialogue corpus. __eot__ tag is used to denote the end of a user's turn within the context and __eou__ tag is used to denote of a user utterance without a change of turn. Table TABREF42 shows the performance with/without two special tags.",
"It can be observed that the performance is significantly degraded without two special tags. In order to understand how the two tags helps the model identify the important information, we perform a case study. We randomly selected a context-response pair where model trained with tags succeeded and model trained without tags failed. Since max pooling is used in Equations EQREF11 and , we apply max operator to each context token vector in Equation EQREF10 as the signal strength. Then tokens are ranked in a descending order by it. The same operation is applied to response tokens.",
"It can be seen from Table TABREF43 that __eou__ and __eot__ carry useful information. __eou__ and __eot__ captures utterance and turn boundary structure information, respectively. This may provide hints to design a better neural architecture to leverage this structure information."
],
[
"We propose an algorithm to combine pre-trained word embedding vectors with those generated on training set as new word representation to address out-of-vocabulary word issues. The experimental results have shown that the proposed method is effective to solve out-of-vocabulary issue and improves the performance of ESIM, achieving the state-of-the-art results on Ubuntu Dialogue Corpus and Douban conversation corpus. In addition, we investigate the performance impact of two special tags: end-of-utterance and end-of-turn. In the future, we may design a better neural architecture to leverage utterance structure in multi-turn conversations."
]
],
"section_name": [
"Introduction",
"Related work",
"Our model",
"ESIM model",
"Methods for out-of-vocabulary",
"Dataset",
"Implementation details",
"Overall Results",
"Evaluation of several word embedding representations",
"Evaluation of enhanced representation on a simple model",
"The roles of utterance and turn tags",
"Conclusion and future work"
]
} | {
"answers": [
{
"annotation_id": [
"8b0671b4e70e2a30dda1425bd9a0e5a1ecaae7b2",
"cc081aa72bd311eab1c803937556fc6c0110ce09"
],
"answer": [
{
"evidence": [
"It can be observed that the performance is significantly degraded without two special tags. In order to understand how the two tags helps the model identify the important information, we perform a case study. We randomly selected a context-response pair where model trained with tags succeeded and model trained without tags failed. Since max pooling is used in Equations EQREF11 and , we apply max operator to each context token vector in Equation EQREF10 as the signal strength. Then tokens are ranked in a descending order by it. The same operation is applied to response tokens."
],
"extractive_spans": [],
"free_form_answer": "Performance degrades if the tags are not used.",
"highlighted_evidence": [
"It can be observed that the performance is significantly degraded without two special tags. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"It can be observed that the performance is significantly degraded without two special tags. In order to understand how the two tags helps the model identify the important information, we perform a case study. We randomly selected a context-response pair where model trained with tags succeeded and model trained without tags failed. Since max pooling is used in Equations EQREF11 and , we apply max operator to each context token vector in Equation EQREF10 as the signal strength. Then tokens are ranked in a descending order by it. The same operation is applied to response tokens.",
"FLOAT SELECTED: Table 7: Performance comparison with/without eou and eot tags on Ubuntu Dialogue Corpus (V2)."
],
"extractive_spans": [],
"free_form_answer": "The performance is significantly degraded without two special tags (0,025 in MRR)",
"highlighted_evidence": [
"It can be observed that the performance is significantly degraded without two special tags. ",
"FLOAT SELECTED: Table 7: Performance comparison with/without eou and eot tags on Ubuntu Dialogue Corpus (V2)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"19aac5c9b07ed87a132198dc2c28783e81b2f1d8",
"fef93a742c1c7d5bb3db6240b63ebf2e34ba6421"
],
"answer": [
{
"evidence": [
"In this paper, we generate word embedding vectors on the training corpus based on word2vec BIBREF9 . Then we propose an algorithm to combine the generated one with the pre-trained word embedding vectors on a large general text corpus based on vector concatenation. The new word representation maintains information learned from both general text corpus and task-domain. The nice property of the algorithm is simplicity and little extra computational cost will be added. It can address word out-of-vocabulary issue effectively. This method can be applied to most NLP deep neural network models and is language-independent. We integrated our methods with ESIM(baseline model) BIBREF10 . The experimental results have shown that the proposed method has significantly improved the performance of original ESIM model and obtained state-of-the-art results on both Ubuntu Dialogue Corpus and Douban Conversation Corpus BIBREF11 . On Ubuntu Dialogue Corpus (V2), the improvement to the previous best baseline model (single) on INLINEFORM0 is 3.8% and our ensemble model on INLINEFORM1 is 75.9%. On Douban Conversation Corpus, the improvement to the previous best model (single) on INLINEFORM2 is 3.6%."
],
"extractive_spans": [
"ESIM"
],
"free_form_answer": "",
"highlighted_evidence": [
"We integrated our methods with ESIM(baseline model) BIBREF10 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this paper, we generate word embedding vectors on the training corpus based on word2vec BIBREF9 . Then we propose an algorithm to combine the generated one with the pre-trained word embedding vectors on a large general text corpus based on vector concatenation. The new word representation maintains information learned from both general text corpus and task-domain. The nice property of the algorithm is simplicity and little extra computational cost will be added. It can address word out-of-vocabulary issue effectively. This method can be applied to most NLP deep neural network models and is language-independent. We integrated our methods with ESIM(baseline model) BIBREF10 . The experimental results have shown that the proposed method has significantly improved the performance of original ESIM model and obtained state-of-the-art results on both Ubuntu Dialogue Corpus and Douban Conversation Corpus BIBREF11 . On Ubuntu Dialogue Corpus (V2), the improvement to the previous best baseline model (single) on INLINEFORM0 is 3.8% and our ensemble model on INLINEFORM1 is 75.9%. On Douban Conversation Corpus, the improvement to the previous best model (single) on INLINEFORM2 is 3.6%."
],
"extractive_spans": [
"ESIM"
],
"free_form_answer": "",
"highlighted_evidence": [
" We integrated our methods with ESIM(baseline model) BIBREF10 . "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"3539a198581e0f73266511071d6525eb7385b7ec",
"7c72ffa0d69ac7149b78f10411a25ac053ac7d3a"
],
"answer": [
{
"evidence": [
"Douban conversation corpus BIBREF11 which are constructed from Douban group (a popular social networking service in China) is also used in experiments. Response candidates on the test set are collected by Lucene retrieval model, other than negative sampling without human judgment on Ubuntu Dialogue Corpus. That is, the last turn of each Douban dialogue with additional keywords extracted from the context on the test set was used as query to retrieve 10 response candidates from the Lucene index set (Details are referred to section 4 in BIBREF11 ). For the performance measurement on test set, we ignored samples with all negative responses or all positive responses. As a result, 6,670 context-response pairs were left on the test set. Some statistics of Douban conversation corpus are shown below:"
],
"extractive_spans": [],
"free_form_answer": "Conversations that are typical for a social networking service.",
"highlighted_evidence": [
"Douban conversation corpus BIBREF11 which are constructed from Douban group (a popular social networking service in China) is also used in experiments. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Douban conversation corpus BIBREF11 which are constructed from Douban group (a popular social networking service in China) is also used in experiments. Response candidates on the test set are collected by Lucene retrieval model, other than negative sampling without human judgment on Ubuntu Dialogue Corpus. That is, the last turn of each Douban dialogue with additional keywords extracted from the context on the test set was used as query to retrieve 10 response candidates from the Lucene index set (Details are referred to section 4 in BIBREF11 ). For the performance measurement on test set, we ignored samples with all negative responses or all positive responses. As a result, 6,670 context-response pairs were left on the test set. Some statistics of Douban conversation corpus are shown below:"
],
"extractive_spans": [],
"free_form_answer": "Conversations from popular social networking service in China",
"highlighted_evidence": [
"popular social networking service in China"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"72fcf2bfbce7fbf7fe5f4f37dc1592e8de3d3cad",
"75f32c2edadb97ef0f7e3167ff39826f2ae10662"
],
"answer": [
{
"evidence": [
"Word embedding matrix was initialized with pre-trained 300-dimensional GloVe vectors BIBREF28 . For character-level embedding, we used one hot encoding with 69 characters (68 ASCII characters plus one unknown character). Both word embedding and character embedding matrix were fixed during the training. After algorithm SECREF12 was applied, the remaining out-of-vocabulary words were initialized as zero vectors. We used Stanford PTBTokenizer BIBREF32 on the Ubuntu corpus. The same hyper-parameter settings are applied to both Ubuntu Dialogue and Douban conversation corpus. For the ensemble model, we use the average prediction output of models with different runs. On both corpuses, the dimension of word2vec vectors generated on the training set is 100.",
"On Douban conversation corpus, FastText BIBREF7 pre-trained Chinese embedding vectors are used in ESIM + enhanced word vector whereas word2vec generated on training set is used in baseline model (ESIM). It can be seen from table TABREF23 that character embedding enhances the performance of original ESIM. Enhanced Word representation in algorithm SECREF12 improves the performance further and has shown that the proposed method is effective. Most models (RNN, CNN, LSTM, BiLSTM, Dual-Encoder) which encode the whole context (or response) into compact vectors before matching do not perform well. INLINEFORM0 directly models sequential structure of multi utterances in context and achieves good performance whereas ESIM implicitly makes use of end-of-utterance(__eou__) and end-of-turn (__eot__) token tags as shown in subsection SECREF41 ."
],
"extractive_spans": [
"GloVe",
"FastText "
],
"free_form_answer": "",
"highlighted_evidence": [
"Word embedding matrix was initialized with pre-trained 300-dimensional GloVe vectors BIBREF28 . ",
"On Douban conversation corpus, FastText BIBREF7 pre-trained Chinese embedding vectors are used in ESIM + enhanced word vector whereas word2vec generated on training set is used in baseline model (ESIM)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Word embedding matrix was initialized with pre-trained 300-dimensional GloVe vectors BIBREF28 . For character-level embedding, we used one hot encoding with 69 characters (68 ASCII characters plus one unknown character). Both word embedding and character embedding matrix were fixed during the training. After algorithm SECREF12 was applied, the remaining out-of-vocabulary words were initialized as zero vectors. We used Stanford PTBTokenizer BIBREF32 on the Ubuntu corpus. The same hyper-parameter settings are applied to both Ubuntu Dialogue and Douban conversation corpus. For the ensemble model, we use the average prediction output of models with different runs. On both corpuses, the dimension of word2vec vectors generated on the training set is 100."
],
"extractive_spans": [
"300-dimensional GloVe vectors"
],
"free_form_answer": "",
"highlighted_evidence": [
"Word embedding matrix was initialized with pre-trained 300-dimensional GloVe vectors BIBREF28 . "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"how does end of utterance and token tags affect the performance",
"what are the baselines?",
"what kind of conversations are in the douban conversation corpus?",
"what pretrained word embeddings are used?"
],
"question_id": [
"8da8c4651979a4b1d1d3008c1f77bc7e9397183b",
"8cf52ba480d372fc15024b3db704952f10fdca27",
"d8ae36ae1b4d3af5b59ebd24efe94796101c1c12",
"2bd702174e915d97884d1571539fb1b5b0b7123a"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: A high-level overview of ESIM layout. Compared with the original one in (Chen et al., 2017), the diagram addes character-level embedding and replaces average pooling by LSTM last state summary vector.",
"Table 1: Statistics of the Ubuntu Dialogue Corpus (V2).",
"Table 2: Statistics of Douban Conversation Corpus (Wu et al., 2017).",
"Table 3: Performance of the models on Ubuntu Dialogue Corpus V2. ESIMa: ESIM + character embedding + enhanced word vector. Note: * means results on dataset V1 which are not directly comparable.",
"Table 4: Performance of the models on Douban Conversation Corpus.",
"Table 5: Performance comparisons of several word representations on Ubuntu Dialogue Corpus V2.",
"Table 6: Word coverage statistics of different word representations on Ubuntu Dialogue Corpus V2.",
"Figure 2: Performance comparisons of the simple average model.",
"Table 7: Performance comparison with/without eou and eot tags on Ubuntu Dialogue Corpus (V2).",
"Table 8: Tagged outputs from models trained with/without eou and eot tags. The top 6 tokens with the highest signal strength are highlighted in blue color. The value inside the parentheses is signal strength."
],
"file": [
"4-Figure1-1.png",
"6-Table1-1.png",
"6-Table2-1.png",
"7-Table3-1.png",
"7-Table4-1.png",
"8-Table5-1.png",
"8-Table6-1.png",
"8-Figure2-1.png",
"9-Table7-1.png",
"9-Table8-1.png"
]
} | [
"how does end of utterance and token tags affect the performance",
"what kind of conversations are in the douban conversation corpus?"
] | [
[
"1802.02614-9-Table7-1.png",
"1802.02614-The roles of utterance and turn tags-1"
],
[
"1802.02614-Dataset-1"
]
] | [
"The performance is significantly degraded without two special tags (0,025 in MRR)",
"Conversations from popular social networking service in China"
] | 354 |
1910.12203 | Do Sentence Interactions Matter? Leveraging Sentence Level Representations for Fake News Classification | The rising growth of fake news and misleading information through online media outlets demands an automatic method for detecting such news articles. Of the few limited works which differentiate between trusted vs other types of news article (satire, propaganda, hoax), none of them model sentence interactions within a document. We observe an interesting pattern in the way sentences interact with each other across different kind of news articles. To capture this kind of information for long news articles, we propose a graph neural network-based model which does away with the need of feature engineering for fine grained fake news classification. Through experiments, we show that our proposed method beats strong neural baselines and achieves state-of-the-art accuracy on existing datasets. Moreover, we establish the generalizability of our model by evaluating its performance in out-of-domain scenarios. Code is available at this https URL | {
"paragraphs": [
[
"In today's day and age of social media, there are ample opportunities for fake news production, dissemination and consumption. BIBREF0 break down fake news into three categories, hoax, propaganda and satire. A hoax article typically tries to convince the reader about a cooked-up story while propaganda ones usually mislead the reader into believing a false political or social agenda. BIBREF1 defines a satirical article as the one which deliberately exposes real-world individuals, organisations and events to ridicule.",
"Previous works BIBREF2, BIBREF0 rely on various linguistic and hand-crafted semantic features for differentiating between news articles. However, none of them try to model the interaction of sentences within the document. We observed a pattern in the way sentences cluster in different kind of news articles. Specifically, satirical articles had a more coherent story and thus all the sentences in the document seemed similar to each other. On the other hand, the trusted news articles were also coherent but the similarity between sentences from different parts of the document was not that strong, as depicted in Figure FIGREF1. We believe that the reason for such kind of behaviour is the presence of factual jumps across sections in a trusted document.",
"In this work, we propose a graph neural network-based model to classify news articles while capturing the interaction of sentences across the document. We present a series of experiments on News Corpus with Varying Reliability dataset BIBREF0 and Satirical Legitimate News dataset BIBREF2. Our results demonstrate that the proposed model achieves state-of-the-art performance on these datasets and provides interesting insights. Experiments performed in out-of-domain settings establish the generalizability of our proposed method."
],
[
"Satire, according to BIBREF5, is complicated because it occupies more than one place in the framework for humor, proposed by BIBREF6: it clearly has an aggressive and social function, and often expresses an intellectual aspect as well. BIBREF2 defines news satire as a genre of satire that mimics the format and style of journalistic reporting. Datasets created for the task of identifying satirical news articles from the trusted ones are often constructed by collecting documents from different online sources BIBREF2. BIBREF7 hypothesized that this encourages the models to learn characteristics for different publication sources rather than characteristics of satire. In this work, we show that our proposed model generalizes to articles from unseen publication sources.",
"BIBREF0 extends BIBREF2's work by offering a quantitative study of linguistic differences found in articles of different types of fake news such as hoax, propaganda and satire. They also proposed predictive models for graded deception across multiple domains. BIBREF0 found that neural methods didn't perform well for this task and proposed to use a Max-Entropy classifier. We show that our proposed neural network based on graph convolutional layers can outperform this model. Recent works by BIBREF8, BIBREF9 show that sophisticated neural models can be used for satirical news detection. To the best of our knowledge, none of the previous works represent individual documents as graphs where the nodes represent the sentences for performing classification using a graph neural network."
],
[
"We use SLN: Satirical and Legitimate News Database BIBREF2, RPN: Random Political News Dataset BIBREF10 and LUN: Labeled Unreliable News Dataset BIBREF0 for our experiments. Table TABREF4 shows the statistics. Since all of the previous methods on the aforementioned datasets are non-neural, we implement the following neural baselines,",
"CNN: In this model, we apply a 1-d CNN (Convolutional Neural Network) layer BIBREF11 with filter size 3 over the word embeddings of the sentences within a document. This is followed by a max-pooling layer to get a single document vector which is passed to a fully connected projection layer to get the logits over output classes.",
"LSTM: In this model, we encode the document using a LSTM (Long Short-Term Memory) layer BIBREF12. We use the hidden state at the last time step as the document vector which is passed to a fully connected projection layer to get the logits over output classes.",
"BERT: In this model, we extract the sentence vector (representation corresponding to [CLS] token) using BERT (Bidirectional Encoder Representations from Transformers) BIBREF4 for each sentence in the document. We then apply a LSTM layer on the sentence embeddings, followed by a projection layer to make the prediction for each document."
],
[
"Capturing sentence interactions in long documents is not feasible using a recurrent network because of the vanishing gradient problem BIBREF13. Thus, we propose a novel way of encoding documents as described in the next subsection. Figure FIGREF5 shows the overall framework of our graph based neural network."
],
[
"Each document in the corpus is represented as a graph. The nodes of the graph represent the sentences of a document while the edges represent the semantic similarity between a pair of sentences. Representing a document as a fully connected graph allows the model to directly capture the interaction of each sentence with every other sentence in the document. Formally,",
"We initialize the edge scores using BERT BIBREF4 finetuned on the semantic textual similarity task for computing the semantic similarity (SS) between two sentences. Refer to the Supplementary Material for more details regarding the SS model. Note that this representation drops the sentence order information but is better able to capture the interaction between far off sentences within a document."
],
[
"We reformulate the fake news classification problem as a graph classification task, where a graph represents a document. Given a graph $G= (E,S)$ where $E$ is the adjacency matrix and $S$ is the sentence feature matrix. We randomly initialize the word embeddings and use the last hidden state of a LSTM layer as the sentence embedding, shown in Figure FIGREF5. We experiment with two kinds of graph neural networks,"
],
[
"The graph convolutional network BIBREF14 is a spectral convolutional operation denoted by $f(Z^l, E|W^l)$,",
"Here, $Z^l$ is the output feature corresponding to the nodes after $l^{th}$ convolution. $W^l$ is the parameter associated with the $l^{th}$ layer. We set $Z^0 = S$. Based on the above operation, we can define arbitrarily deep networks. For our experiments, we just use a single layer unless stated otherwise. By default, the adjacency matrix ($E$) is fully connected i.e. all the elements are 1 except the diagonal elements which are all set to 0. We set $E$ based on semantic similarity model in our GCN + SS model. For the GCN + Attn model, we just add a self attention layer BIBREF15 after the GCN layer and before the pooling layer."
],
[
"BIBREF16 introduced graph attention networks to address various shortcomings of GCNs. Most importantly, they enable nodes to attend over their neighborhoods’ features without depending on the graph structure upfront. The key idea is to compute the hidden representations of each node in the graph, by attending over its neighbors, following a self-attention BIBREF15 strategy. By default, there is one attention head in the GAT model. For our GAT + 2 Attn Heads model, we use two attention heads and concatenate the node embeddings obtained from different heads before passing it to the pooling layer. For a fully connected graph, the GAT model allows every node to attend on every other node and learn the edge weights. Thus, initializing the edge weights using the SS model is useless as they are being learned. Mathematical details are provided in the Supplementary Material."
],
[
"We use a randomly initialized embedding matrix with 100 dimensions. We use a single layer LSTM to encode the sentences prior to the graph neural networks. All the hidden dimensions used in our networks are set to 100. The node embedding dimension is 32. For GCN and GAT, we set $\\sigma $ as LeakyRelU with slope 0.2. We train the models for a maximum of 10 epochs and use Adam optimizer with learning rate 0.001. For all the models, we use max-pool for pooling, which is followed by a fully connected projection layer with output nodes equal to the number of classes for classification."
],
[
"We conduct experiments across various settings and datasets. We report macro-averaged scores in all the settings.",
"2-way classification b/w satire and trusted articles: We use the satirical and trusted news articles from LUN-train for training, and from LUN-test as the development set. We evaluate our model on the entire SLN dataset. This is done to emulate a real-world scenario where we want to see the performance of our classifier on an out of domain dataset. We don't use SLN for training purposes because it just contains 360 examples which is too little for training our model and we want to have an unseen test set. The best performing model on SLN is used to evaluate the performance on RPN.",
"4-way classification b/w satire, propaganda, hoax and trusted articles: We split the LUN-train into a 80:20 split to create our training and development set. We use the LUN-test as our out of domain test set."
],
[
"Table TABREF20 shows the quantitative results for the two way classification between satirical and trusted news articles. Our proposed GAT method with 2 attention heads outperforms SoTA. The semantic similarity model does not seem to have much impact on the GCN model, and considering the computing cost, we don't experiment with it for the 4-way classification scenario. Given that we use SLN as an out of domain test set (just one overlapping source, no overlap in articles), whereas the SoTA paper BIBREF2 reports a 10-fold cross validation number on SLN. We believe that our results are quite strong, the GAT + 2 Attn Heads model achieves an accuracy of 87% on the entire RPN dataset when used as an out-of-domain test set. The SoTA paper BIBREF10 on RPN reports a 5-fold cross validation accuracy of 91%. These results indicate the generalizability of our proposed model across datasets. We also present results of four way classification in Table TABREF21. All of our proposed methods outperform SoTA on both the in-domain and out of domain test set.",
"To further understand the working of our proposed model, we closely inspect the attention maps generated by the GAT model for satirical and trusted news articles for the SLN dataset. From Figure FIGREF16, we can see that the attention map generated for the trusted news article only focuses on two specific sentence whereas the attention weights are much more distributed in case of a satirical article. Interestingly enough the highlighted sentences in case of the trusted news article were the starting sentence of two different paragraphs in the article indicating the presence of similar sentence clusters within a document. This opens a new avenue for understanding the differences between different kind of text articles for future research."
],
[
"This paper introduces a novel way of encoding articles for fake news classification. The intuition behind representing documents as a graph is motivated by the fact that sentences interact differently with each other across different kinds of article. Recurrent networks are unable to maintain long term dependencies in large documents, whereas a fully connected graph captures the interaction between sentences at unit distance. The quantitative result shows the effectiveness of our proposed model and the qualitative results validate our hypothesis about difference in sentence interaction across different articles. Further, we show that our proposed model generalizes to unseen datasets."
],
[
"We would like to thank the AWS Educate program for donating computational GPU resources used in this work. We also appreciate the anonymous reviewers for their insightful comments and suggestions to improve the paper."
],
[
"The supplementary material is available along with the code which provides mathematical details of the GAT model and few additional qualitative results."
]
],
"section_name": [
"Introduction",
"Related Work",
"Dataset and Baseline",
"Proposed Model",
"Proposed Model ::: Input Representation",
"Proposed Model ::: Graph based Neural Networks",
"Proposed Model ::: Graph based Neural Networks ::: Graph Convolution Network (GCN)",
"Proposed Model ::: Graph based Neural Networks ::: Graph Attention Network (GAT)",
"Proposed Model ::: Hyperparameters",
"Experimental Setting",
"Results",
"Conclusion",
"Acknowledgement",
"Supplementary Material"
]
} | {
"answers": [
{
"annotation_id": [
"598c24c2929238943443984a58fbd9e0979639f0",
"b6b844f14574b159384f4a333a43f4b25bb3edd9"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: 2-way classification results on SLN. *n-fold cross validation (precision, recall) as reported in SoTA.",
"FLOAT SELECTED: Table 3: 4-way classification results for different models. We only report F1-score following the SoTA paper.",
"Table TABREF20 shows the quantitative results for the two way classification between satirical and trusted news articles. Our proposed GAT method with 2 attention heads outperforms SoTA. The semantic similarity model does not seem to have much impact on the GCN model, and considering the computing cost, we don't experiment with it for the 4-way classification scenario. Given that we use SLN as an out of domain test set (just one overlapping source, no overlap in articles), whereas the SoTA paper BIBREF2 reports a 10-fold cross validation number on SLN. We believe that our results are quite strong, the GAT + 2 Attn Heads model achieves an accuracy of 87% on the entire RPN dataset when used as an out-of-domain test set. The SoTA paper BIBREF10 on RPN reports a 5-fold cross validation accuracy of 91%. These results indicate the generalizability of our proposed model across datasets. We also present results of four way classification in Table TABREF21. All of our proposed methods outperform SoTA on both the in-domain and out of domain test set."
],
"extractive_spans": [],
"free_form_answer": "Precision and recall for 2-way classification and F1 for 4-way classification.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: 2-way classification results on SLN. *n-fold cross validation (precision, recall) as reported in SoTA.",
"FLOAT SELECTED: Table 3: 4-way classification results for different models. We only report F1-score following the SoTA paper.",
"Table TABREF20 shows the quantitative results for the two way classification between satirical and trusted news articles.",
"We also present results of four way classification in Table TABREF21."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Table TABREF20 shows the quantitative results for the two way classification between satirical and trusted news articles. Our proposed GAT method with 2 attention heads outperforms SoTA. The semantic similarity model does not seem to have much impact on the GCN model, and considering the computing cost, we don't experiment with it for the 4-way classification scenario. Given that we use SLN as an out of domain test set (just one overlapping source, no overlap in articles), whereas the SoTA paper BIBREF2 reports a 10-fold cross validation number on SLN. We believe that our results are quite strong, the GAT + 2 Attn Heads model achieves an accuracy of 87% on the entire RPN dataset when used as an out-of-domain test set. The SoTA paper BIBREF10 on RPN reports a 5-fold cross validation accuracy of 91%. These results indicate the generalizability of our proposed model across datasets. We also present results of four way classification in Table TABREF21. All of our proposed methods outperform SoTA on both the in-domain and out of domain test set.",
"FLOAT SELECTED: Table 2: 2-way classification results on SLN. *n-fold cross validation (precision, recall) as reported in SoTA.",
"FLOAT SELECTED: Table 3: 4-way classification results for different models. We only report F1-score following the SoTA paper.",
"We conduct experiments across various settings and datasets. We report macro-averaged scores in all the settings."
],
"extractive_spans": [],
"free_form_answer": "Macro-averaged F1-score, macro-averaged precision, macro-averaged recall",
"highlighted_evidence": [
"Table TABREF20 shows the quantitative results for the two way classification between satirical and trusted news articles. ",
"We also present results of four way classification in Table TABREF21. All of our proposed methods outperform SoTA on both the in-domain and out of domain test set.",
"FLOAT SELECTED: Table 2: 2-way classification results on SLN. *n-fold cross validation (precision, recall) as reported in SoTA.",
"FLOAT SELECTED: Table 3: 4-way classification results for different models. We only report F1-score following the SoTA paper.",
"We report macro-averaged scores in all the settings."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"38ff970615afb4800b93bcf7e8d9c188fa1ccd5f",
"dcb05ba5362ccec15bcc0fd770a9fb033e8fac72"
],
"answer": [
{
"evidence": [
"2-way classification b/w satire and trusted articles: We use the satirical and trusted news articles from LUN-train for training, and from LUN-test as the development set. We evaluate our model on the entire SLN dataset. This is done to emulate a real-world scenario where we want to see the performance of our classifier on an out of domain dataset. We don't use SLN for training purposes because it just contains 360 examples which is too little for training our model and we want to have an unseen test set. The best performing model on SLN is used to evaluate the performance on RPN.",
"4-way classification b/w satire, propaganda, hoax and trusted articles: We split the LUN-train into a 80:20 split to create our training and development set. We use the LUN-test as our out of domain test set."
],
"extractive_spans": [],
"free_form_answer": "In 2-way classification they used LUN-train for training, LUN-test for development and the entire SLN dataset for testing. In 4-way classification they used LUN-train for training and development and LUN-test for testing.",
"highlighted_evidence": [
"2-way classification b/w satire and trusted articles: We use the satirical and trusted news articles from LUN-train for training, and from LUN-test as the development set. We evaluate our model on the entire SLN dataset. This is done to emulate a real-world scenario where we want to see the performance of our classifier on an out of domain dataset. ",
"4-way classification b/w satire, propaganda, hoax and trusted articles: We split the LUN-train into a 80:20 split to create our training and development set. We use the LUN-test as our out of domain test set."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"2-way classification b/w satire and trusted articles: We use the satirical and trusted news articles from LUN-train for training, and from LUN-test as the development set. We evaluate our model on the entire SLN dataset. This is done to emulate a real-world scenario where we want to see the performance of our classifier on an out of domain dataset. We don't use SLN for training purposes because it just contains 360 examples which is too little for training our model and we want to have an unseen test set. The best performing model on SLN is used to evaluate the performance on RPN.",
"4-way classification b/w satire, propaganda, hoax and trusted articles: We split the LUN-train into a 80:20 split to create our training and development set. We use the LUN-test as our out of domain test set."
],
"extractive_spans": [
"entire SLN dataset",
" LUN-test as our out of domain test set"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use the satirical and trusted news articles from LUN-train for training, and from LUN-test as the development set. We evaluate our model on the entire SLN dataset. This is done to emulate a real-world scenario where we want to see the performance of our classifier on an out of domain dataset.",
"4-way classification b/w satire, propaganda, hoax and trusted articles: We split the LUN-train into a 80:20 split to create our training and development set. We use the LUN-test as our out of domain test set."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"19d57174a1af62dd65394773552ceb4fd47e6c9c",
"934e00789f36245d118cb4f7d8d413598c714ac1"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: 2-way classification results on SLN. *n-fold cross validation (precision, recall) as reported in SoTA.",
"FLOAT SELECTED: Table 3: 4-way classification results for different models. We only report F1-score following the SoTA paper.",
"Table TABREF20 shows the quantitative results for the two way classification between satirical and trusted news articles. Our proposed GAT method with 2 attention heads outperforms SoTA. The semantic similarity model does not seem to have much impact on the GCN model, and considering the computing cost, we don't experiment with it for the 4-way classification scenario. Given that we use SLN as an out of domain test set (just one overlapping source, no overlap in articles), whereas the SoTA paper BIBREF2 reports a 10-fold cross validation number on SLN. We believe that our results are quite strong, the GAT + 2 Attn Heads model achieves an accuracy of 87% on the entire RPN dataset when used as an out-of-domain test set. The SoTA paper BIBREF10 on RPN reports a 5-fold cross validation accuracy of 91%. These results indicate the generalizability of our proposed model across datasets. We also present results of four way classification in Table TABREF21. All of our proposed methods outperform SoTA on both the in-domain and out of domain test set."
],
"extractive_spans": [],
"free_form_answer": "In 2-way classification precision score was 88% and recall 82%. In 4-way classification on LUN-dev F1-score was 91% and on LUN-test F1-score was 65%.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: 2-way classification results on SLN. *n-fold cross validation (precision, recall) as reported in SoTA.",
"FLOAT SELECTED: Table 3: 4-way classification results for different models. We only report F1-score following the SoTA paper.",
"Table TABREF20 shows the quantitative results for the two way classification between satirical and trusted news articles. ",
"We also present results of four way classification in Table TABREF21. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Table TABREF20 shows the quantitative results for the two way classification between satirical and trusted news articles. Our proposed GAT method with 2 attention heads outperforms SoTA. The semantic similarity model does not seem to have much impact on the GCN model, and considering the computing cost, we don't experiment with it for the 4-way classification scenario. Given that we use SLN as an out of domain test set (just one overlapping source, no overlap in articles), whereas the SoTA paper BIBREF2 reports a 10-fold cross validation number on SLN. We believe that our results are quite strong, the GAT + 2 Attn Heads model achieves an accuracy of 87% on the entire RPN dataset when used as an out-of-domain test set. The SoTA paper BIBREF10 on RPN reports a 5-fold cross validation accuracy of 91%. These results indicate the generalizability of our proposed model across datasets. We also present results of four way classification in Table TABREF21. All of our proposed methods outperform SoTA on both the in-domain and out of domain test set."
],
"extractive_spans": [
"accuracy of 87% on the entire RPN dataset when used as an out-of-domain test set"
],
"free_form_answer": "",
"highlighted_evidence": [
"We believe that our results are quite strong, the GAT + 2 Attn Heads model achieves an accuracy of 87% on the entire RPN dataset when used as an out-of-domain test set."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"7462215612beab32468a06291c15715310779be7",
"839cd481fd9dfae657651f62172239627cdc5040"
],
"answer": [
{
"evidence": [
"We use SLN: Satirical and Legitimate News Database BIBREF2, RPN: Random Political News Dataset BIBREF10 and LUN: Labeled Unreliable News Dataset BIBREF0 for our experiments. Table TABREF4 shows the statistics. Since all of the previous methods on the aforementioned datasets are non-neural, we implement the following neural baselines,"
],
"extractive_spans": [
"Satirical and Legitimate News Database",
"Random Political News Dataset",
"Labeled Unreliable News Dataset"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use SLN: Satirical and Legitimate News Database BIBREF2, RPN: Random Political News Dataset BIBREF10 and LUN: Labeled Unreliable News Dataset BIBREF0 for our experiments. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We use SLN: Satirical and Legitimate News Database BIBREF2, RPN: Random Political News Dataset BIBREF10 and LUN: Labeled Unreliable News Dataset BIBREF0 for our experiments. Table TABREF4 shows the statistics. Since all of the previous methods on the aforementioned datasets are non-neural, we implement the following neural baselines,"
],
"extractive_spans": [
"Satirical and Legitimate News Database BIBREF2",
"RPN: Random Political News Dataset BIBREF10",
"LUN: Labeled Unreliable News Dataset BIBREF0"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use SLN: Satirical and Legitimate News Database BIBREF2, RPN: Random Political News Dataset BIBREF10 and LUN: Labeled Unreliable News Dataset BIBREF0 for our experiments."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"90026f075b0d2336fb12e6c3e0a02325858dca88",
"9d104af9bec66bceddc79e6985466e461e73d4fd"
],
"answer": [
{
"evidence": [
"We use SLN: Satirical and Legitimate News Database BIBREF2, RPN: Random Political News Dataset BIBREF10 and LUN: Labeled Unreliable News Dataset BIBREF0 for our experiments. Table TABREF4 shows the statistics. Since all of the previous methods on the aforementioned datasets are non-neural, we implement the following neural baselines,",
"CNN: In this model, we apply a 1-d CNN (Convolutional Neural Network) layer BIBREF11 with filter size 3 over the word embeddings of the sentences within a document. This is followed by a max-pooling layer to get a single document vector which is passed to a fully connected projection layer to get the logits over output classes.",
"LSTM: In this model, we encode the document using a LSTM (Long Short-Term Memory) layer BIBREF12. We use the hidden state at the last time step as the document vector which is passed to a fully connected projection layer to get the logits over output classes.",
"BERT: In this model, we extract the sentence vector (representation corresponding to [CLS] token) using BERT (Bidirectional Encoder Representations from Transformers) BIBREF4 for each sentence in the document. We then apply a LSTM layer on the sentence embeddings, followed by a projection layer to make the prediction for each document."
],
"extractive_spans": [
"CNN",
"LSTM",
"BERT"
],
"free_form_answer": "",
"highlighted_evidence": [
"Since all of the previous methods on the aforementioned datasets are non-neural, we implement the following neural baselines,\n\nCNN: In this model, we apply a 1-d CNN (Convolutional Neural Network) layer BIBREF11 with filter size 3 over the word embeddings of the sentences within a document. This is followed by a max-pooling layer to get a single document vector which is passed to a fully connected projection layer to get the logits over output classes.\n\nLSTM: In this model, we encode the document using a LSTM (Long Short-Term Memory) layer BIBREF12. We use the hidden state at the last time step as the document vector which is passed to a fully connected projection layer to get the logits over output classes.\n\nBERT: In this model, we extract the sentence vector (representation corresponding to [CLS] token) using BERT (Bidirectional Encoder Representations from Transformers) BIBREF4 for each sentence in the document. We then apply a LSTM layer on the sentence embeddings, followed by a projection layer to make the prediction for each document."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We use SLN: Satirical and Legitimate News Database BIBREF2, RPN: Random Political News Dataset BIBREF10 and LUN: Labeled Unreliable News Dataset BIBREF0 for our experiments. Table TABREF4 shows the statistics. Since all of the previous methods on the aforementioned datasets are non-neural, we implement the following neural baselines,",
"CNN: In this model, we apply a 1-d CNN (Convolutional Neural Network) layer BIBREF11 with filter size 3 over the word embeddings of the sentences within a document. This is followed by a max-pooling layer to get a single document vector which is passed to a fully connected projection layer to get the logits over output classes.",
"LSTM: In this model, we encode the document using a LSTM (Long Short-Term Memory) layer BIBREF12. We use the hidden state at the last time step as the document vector which is passed to a fully connected projection layer to get the logits over output classes.",
"BERT: In this model, we extract the sentence vector (representation corresponding to [CLS] token) using BERT (Bidirectional Encoder Representations from Transformers) BIBREF4 for each sentence in the document. We then apply a LSTM layer on the sentence embeddings, followed by a projection layer to make the prediction for each document."
],
"extractive_spans": [
"CNN",
"LSTM",
"BERT"
],
"free_form_answer": "",
"highlighted_evidence": [
"Since all of the previous methods on the aforementioned datasets are non-neural, we implement the following neural baselines,\n\nCNN: In this model, we apply a 1-d CNN (Convolutional Neural Network) layer BIBREF11 with filter size 3 over the word embeddings of the sentences within a document.",
"LSTM: In this model, we encode the document using a LSTM (Long Short-Term Memory) layer BIBREF12. ",
"BERT: In this model, we extract the sentence vector (representation corresponding to [CLS] token) using BERT (Bidirectional Encoder Representations from Transformers) BIBREF4 for each sentence in the document. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
""
],
"question": [
"What other evaluation metrics are reported?",
"What out of domain scenarios did they evaluate on?",
"What was their state of the art accuracy score?",
"Which datasets did they use?",
"What are the neural baselines mentioned?"
],
"question_id": [
"0c247a04f235a4375dd3b0fd0ce8d0ec72ef2256",
"66dfcdab1db6a8fcdf392157a478b4cca0d87961",
"7ef34b4996ada33a4965f164a8f96e20af7470c0",
"6e80386b33fbfba8bc1ab811a597d844ae67c578",
"1c182b4805b336bd6e1a3f43dc84b07db3908d4a"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: TSNE visualization (Van Der Maaten, 2014) of sentence embeddings obtained using BERT (Devlin et al., 2019) for two kind of news articles from SLN. A point denotes a sentence and the number indicates which paragraph it belonged to in the article.",
"Table 1: Statistics about different dataset sources. GN refers to Gigaword News.",
"Figure 2: Proposed semantic graph neural network based model for fake news classification.",
"Figure 3: Attention heatmaps generated by GAT for 2-way classification. Left: Trusted, Right: Satire.",
"Table 2: 2-way classification results on SLN. *n-fold cross validation (precision, recall) as reported in SoTA.",
"Table 3: 4-way classification results for different models. We only report F1-score following the SoTA paper."
],
"file": [
"1-Figure1-1.png",
"2-Table1-1.png",
"2-Figure2-1.png",
"4-Figure3-1.png",
"4-Table2-1.png",
"4-Table3-1.png"
]
} | [
"What other evaluation metrics are reported?",
"What out of domain scenarios did they evaluate on?",
"What was their state of the art accuracy score?"
] | [
[
"1910.12203-Experimental Setting-0",
"1910.12203-4-Table3-1.png",
"1910.12203-Results-0",
"1910.12203-4-Table2-1.png"
],
[
"1910.12203-Experimental Setting-1",
"1910.12203-Experimental Setting-2"
],
[
"1910.12203-4-Table3-1.png",
"1910.12203-Results-0",
"1910.12203-4-Table2-1.png"
]
] | [
"Macro-averaged F1-score, macro-averaged precision, macro-averaged recall",
"In 2-way classification they used LUN-train for training, LUN-test for development and the entire SLN dataset for testing. In 4-way classification they used LUN-train for training and development and LUN-test for testing.",
"In 2-way classification precision score was 88% and recall 82%. In 4-way classification on LUN-dev F1-score was 91% and on LUN-test F1-score was 65%."
] | 355 |
1911.07620 | Exploiting Token and Path-based Representations of Code for Identifying Security-Relevant Commits | Public vulnerability databases such as CVE and NVD account for only 60% of security vulnerabilities present in open-source projects, and are known to suffer from inconsistent quality. Over the last two years, there has been considerable growth in the number of known vulnerabilities across projects available in various repositories such as NPM and Maven Central. Such an increasing risk calls for a mechanism to infer the presence of security threats in a timely manner. We propose novel hierarchical deep learning models for the identification of security-relevant commits from either the commit diff or the source code for the Java classes. By comparing the performance of our model against code2vec, a state-of-the-art model that learns from path-based representations of code, and a logistic regression baseline, we show that deep learning models show promising results in identifying security-related commits. We also conduct a comparative analysis of how various deep learning models learn across different input representations and the effect of regularization on the generalization of our models. | {
"paragraphs": [
[
"The use of open-source software has been steadily increasing for some time now, with the number of Java packages in Maven Central doubling in 2018. However, BIBREF0 states that there has been an 88% growth in the number of vulnerabilities reported over the last two years. In order to develop secure software, it is essential to analyze and understand security vulnerabilities that occur in software systems and address them in a timely manner. While there exist several approaches in the literature for identifying and managing security vulnerabilities, BIBREF1 show that an effective vulnerability management approach must be code-centric. Rather than relying on metadata, efforts must be based on analyzing vulnerabilities and their fixes at the code level.",
"Common Vulnerabilities and Exposures (CVE) is a list of publicly known cybersecurity vulnerabilities, each with an identification number. These entries are used in the National Vulnerability Database (NVD), the U.S. government repository of standards based vulnerability management data. The NVD suffers from poor coverage, as it contains only 10% of the open-source vulnerabilities that have received a CVE identifier BIBREF2. This could be due to the fact that a number of security vulnerabilities are discovered and fixed through informal communication between maintainers and their users in an issue tracker. To make things worse, these public databases are too slow to add vulnerabilities as they lag behind a private database such as Snyk's DB by an average of 92 days BIBREF0 All of the above pitfalls of public vulnerability management databases (such as NVD) call for a mechanism to automatically infer the presence of security threats in open-source projects, and their corresponding fixes, in a timely manner.",
"We propose a novel approach using deep learning in order to identify commits in open-source repositories that are security-relevant. We build regularized hierarchical deep learning models that encode features first at the file level, and then aggregate these file-level representations to perform the final classification. We also show that code2vec, a model that learns from path-based representations of code and claimed by BIBREF3 to be suitable for a wide range of source code classification tasks, performs worse than our logistic regression baseline.",
"In this study, we seek to answer the following research questions:",
"[leftmargin=*]",
"RQ1: Can we effectively identify security-relevant commits using only the commit diff? For this research question, we do not use any of the commit metadata such as the commit message or information about the author. We treat source code changes like unstructured text without using path-based representations from the abstract syntax tree.",
"RQ2: Does extracting class-level features before and after the change instead of using only the commit diff improve the identification of security-relevant commits? For this research question, we test the hypothesis that the source code of the entire Java class contains more information than just the commit diff and could potentially improve the performance of our model.",
"RQ3: Does exploiting path-based representations of Java source code before and after the change improve the identification of security-relevant commits? For this research question, we test whether code2vec, a state-of-the-art model that learns from path-based representations of code, performs better than our model that treats source code as unstructured text.",
"RQ4: Is mining commits using regular expression matching of commit messages an effective means of data augmentation for improving the identification of security-relevant commits? Since labelling commits manually is an expensive task, it is not easy to build a dataset large enough to train deep learning models. For this research question, we explore if collecting coarse data samples using a high-precision approach is an effective way to augment the ground-truth dataset.",
"The main contributions of this paper are:",
"[leftmargin=*]",
"Novel hierarchical deep learning models for the identification of security-relevant commits based on either the diff or the modified source code of the Java classes.",
"A comparative analysis of how various deep learning models perform across different input representations and how various regularization techniques help with the generalization of our models.",
"We envision that this work would ultimately allow for monitoring open-source repositories in real-time, in order to automatically detect security-relevant changes such as vulnerability fixes."
],
[
"In computational linguistics, there has been a lot of effort over the last few years to create a continuous higher dimensional vector space representation of words, sentences, and even documents such that similar entities are closer to each other in that space BIBREF4, BIBREF5, BIBREF6. BIBREF4 introduced word2vec, a class of two-layer neural network models that are trained on a large corpus of text to produce word embeddings for natural language. Such learned distributed representations of words have accelerated the application of deep learning techniques for natural language processing (NLP) tasks BIBREF7.",
"BIBREF8 show that convolutional neural networks (CNNs) can achieve state-of-the-art results in single-sentence sentiment prediction, among other sentence classification tasks. In this approach, the vector representations of the words in a sentence are concatenated vertically to create a two-dimensional matrix for each sentence. The resulting matrix is passed through a CNN to extract higher-level features for performing the classification. BIBREF9 introduce the hierarchical attention network (HAN), where a document vector is progressively built by aggregating important words into sentence vectors, and then aggregating important sentences vectors into document vectors.",
"Deep neural networks are prone to overfitting due to the possibility of the network learning complicated relationships that exist in the training set but not in unseen test data. Dropout prevents complex co-adaptations of hidden units on training data by randomly removing (i.e. dropping out) hidden units along with their connections during training BIBREF10. Embedding dropout, used by BIBREF11 for neural language modeling, performs dropout on entire word embeddings. This effectively removes a proportion of the input tokens randomly at each training iteration, in order to condition the model to be robust against missing input.",
"While dropout works well for regularizing fully-connected layers, it is less effective for convolutional layers due to the spatial correlation of activation units in convolutional layers. There have been a number of attempts to extend dropout to convolutional neural networks BIBREF12. DropBlock is a form of structured dropout for convolutional layers where units in a contiguous region of a feature map are dropped together BIBREF13."
],
[
"While building usable embeddings for source code that capture the complex characteristics involving both syntax and semantics is a challenging task, such embeddings have direct downstream applications in tasks such as semantic code clone detection, code captioning, and code completion BIBREF14, BIBREF15. In the same vein as BIBREF4, neural networks have been used for representing snippets of code as continuous distributed vectors BIBREF16. They represent a code snippet as a bag of contexts and each context is represented by a context vector, followed by a path-attention network that learns how to aggregate these context vectors in a weighted manner.",
"A number of other code embedding techniques are also available in the literature. BIBREF17 learn word embeddings from abstractions of traces obtained from the symbolic execution of a program. They evaluate their learned embeddings on a benchmark of API-usage analogies extracted from the Linux kernel and achieved 93% top-1 accuracy. BIBREF18 describe a pipeline that leverages deep learning for semantic search of code. To achieve this, they train a sequence-to-sequence model that learns to summarize Python code by predicting the corresponding docstring from the code blob, and in the process provide code representations for Python."
],
[
"There exist a handful of papers in software engineering that perform commit classification to identify security vulnerabilities or fixes. BIBREF19 describe an efficient vulnerability identification system geared towards tracking large-scale projects in real time using latent information underlying commit messages and bug reports in open-source projects. While BIBREF19 classify commits based on the commit message, we use only the commit diff or the corresponding source code as features for our model. BIBREF2 propose a machine learning approach to identify security-relevant commits. However, they treat source code as documents written in natural language and use well-known document classification methods to perform the actual classification. BIBREF20 conduct an analysis to identify which security vulnerabilities can be discovered during code review, or what characteristics of developers are likely to introduce vulnerabilities."
],
[
"This section details the methodology used in this study to build the training dataset, the models used for classification and the evaluation procedure. All of the experiments are conducted on Python 3.7 running on an Intel Core i7 6800K CPU and a Nvidia GTX 1080 GPU. All the deep learning models are implemented in PyTorch 0.4.1 BIBREF21, while Scikit-learn 0.19.2 BIBREF22 is used for computing the tf–idf vectors and performing logistic regression.",
"For training our classification models, we use a manually-curated dataset of publicly disclosed vulnerabilities in 205 distinct open-source Java projects mapped to commits fixing them, provided by BIBREF23. These repositories are split into training, validation, and test splits containing 808, 265, and 264 commits, respectively. In order to minimize the occurrence of duplicate commits in two of these splits (such as in both training and test), commits from no repository belong to more than one split. However, 808 commits may not be sufficient to train deep learning models. Hence, in order to answer RQ4, we augment the training split with commits mined using regular expression matching on the commit messages from the same set of open-source Java projects. This almost doubles the number of commits in the training split to 1493. We then repeat our experiments for the first three research questions on the augmented dataset, and evaluate our trained models on the same validation and test splits.",
"We also compare the quality of randomly-initialized embeddings with pre-trained ones. Since the word2vec embeddings only need unlabelled data to train, the data collection and preprocessing stage is straightforward. GitHub, being a very large host of source code, contains enough code for training such models. However, a significant proportion of code in GitHub does not belong to engineered software projects BIBREF24. To reduce the amount of noise in our training data, we filter repositories based on their size, commit history, number of issues, pull requests, and contributors, and build a corpus of the top 1000 Java repositories. We limit the number of repositories to 1000 due to GitHub API limitations. It is worth noting that using a larger training corpus might provide better results. For instance, code2vec is pre-trained on a corpus that is ten times larger.",
"To extract token-level features for our model, we use the lexer and tokenizer provided as a part of the Python javalang library. We ensure that we only use the code and not code comments or metadata, as it is possible for comments or commit messages to include which vulnerabilities are fixed, as shown in Figure FIGREF12. Our models would then overfit on these features rather than learning the features from the code. For extracting path-based representations from Java code, we use ASTMiner."
],
[
"We learn token-level vectors for code using the CBOW architecture BIBREF4, with negative sampling and a context window size of 5. Using CBOW over skip-gram is a deliberate design decision. While skip-gram is better for infrequent words, we felt that it is more important to focus on the more frequent words (inevitably, the keywords in a programming language) when it comes to code. Since we only perform minimal preprocessing on the code (detailed below), the most infrequent words will usually be variable identifiers. Following the same line of reasoning, we choose negative sampling over hierarchical-softmax as the training algorithm.",
"We do not normalize variable identifiers into generic tokens as they could contain contextual information. However, we do perform minimal preprocessing on the code before training the model. This includes:",
"The removal of comments and whitespace when performing tokenization using a lexer.",
"The conversion of all numbers such as integers and floating point units into reserved tokens.",
"The removal of tokens whose length is greater than or equal to 64 characters.",
"Thresholding the size of the vocabulary to remove infrequent tokens."
],
[
"We modify our model accordingly for every research question, based on changes in the input representation. To benchmark the performance of our deep learning models, we compare them against a logistic regression (LR) baseline that learns on one-hot representations of the Java tokens extracted from the commit diffs. For all of our models, we employ dropout on the fully-connected layer for regularization. We use Adam BIBREF25 for optimization, with a learning rate of 0.001, and batch size of 16 for randomly initialized embeddings and 8 for pre-trained embeddings.",
"For RQ1, we use a hierarchical CNN (H-CNN) with either randomly-initialized or pre-trained word embeddings in order to extract features from the commit diff. We represent the commit diff as a concatenation of 300-dimensional vectors for each corresponding token from that diff. This resultant matrix is then passed through three temporal convolutional layers in parallel, with filter windows of size 3, 5, and 7. A temporal max-pooling operation is applied to these feature maps to retain the feature with the highest value in every map. We also present a regularized version of this model (henceforth referred to as HR-CNN) with embedding dropout applied on the inputs, and DropBlock on the activations of the convolutional layers.",
"For RQ2, we made a modification to both the H-CNN and HR-CNN models in order to extract features from the source code for the Java classes before and after the commit. Both of these models use a siamese architecture between the two CNN-based encoders as shown in Figure FIGREF20. We then concatenate the results from both of these encoders and pass it through a fully-connected layer followed by softmax for prediction.",
"For RQ3, we adapt the code2vec model used by BIBREF16 for predicting method names into a model for predicting whether a commit is security-relevant by modifying the final layer. We then repeat our experiments on both the ground-truth and augmented dataset."
],
[
"The results for all of our models on both the ground-truth and augmented datasets are given in Table TABREF22.",
"RQ1: Can we effectively identify security-relevant commits using only the commit diff?",
"Without using any of the metadata present in a commit, such as the commit message or information about the author, we are able to correctly classify commits based on their security-relevance with an accuracy of 65.3% and $\\text{F}_1$of 77.6% on unseen test data. Table TABREF22, row 5, shows that using our regularized HR-CNN model with pre-trained embeddings provides the best overall results on the test split when input features are extracted from the commit diff. Table TABREF22, row 3, shows that while H-CNN provides the most accurate results on the validation split, it doesn't generalize as well to unseen test data. While these results are usable, H-CNN and HR-CNN only perform 3 points better than the LR baseline (Table TABREF22, row 1) in terms of $\\text{F}_1$and 2 points better in terms of accuracy.",
"RQ2: Does extracting class-level features before and after the change instead of using only the commit diff improve the identification of security-relevant commits?",
"When extracting features from the complete source code of the Java classes which are modified in the commit, the performance of HR-CNN increases noticeably. Table TABREF22, row 9, shows that the accuracy of HR-CNN when using pre-trained embeddings increases to 72.6% and $\\text{F}_1$increases to 79.7%. This is considerably above the LR baseline and justifies the use of a more complex deep learning model. Meanwhile, the performance of H-CNN with randomly-initialized embeddings (Table TABREF22, row 6) does not improve when learning on entire Java classes, but there is a marked improvement in $\\text{F}_1$of about 6 points when using pre-trained embeddings. Hence, we find that extracting class-level features from the source code before and after the change, instead of using only the commit diff, improves the identification of security-relevant commits.",
"RQ3: Does exploiting path-based representations of the Java classes before and after the change improve the identification of security-relevant commits?",
"Table TABREF22, row 10, shows that training the modified code2vec model to identify security-aware commits from scratch results in a model that performs worse than the LR baseline. The model only achieves an accuracy of 63.8% on the test split, with an $\\text{F}_1$score of 72.7%, which is two points less than that of LR. The code2vec model performs much worse compared to H-CNN and HR-CNN with randomly-initialized embeddings. Hence, learning from a path-based representation of the Java classes before and after the change does not improve the identification of security-relevant commits—at least with the code2vec approach.",
"RQ4: Is mining commits using regular expression matching of commit messages an effective means of data augmentation for improving the identification of security-relevant commits?",
"The results in Table TABREF22, rows 11 to 20, show that collecting coarse data samples using regular expression matching for augmenting the ground-truth training set is not effective in increasing the performance of our models. This could possibly be due to the coarse data samples being too noisy or the distribution of security-relevant commits in the coarse dataset not matching that of the unseen dataset. The latter might have been due to the high-precision mining technique used, capturing only a small subset of security vulnerabilities."
],
[
"The lexer and tokenizer we use from the javalang library target Java 8. We are not able to verify that all the projects and their forks in this study are using the same version of Java. However, we do not expect considerable differences in syntax between Java 7 and Java 8 except for the introduction of lambda expressions.",
"There is also a question of to what extent the 635 publicly disclosed vulnerabilities used for evaluation in this study represent the vulnerabilities found in real-world scenarios. While creating larger ground-truth datasets would always be helpful, it might not always be possible. To reduce the possibility of bias in our results, we ensure that we don't train commits from the same projects that we evaluate our models on. We also discard any commits belonging to the set of evaluation projects that are mined using regular expression matching.",
"We directly train code2vec on our dataset without pre-training it, in order to assess how well path-based representations perform for learning on code, as opposed to token-level representations on which H-CNN and HR-CNN are based. However, BIBREF16 pre-trained their model on 10M Java classes. It is possible that the performance of code2vec is considerably better than the results in Table TABREF22 after pre-training. Furthermore, our findings apply only to this particular technique to capturing path-based representations, not the approach in general. However, we leave both issues for future work."
],
[
"In this study, we propose a novel hierarchical deep learning model for the identification of security-relevant commits and show that deep learning has much to offer when it comes to commit classification. We also make a case for pre-training word embeddings on tokens extracted from Java code, which leads to performance improvements. We are able to further improve the results using a siamese architecture connecting two CNN-based encoders to represent the modified files before and after a commit.",
"Network architectures that are effective on a certain task, such as predicting method names, are not necessarily effective on related tasks. Thus, choices between neural models should be made considering the nature of the task and the amount of training data available. Based on the model's ability to predict method names in files across different projects, BIBREF16 claim that code2vec can be used for a wide range of programming language processing tasks. However, for predicting the security relevance of commits, H-CNN and HR-CNN appear to be much better than code2vec.",
"A potential research direction would be to build language models for programming languages based on deep language representation models. Neural networks are becoming increasingly deeper and complex in the NLP literature, with significant interest in deep language representation models such as ELMo, GPT, and BERT BIBREF26, BIBREF27, BIBREF28. BIBREF28 show strong empirical performance on a broad range of NLP tasks. Since all of these models are pre-trained in an unsupervised manner, it would be easy to pre-train such models on the vast amount of data available on GitHub.",
"Deep learning models are known for scaling well with more data. However, with less than 1,000 ground-truth training samples and around 1,800 augmented training samples, we are unable to exploit the full potential of deep learning. A reflection on the current state of labelled datasets in software engineering (or the lack thereof) throws light on limited practicality of deep learning models for certain software engineering tasks BIBREF29. As stated by BIBREF30, just as research in NLP changed focus from brittle rule-based expert systems to statistical methods, software engineering research should augment traditional methods that consider only the formal structure of programs with information about the statistical properties of code. Ongoing research on pre-trained code embeddings that don't require a labelled dataset for training is a step in the right direction. Drawing parallels with the recent history of NLP research, we are hoping that further study in the domain of code embeddings will considerably accelerate progress in tackling software problems with deep learning."
],
[
"We would like to thank SAP and NSERC for their support towards this project."
]
],
"section_name": [
"Introduction",
"Background and Related Work ::: Neural Networks for Text Classification",
"Background and Related Work ::: Learning Embeddings for Source Code",
"Background and Related Work ::: Identifying Security Vulnerabilities",
"Experimental Setup",
"Model ::: Training Word2vec Embeddings",
"Model ::: Identifying Security Vulnerabilities",
"Results and Discussion",
"Results and Discussion ::: Threats to Validity",
"Conclusions and Future Work",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"2f570b089e48d89dba45acfc02d76568cb6f74d3",
"caed6b923f2083c5bc87f6a8c5b7cfdda6e2b786"
],
"answer": [
{
"evidence": [
"We modify our model accordingly for every research question, based on changes in the input representation. To benchmark the performance of our deep learning models, we compare them against a logistic regression (LR) baseline that learns on one-hot representations of the Java tokens extracted from the commit diffs. For all of our models, we employ dropout on the fully-connected layer for regularization. We use Adam BIBREF25 for optimization, with a learning rate of 0.001, and batch size of 16 for randomly initialized embeddings and 8 for pre-trained embeddings.",
"For RQ1, we use a hierarchical CNN (H-CNN) with either randomly-initialized or pre-trained word embeddings in order to extract features from the commit diff. We represent the commit diff as a concatenation of 300-dimensional vectors for each corresponding token from that diff. This resultant matrix is then passed through three temporal convolutional layers in parallel, with filter windows of size 3, 5, and 7. A temporal max-pooling operation is applied to these feature maps to retain the feature with the highest value in every map. We also present a regularized version of this model (henceforth referred to as HR-CNN) with embedding dropout applied on the inputs, and DropBlock on the activations of the convolutional layers."
],
"extractive_spans": [
"dropout",
"embedding dropout",
"DropBlock"
],
"free_form_answer": "",
"highlighted_evidence": [
"For all of our models, we employ dropout on the fully-connected layer for regularization.",
"We also present a regularized version of this model (henceforth referred to as HR-CNN) with embedding dropout applied on the inputs, and DropBlock on the activations of the convolutional layers."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For RQ1, we use a hierarchical CNN (H-CNN) with either randomly-initialized or pre-trained word embeddings in order to extract features from the commit diff. We represent the commit diff as a concatenation of 300-dimensional vectors for each corresponding token from that diff. This resultant matrix is then passed through three temporal convolutional layers in parallel, with filter windows of size 3, 5, and 7. A temporal max-pooling operation is applied to these feature maps to retain the feature with the highest value in every map. We also present a regularized version of this model (henceforth referred to as HR-CNN) with embedding dropout applied on the inputs, and DropBlock on the activations of the convolutional layers."
],
"extractive_spans": [
"dropout",
"DropBlock"
],
"free_form_answer": "",
"highlighted_evidence": [
"We also present a regularized version of this model (henceforth referred to as HR-CNN) with embedding dropout applied on the inputs, and DropBlock on the activations of the convolutional layers."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"1abcd65e799b45f6d55ef5a24d0e0afded485aa9",
"bb28f6b4efa7474c6417b10c179117468dac99b8"
],
"answer": [
{
"evidence": [
"The results for all of our models on both the ground-truth and augmented datasets are given in Table TABREF22.",
"FLOAT SELECTED: Table 1: Results for each model on the validation and test splits; best values are bolded."
],
"extractive_spans": [],
"free_form_answer": "Accuracy, Precision, Recall, F1-score",
"highlighted_evidence": [
"The results for all of our models on both the ground-truth and augmented datasets are given in Table TABREF22.\n\n",
"FLOAT SELECTED: Table 1: Results for each model on the validation and test splits; best values are bolded."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The results for all of our models on both the ground-truth and augmented datasets are given in Table TABREF22."
],
"extractive_spans": [],
"free_form_answer": "Accuracy, precision, recall and F1 score.",
"highlighted_evidence": [
"The results for all of our models on both the ground-truth and augmented datasets are given in Table TABREF22."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"268141092945c0e956624a346d7862220659dc33",
"cdf077254c6dca284188c58e3268954905138df8"
],
"answer": [
{
"evidence": [
"For training our classification models, we use a manually-curated dataset of publicly disclosed vulnerabilities in 205 distinct open-source Java projects mapped to commits fixing them, provided by BIBREF23. These repositories are split into training, validation, and test splits containing 808, 265, and 264 commits, respectively. In order to minimize the occurrence of duplicate commits in two of these splits (such as in both training and test), commits from no repository belong to more than one split. However, 808 commits may not be sufficient to train deep learning models. Hence, in order to answer RQ4, we augment the training split with commits mined using regular expression matching on the commit messages from the same set of open-source Java projects. This almost doubles the number of commits in the training split to 1493. We then repeat our experiments for the first three research questions on the augmented dataset, and evaluate our trained models on the same validation and test splits."
],
"extractive_spans": [
"almost doubles the number of commits in the training split to 1493",
"validation, and test splits containing 808, 265, and 264 commits"
],
"free_form_answer": "",
"highlighted_evidence": [
"For training our classification models, we use a manually-curated dataset of publicly disclosed vulnerabilities in 205 distinct open-source Java projects mapped to commits fixing them, provided by BIBREF23. These repositories are split into training, validation, and test splits containing 808, 265, and 264 commits, respectively.",
"However, 808 commits may not be sufficient to train deep learning models. Hence, in order to answer RQ4, we augment the training split with commits mined using regular expression matching on the commit messages from the same set of open-source Java projects. This almost doubles the number of commits in the training split to 1493."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For training our classification models, we use a manually-curated dataset of publicly disclosed vulnerabilities in 205 distinct open-source Java projects mapped to commits fixing them, provided by BIBREF23. These repositories are split into training, validation, and test splits containing 808, 265, and 264 commits, respectively. In order to minimize the occurrence of duplicate commits in two of these splits (such as in both training and test), commits from no repository belong to more than one split. However, 808 commits may not be sufficient to train deep learning models. Hence, in order to answer RQ4, we augment the training split with commits mined using regular expression matching on the commit messages from the same set of open-source Java projects. This almost doubles the number of commits in the training split to 1493. We then repeat our experiments for the first three research questions on the augmented dataset, and evaluate our trained models on the same validation and test splits."
],
"extractive_spans": [],
"free_form_answer": "2022",
"highlighted_evidence": [
"For training our classification models, we use a manually-curated dataset of publicly disclosed vulnerabilities in 205 distinct open-source Java projects mapped to commits fixing them, provided by BIBREF23. These repositories are split into training, validation, and test splits containing 808, 265, and 264 commits, respectively. ",
"Hence, in order to answer RQ4, we augment the training split with commits mined using regular expression matching on the commit messages from the same set of open-source Java projects. This almost doubles the number of commits in the training split to 1493. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"3ea2ee5897504128127204faf0d92d3abde756f7",
"c5bebd2323fd5ae5521b4ab27e5126cff770048a"
],
"answer": [
{
"evidence": [
"For training our classification models, we use a manually-curated dataset of publicly disclosed vulnerabilities in 205 distinct open-source Java projects mapped to commits fixing them, provided by BIBREF23. These repositories are split into training, validation, and test splits containing 808, 265, and 264 commits, respectively. In order to minimize the occurrence of duplicate commits in two of these splits (such as in both training and test), commits from no repository belong to more than one split. However, 808 commits may not be sufficient to train deep learning models. Hence, in order to answer RQ4, we augment the training split with commits mined using regular expression matching on the commit messages from the same set of open-source Java projects. This almost doubles the number of commits in the training split to 1493. We then repeat our experiments for the first three research questions on the augmented dataset, and evaluate our trained models on the same validation and test splits."
],
"extractive_spans": [
"manually-curated dataset of publicly disclosed vulnerabilities in 205 distinct open-source Java projects mapped to commits fixing them"
],
"free_form_answer": "",
"highlighted_evidence": [
"For training our classification models, we use a manually-curated dataset of publicly disclosed vulnerabilities in 205 distinct open-source Java projects mapped to commits fixing them, provided by BIBREF23."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For training our classification models, we use a manually-curated dataset of publicly disclosed vulnerabilities in 205 distinct open-source Java projects mapped to commits fixing them, provided by BIBREF23. These repositories are split into training, validation, and test splits containing 808, 265, and 264 commits, respectively. In order to minimize the occurrence of duplicate commits in two of these splits (such as in both training and test), commits from no repository belong to more than one split. However, 808 commits may not be sufficient to train deep learning models. Hence, in order to answer RQ4, we augment the training split with commits mined using regular expression matching on the commit messages from the same set of open-source Java projects. This almost doubles the number of commits in the training split to 1493. We then repeat our experiments for the first three research questions on the augmented dataset, and evaluate our trained models on the same validation and test splits.",
"We also compare the quality of randomly-initialized embeddings with pre-trained ones. Since the word2vec embeddings only need unlabelled data to train, the data collection and preprocessing stage is straightforward. GitHub, being a very large host of source code, contains enough code for training such models. However, a significant proportion of code in GitHub does not belong to engineered software projects BIBREF24. To reduce the amount of noise in our training data, we filter repositories based on their size, commit history, number of issues, pull requests, and contributors, and build a corpus of the top 1000 Java repositories. We limit the number of repositories to 1000 due to GitHub API limitations. It is worth noting that using a larger training corpus might provide better results. For instance, code2vec is pre-trained on a corpus that is ten times larger."
],
"extractive_spans": [],
"free_form_answer": "Dataset of publicly disclosed vulnerabilities from 205 Java projects from GitHub and 1000 Java repositories from Github",
"highlighted_evidence": [
"For training our classification models, we use a manually-curated dataset of publicly disclosed vulnerabilities in 205 distinct open-source Java projects mapped to commits fixing them, provided by BIBREF23. ",
"Since the word2vec embeddings only need unlabelled data to train, the data collection and preprocessing stage is straightforward. GitHub, being a very large host of source code, contains enough code for training such models. ",
"To reduce the amount of noise in our training data, we filter repositories based on their size, commit history, number of issues, pull requests, and contributors, and build a corpus of the top 1000 Java repositories. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"What regularization methods are used?",
"What metrics are used?",
"How long is the dataset?",
"What dataset do they use?"
],
"question_id": [
"f71b95001dce46ee35cdbd8d177676de19ca2611",
"5aa6556ffd7142933f820a015f1294d38e8cd96c",
"10edfb9428b8a4652274c13962917662fdf84f8a",
"a836ab8eb5a72af4b0a0c83bf42a2a14d1b38763"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: A code snippet from Apache Struts with the Javadoc stating which vulnerability was addressed",
"Figure 2: Illustration of our H-CNN model for learning on diff tokens, where the labels are the following: (a) source code diffs from multiple files, (b) stacked token embeddings, (c) convolutional feature extraction, (d) max-pool across time, (e) file-level feature maps, (f) convolutional feature extraction, (g) max-pool across time, (h) commit-level feature vector, and (i) softmax output.",
"Table 1: Results for each model on the validation and test splits; best values are bolded."
],
"file": [
"4-Figure1-1.png",
"5-Figure2-1.png",
"6-Table1-1.png"
]
} | [
"What metrics are used?",
"How long is the dataset?",
"What dataset do they use?"
] | [
[
"1911.07620-Results and Discussion-0",
"1911.07620-6-Table1-1.png"
],
[
"1911.07620-Experimental Setup-1"
],
[
"1911.07620-Experimental Setup-2",
"1911.07620-Experimental Setup-1"
]
] | [
"Accuracy, precision, recall and F1 score.",
"2022",
"Dataset of publicly disclosed vulnerabilities from 205 Java projects from GitHub and 1000 Java repositories from Github"
] | 356 |
1911.03353 | SEPT: Improving Scientific Named Entity Recognition with Span Representation | We introduce a new scientific named entity recognizer called SEPT, which stands for Span Extractor with Pre-trained Transformers. In recent papers, span extractors have been demonstrated to be powerful models compared with sequence labeling models. However, we discover that with the development of pre-trained language models, the performance of span extractors appears to become similar to that of sequence labeling models. To keep the advantages of span representation, we modified the model by under-sampling to balance the positive and negative samples and reduce the search space. Furthermore, we simplify the original network architecture to combine the span extractor with BERT. Experiments demonstrate that even the simplified architecture achieves the same performance, and SEPT achieves a new state-of-the-art result in scientific named entity recognition even without relation information involved. | {
"paragraphs": [
[
"With the increasing number of scientific publications in the past decades, improving the performance of automatically information extraction in the papers has been a task of concern. Scientific named entity recognition is the key task of information extraction because the overall performance depends on the result of entity extraction in both pipeline and joint models BIBREF0.",
"Named entity recognition has been regarded as a sequence labeling task in most papers BIBREF1. Unlike the sequence labeling model, the span-based model treats an entity as a whole span representation while the sequence labeling model predicts labels in each time step independently. Recent papers BIBREF2, BIBREF3 have shown the advantages of span-based models. Firstly, it can model overlapping and nested named entities. Besides, by extracting the span representation, it can be shared to train in a multitask framework. In this way, span-based models always outperform the traditional sequence labeling models. For all the advantages of the span-based model, there is one more factor that affects performance. The original span extractor needs to score all spans in a text, which is usually a $O(n^2)$ time complexity. However, the ground truths are only a few spans, which means the input samples are extremely imbalanced.",
"Due to the scarcity of annotated corpus of scientific papers, the pre-trained language model is an important role in the task. Recent progress such as ELMo BIBREF4, GPT BIBREF5, BERT BIBREF6 improves the performance of many NLP tasks significantly including named entity recognition. In the scientific domain, SciBERT BIBREF7 leverages a large corpus of scientific text, providing a new resource of the scientific language model. After combining the pre-trained language model with span extractors, we discover that the performance between span-based models and sequence labeling models become similar.",
"In this paper, we propose an approach to improve span-based scientific named entity recognition. Unlike previous papers, we focus on named entity recognition rather than multitask framework because the multitask framework is natural to help. We work on single-tasking and if we can improve the performance on a single task, the benefits on many tasks are natural.",
"To balance the positive and negative samples and reduce the search space, we remove the pruner and modify the model by under-sampling. Furthermore, because there is a multi-head self-attention mechanism in transformers and they can capture interactions between tokens, we don't need more attention or LSTM network in span extractors. So we simplify the origin network architecture and extract span representation by a simple pooling layer. We call the final scientific named entity recognizer SEPT.",
"Experiments demonstrate that even simplified architecture achieves the same performance and SEPT achieves a new state of the art result compared to existing transformer-based systems."
],
[
"The first Span-based model was proposed by BIBREF8, who apply this model to a coreference resolution task. Later, BIBREF3, BIBREF2 extend it to various tasks, such as semantic role labeling, named entity recognition and relation extraction. BIBREF2 is the first one to perform a scientific information extraction task by a span-based model and construct a dataset called SCIERC, which is the only computer science-related fine-grained information extraction dataset to our best knowledge. BIBREF9 further introduces a general framework for the information extraction task by adding a dynamic graph network after span extractors.",
"They use ELMo as word embeddings, then feed these embeddings into a BiLSTM network to capture context features. They enumerate all possible spans, each span representation is obtained by some attention mechanism and concatenating strategy. Then score them and use a pruner to remove spans that have a lower possibility to be a span. Finally, the rest of the spans are classified into different types of entities."
],
[
"Due to the scarcity of annotated corpus in the scientific domain, SciBert BIBREF7 is present to improve downstream scientific NLP tasks. SciBert is a pre-trained language model based on BERT but trained on a large scientific corpus.",
"For named entity recognition task, they feed the final BERT embeddings into a linear classification layer with softmax output. Then they use a conditional random field to guarantee well-formed entities. In their experiments, they get the best result on finetuned SciBert and an in-domain scientific vocabulary."
],
[
"Our model is consists of four parts as illustrated in figure FIGREF2: Embedding layer, sampling layer, span extractor, classification layer."
],
[
"We use a pre-trained SciBert as our context encoder. Formally, the input document is represented as a sequence of words $D = \\lbrace w_1, w_2, \\dots , w_n\\rbrace $, in which $n$ is the length of the document. After feeding into the SciBert model, we obtain the context embeddings $E = \\lbrace \\mathbf {e}_1, \\mathbf {e}_2, \\dots , \\mathbf {e}_n\\rbrace $."
],
[
"In the sampling layer, we sample continuous sub-strings from the embedding layer, which is also called span. Because we know the exact label of each sample in the training phase, so we can train the model in a particular way. For those negative samples, which means each span does not belong to any entity class, we randomly sampling them rather than enumerate them all. This is a simple but effective way to improve both performance and efficiency. For those ground truth, we keep them all. In this way, we can obtain a balanced span set: $S = S_{neg} \\cup S_{pos} $. In which $S_{neg} = \\lbrace s^{\\prime }_1, s^{\\prime }_2, \\dots , s^{\\prime }_p\\rbrace $, $S_{pos} = \\lbrace s_1, s_2, \\dots , s_q\\rbrace $. Both $s$ and $s^{\\prime }$ is consist of $\\lbrace \\mathbf {e}_i ,\\dots ,\\mathbf {e}_j\\rbrace $, $i$ and $j$ are the start and end index of the span. $p$ is a hyper-parameter: the negative sample number. $q$ is the positive sample number. We further explore the effect of different $p$ in the experiment section."
],
[
"Span extractor is responsible to extract a span representation from embeddings. In previous work BIBREF8, endpoint features, content attention, and span length embedding are concatenated to represent a span. We perform a simple max-pooling to extract span representation because those features are implicitly included in self-attention layers of transformers. Formally, each element in the span vector is:",
"$t$ is ranged from 1 to embedding length. $\\mathbf {e}_i, \\dots , \\mathbf {e}_j$ are embeddings in the span $s$. In this way, we obtain a span representation, whose length is the same as word embedding."
],
[
"We use an MLP to classify spans into different types of entities based on span representation $\\mathbf {r}$. The score of each type $l$ is:",
"We then define a set of random variables, where each random variable $y_s$ corresponds to the span $s$, taking value from the discrete label space $\\mathcal {L}$. The random variables $y_s$ are conditionally independent of each other given the input document $D$:",
"For each document $D$, we minimize the negative log-likelihood for the ground truth $Y^*$:"
],
[
"During the evaluation phase, because we can't peek the ground truth of each span, we can't do negative sampling as described above. To make the evaluation phase effective, we build a pre-trained filter to remove the less possible span in advance. This turns the task into a pipeline: firstly, predict whether the span is an entity, then predict the type. To avoid the cascading error, we select a threshold value to control the recall of this stage. In our best result, we can filter 73.8% negative samples with a 99% recall."
],
[
"In our experiment, we aim to explore 4 questions:",
"How does SEPT performance comparing to the existing single task system?",
"How do different numbers of negative samples affect the performance?",
"How a max-pooling extractor performance comparing to the previous method?",
"How does different threshold effect the filter?",
"Each question corresponds to the subsection below. We document the detailed hyperparameters in the appendix."
],
[
"Table TABREF20 shows the overall test results. We run each system on the SCIERC dataset with the same split scheme as the previous work. In BiLSTM model, we use Glove BIBREF10, ELMo BIBREF4 and SciBERT(fine-tuned) BIBREF7 as word embeddings and then concatenate a CRF layer at the end. In SCIIE BIBREF2, we report single task scores and use ELMo embeddings as the same as they described in their paper. To eliminate the effect of pre-trained embeddings and perform a fair competition, we add a SciBERT layer in SCIIE and fine-tune model parameters like other BERT-based models.",
"We discover that performance improvement is mainly supported by the pre-trained external resources, which is very helpful for such a small dataset. In ELMo model, SCIIE achieves almost 3.0% F1 higher than BiLSTM. But in SciBERT, the performance becomes similar, which is only a 0.5% gap.",
"SEPT still has an advantage comparing to the same transformer-based models, especially in the recall."
],
[
"As shown in figure FIGREF22, we get the best F1 score on around 250 negative samples. This experiment shows that with the number of negative samples increasing, the performance becomes worse."
],
[
"In this experiment, we want to explore how different parts of span extractor behave when a span extractor applied to transformers in an ablating study.",
"As shown in table TABREF24, we discovered that explicit features are no longer needed in this situation. Bert model is powerful enough to gain these features and defining these features manually will bring side effects."
],
[
"In the evaluation phase, we want a filter with a high recall rather than a high precision. Because a high recall means we won't remove so many truth spans. Moreover, we want a high filtration rate to obtain a few remaining samples.",
"As shown in figure FIGREF26, there is a positive correlation between threshold and filter rate, and a negative correlation between threshold and recall. We can pick an appropriate value like $10^{-5}$, to get a higher filtration rate relatively with less positive sample loss (high recall). We can filter 73.8% negative samples with a 99% recall. That makes the error almost negligible for a pipeline framework."
],
[
"We presented a new scientific named entity recognizer SEPT that modified the model by under-sampling to balance the positive and negative samples and reduce the search space.",
"In future work, we are investigating whether the SEPT model can be jointly trained with relation and other metadata from papers."
]
],
"section_name": [
"Introduction",
"Related Work ::: Span-based Models",
"Related Work ::: SciBert",
"Models",
"Models ::: Embedding layer",
"Models ::: Sampling layer",
"Models ::: Span extractor",
"Models ::: Classification layer",
"Models ::: Evaluation phase",
"Experiments",
"Experiments ::: Overall performance",
"Experiments ::: Different negative samples",
"Experiments ::: Ablation study: Span extractor",
"Experiments ::: Threshold of filter",
"Conclution"
]
} | {
"answers": [
{
"annotation_id": [
"1aede180c2016ae5ee3ce9f5e0e91b55e2c325cb",
"7bf4a969884eebe28ab31e8019699b6e9de5cce7"
],
"answer": [
{
"evidence": [
"In the sampling layer, we sample continuous sub-strings from the embedding layer, which is also called span. Because we know the exact label of each sample in the training phase, so we can train the model in a particular way. For those negative samples, which means each span does not belong to any entity class, we randomly sampling them rather than enumerate them all. This is a simple but effective way to improve both performance and efficiency. For those ground truth, we keep them all. In this way, we can obtain a balanced span set: $S = S_{neg} \\cup S_{pos} $. In which $S_{neg} = \\lbrace s^{\\prime }_1, s^{\\prime }_2, \\dots , s^{\\prime }_p\\rbrace $, $S_{pos} = \\lbrace s_1, s_2, \\dots , s_q\\rbrace $. Both $s$ and $s^{\\prime }$ is consist of $\\lbrace \\mathbf {e}_i ,\\dots ,\\mathbf {e}_j\\rbrace $, $i$ and $j$ are the start and end index of the span. $p$ is a hyper-parameter: the negative sample number. $q$ is the positive sample number. We further explore the effect of different $p$ in the experiment section.",
"Span extractor is responsible to extract a span representation from embeddings. In previous work BIBREF8, endpoint features, content attention, and span length embedding are concatenated to represent a span. We perform a simple max-pooling to extract span representation because those features are implicitly included in self-attention layers of transformers. Formally, each element in the span vector is:"
],
"extractive_spans": [
"randomly sampling them rather than enumerate them all",
"simple max-pooling to extract span representation because those features are implicitly included in self-attention layers of transformers"
],
"free_form_answer": "",
"highlighted_evidence": [
"Because we know the exact label of each sample in the training phase, so we can train the model in a particular way. For those negative samples, which means each span does not belong to any entity class, we randomly sampling them rather than enumerate them all.",
"We perform a simple max-pooling to extract span representation because those features are implicitly included in self-attention layers of transformers."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To balance the positive and negative samples and reduce the search space, we remove the pruner and modify the model by under-sampling. Furthermore, because there is a multi-head self-attention mechanism in transformers and they can capture interactions between tokens, we don't need more attention or LSTM network in span extractors. So we simplify the origin network architecture and extract span representation by a simple pooling layer. We call the final scientific named entity recognizer SEPT.",
"Experiments demonstrate that even simplified architecture achieves the same performance and SEPT achieves a new state of the art result compared to existing transformer-based systems."
],
"extractive_spans": [
" we simplify the origin network architecture and extract span representation by a simple pooling layer"
],
"free_form_answer": "",
"highlighted_evidence": [
"To balance the positive and negative samples and reduce the search space, we remove the pruner and modify the model by under-sampling. Furthermore, because there is a multi-head self-attention mechanism in transformers and they can capture interactions between tokens, we don't need more attention or LSTM network in span extractors. So we simplify the origin network architecture and extract span representation by a simple pooling layer. We call the final scientific named entity recognizer SEPT.\n\nExperiments demonstrate that even simplified architecture achieves the same performance and SEPT achieves a new state of the art result compared to existing transformer-based systems."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"2bc76f85100cf6c3b42f67afdec27e0dab838911",
"d9f63d2c8f21193e40c30d1d8c12cb16453317f5"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Overall performance of scientific named entity recognition task. We report micro F1 score following the convention of NER task. All scores are taken from the test set with the corresponding highest development score."
],
"extractive_spans": [],
"free_form_answer": "SEPT have improvement for Recall 3.9% and F1 1.3% over the best performing baseline (SCIIE(SciBERT))",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Overall performance of scientific named entity recognition task. We report micro F1 score following the convention of NER task. All scores are taken from the test set with the corresponding highest development score."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We discover that performance improvement is mainly supported by the pre-trained external resources, which is very helpful for such a small dataset. In ELMo model, SCIIE achieves almost 3.0% F1 higher than BiLSTM. But in SciBERT, the performance becomes similar, which is only a 0.5% gap.",
"SEPT still has an advantage comparing to the same transformer-based models, especially in the recall."
],
"extractive_spans": [
"In ELMo model, SCIIE achieves almost 3.0% F1 higher than BiLSTM",
" in SciBERT, the performance becomes similar, which is only a 0.5% gap"
],
"free_form_answer": "",
"highlighted_evidence": [
"In ELMo model, SCIIE achieves almost 3.0% F1 higher than BiLSTM. But in SciBERT, the performance becomes similar, which is only a 0.5% gap.\n\nSEPT still has an advantage comparing to the same transformer-based models, especially in the recall."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"zero",
"zero"
],
"paper_read": [
"no",
"no"
],
"question": [
"What simplification of the architecture is performed that resulted in same performance?",
"How much better is performance of SEPT compared to previous state-of-the-art?"
],
"question_id": [
"0b5a7ccf09810ff5a86162d502697d16b3536249",
"8f00859f74fc77832fa7d38c22f23f74ba13a07e"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: The overview architecture of SEPT. Firstly, feeding the entire abstract into the model and obtain BERT embeddings for each token (word piece). Then, in the training phase, rather than enumerate all spans: (a) For negative spans, we sample them randomly. (b) For ground truths, we keep them all. We use the maximum pooling to obtain the span representation. Finally, each span is classified into different types of entities by an MLP.",
"Table 1: Overall performance of scientific named entity recognition task. We report micro F1 score following the convention of NER task. All scores are taken from the test set with the corresponding highest development score.",
"Table 2: Ablation study on different part of span extractor.",
"Figure 3: Recall and Filteration rate on different threshold.",
"Figure 2: Different negative samples affect F1 scores in different datasets."
],
"file": [
"2-Figure1-1.png",
"3-Table1-1.png",
"4-Table2-1.png",
"4-Figure3-1.png",
"4-Figure2-1.png"
]
} | [
"How much better is performance of SEPT compared to previous state-of-the-art?"
] | [
[
"1911.03353-Experiments ::: Overall performance-1",
"1911.03353-Experiments ::: Overall performance-2",
"1911.03353-3-Table1-1.png"
]
] | [
"SEPT have improvement for Recall 3.9% and F1 1.3% over the best performing baseline (SCIIE(SciBERT))"
] | 357 |
1906.04236 | Identifying Visible Actions in Lifestyle Vlogs | We consider the task of identifying human actions visible in online videos. We focus on the widely spread genre of lifestyle vlogs, which consist of videos of people performing actions while verbally describing them. Our goal is to identify if actions mentioned in the speech description of a video are visually present. We construct a dataset with crowdsourced manual annotations of visible actions, and introduce a multimodal algorithm that leverages information derived from visual and linguistic clues to automatically infer which actions are visible in a video. We demonstrate that our multimodal algorithm outperforms algorithms based only on one modality at a time. | {
"paragraphs": [
[
"There has been a surge of recent interest in detecting human actions in videos. Work in this space has mainly focused on learning actions from articulated human pose BIBREF7 , BIBREF8 , BIBREF9 or mining spatial and temporal information from videos BIBREF10 , BIBREF11 . A number of resources have been produced, including Action Bank BIBREF12 , NTU RGB+D BIBREF13 , SBU Kinect Interaction BIBREF14 , and PKU-MMD BIBREF15 .",
"Most research on video action detection has gathered video information for a set of pre-defined actions BIBREF2 , BIBREF16 , BIBREF1 , an approach known as explicit data gathering BIBREF0 . For instance, given an action such as “open door,” a system would identify videos that include a visual depiction of this action. While this approach is able to detect a specific set of actions, whose choice may be guided by downstream applications, it achieves high precision at the cost of low recall. In many cases, the set of predefined actions is small (e.g., 203 activity classes in BIBREF2 ), and for some actions, the number of visual depictions is very small.",
"An alternative approach is to start with a set of videos, and identify all the actions present in these videos BIBREF17 , BIBREF18 . This approach has been referred to as implicit data gathering, and it typically leads to the identification of a larger number of actions, possibly with a small number of examples per action.",
"In this paper, we use an implicit data gathering approach to label human activities in videos. To the best of our knowledge, we are the first to explore video action recognition using both transcribed audio and video information. We focus on the popular genre of lifestyle vlogs, which consist of videos of people demonstrating routine actions while verbally describing them. We use these videos to develop methods to identify if actions are visually present.",
"The paper makes three main contributions. First, we introduce a novel dataset consisting of 1,268 short video clips paired with sets of actions mentioned in the video transcripts, as well as manual annotations of whether the actions are visible or not. The dataset includes a total of 14,769 actions, 4,340 of which are visible. Second, we propose a set of strong baselines to determine whether an action is visible or not. Third, we introduce a multimodal neural architecture that combines information drawn from visual and linguistic clues, and show that it improves over models that rely on one modality at a time.",
"By making progress towards automatic action recognition, in addition to contributing to video understanding, this work has a number of important and exciting applications, including sports analytics BIBREF19 , human-computer interaction BIBREF20 , and automatic analysis of surveillance video footage BIBREF21 .",
"The paper is organized as follows. We begin by discussing related work, then describe our data collection and annotation process. We next overview our experimental set-up and introduce a multimodal method for identifying visible actions in videos. Finally, we discuss our results and conclude with general directions for future work."
],
[
"There has been substantial work on action recognition in the computer vision community, focusing on creating datasets BIBREF22 , BIBREF23 , BIBREF5 , BIBREF2 or introducing new methods BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 . Table TABREF1 compares our dataset with previous action recognition datasets.",
"The largest datasets that have been compiled to date are based on YouTube videos BIBREF2 , BIBREF16 , BIBREF1 . These actions cover a broad range of classes including human-object interactions such as cooking BIBREF28 , BIBREF29 , BIBREF6 and playing tennis BIBREF23 , as well as human-human interactions such as shaking hands and hugging BIBREF4 .",
"Similar to our work, some of these previous datasets have considered everyday routine actions BIBREF2 , BIBREF16 , BIBREF1 . However, because these datasets rely on videos uploaded on YouTube, it has been observed they can be potentially biased towards unusual situations BIBREF1 . For example, searching for videos with the query “drinking tea\" results mainly in unusual videos such as dogs or birds drinking tea. This bias can be addressed by paying people to act out everyday scenarios BIBREF5 , but this can end up being very expensive. In our work, we address this bias by changing the approach used to search for videos. Instead of searching for actions in an explicit way, using queries such as “opening a fridge” or “making the bed,” we search for more general videos using queries such as “my morning routine.” This approach has been referred to as implicit (as opposed to explicit) data gathering, and was shown to result in a greater number of videos with more realistic action depictions BIBREF0 .",
"Although we use implicit data gathering as proposed in the past, unlike BIBREF0 and other human action recognition datasets, we search for routine videos that contain rich audio descriptions of the actions being performed, and we use this transcribed audio to extract actions. In these lifestyle vlogs, a vlogger typically performs an action while also describing it in detail. To the best of our knowledge, we are the first to build a video action recognition dataset using both transcribed audio and video information.",
"Another important difference between our methodology and previously proposed methods is that we extract action labels from the transcripts. By gathering data before annotating the actions, our action labels are post-defined (as in BIBREF0 ). This is unlike the majority of the existing human action datasets that use pre-defined labels BIBREF5 , BIBREF2 , BIBREF16 , BIBREF1 , BIBREF4 , BIBREF29 , BIBREF6 , BIBREF3 . Post-defined labels allow us to use a larger set of labels, expanding on the simplified label set used in earlier datasets. These action labels are more inline with everyday scenarios, where people often use different names for the same action. For example, when interacting with a robot, a user could refer to an action in a variety of ways; our dataset includes the actions “stick it into the freezer,” “freeze it,” “pop into the freezer,” and “put into the freezer,” variations, which would not be included in current human action recognition datasets.",
"In addition to human action recognition, our work relates to other multimodal tasks such as visual question answering BIBREF30 , BIBREF31 , video summarization BIBREF32 , BIBREF33 , and mapping text descriptions to video content BIBREF34 , BIBREF35 . Specifically, we use an architecture similar to BIBREF30 , where an LSTM BIBREF36 is used together with frame-level visual features such as Inception BIBREF37 , and sequence-level features such as C3D BIBREF27 . However, unlike BIBREF30 who encode the textual information (question-answers pairs) using an LSTM, we chose instead to encode our textual information (action descriptions and their contexts) using a large-scale language model ELMo BIBREF38 .",
"Similar to previous research on multimodal methods BIBREF39 , BIBREF40 , BIBREF41 , BIBREF30 , we also perform feature ablation to determine the role played by each modality in solving the task. Consistent with earlier work, we observe that the textual modality leads to the highest performance across individual modalities, and that the multimodal model combining textual and visual clues has the best overall performance."
],
[
"We collect a dataset of routine and do-it-yourself (DIY) videos from YouTube, consisting of people performing daily activities, such as making breakfast or cleaning the house. These videos also typically include a detailed verbal description of the actions being depicted. We choose to focus on these lifestyle vlogs because they are very popular, with tens of millions having been uploaded on YouTube; tab:nbresultssearchqueries shows the approximate number of videos available for several routine queries. Vlogs also capture a wide range of everyday activities; on average, we find thirty different visible human actions in five minutes of video.",
"By collecting routine videos, instead of searching explicitly for actions, we do implicit data gathering, a form of data collection introduced by BIBREF0 . Because everyday actions are common and not unusual, searching for them directly does not return many results. In contrast, by collecting routine videos, we find many everyday activities present in these videos."
],
[
"We build a data gathering pipeline (see Figure FIGREF5 ) to automatically extract and filter videos and their transcripts from YouTube. The input to the pipeline is manually selected YouTube channels. Ten channels are chosen for their rich routine videos, where the actor(s) describe their actions in great detail. From each channel, we manually select two different playlists, and from each playlist, we randomly download ten videos.",
"The following data processing steps are applied:",
"Transcript Filtering. Transcripts are automatically generated by YouTube. We filter out videos that do not contain any transcripts or that contain transcripts with an average (over the entire video) of less than 0.5 words per second. These videos do not contain detailed action descriptions so we cannot effectively leverage textual information.",
"Extract Candidate Actions from Transcript. Starting with the transcript, we generate a noisy list of potential actions. This is done using the Stanford parser BIBREF42 to split the transcript into sentences and identify verb phrases, augmented by a set of hand-crafted rules to eliminate some parsing errors. The resulting actions are noisy, containing phrases such as “found it helpful if you” and “created before up the top you.”",
"Segment Videos into Miniclips. The length of our collected videos varies from two minutes to twenty minutes. To ease the annotation process, we split each video into miniclips (short video sequences of maximum one minute). Miniclips are split to minimize the chance that the same action is shown across multiple miniclips. This is done automatically, based on the transcript timestamp of each action. Because YouTube transcripts have timing information, we are able to line up each action with its corresponding frames in the video. We sometimes notice a gap of several seconds between the time an action occurs in the transcript and the time it is shown in the video. To address this misalignment, we first map the actions to the miniclips using the time information from the transcript. We then expand the miniclip by 15 seconds before the first action and 15 seconds after the last action. This increases the chance that all actions will be captured in the miniclip.",
"Motion Filtering. We remove miniclips that do not contain much movement. We sample one out of every one hundred frames of the miniclip, and compute the 2D correlation coefficient between these sampled frames. If the median of the obtained values is greater than a certain threshold (we choose 0.8), we filter out the miniclip. Videos with low movement tend to show people sitting in front of the camera, describing their routine, but not acting out what they are saying. There can be many actions in the transcript, but if they are not depicted in the video, we cannot leverage the video information."
],
[
"Our goal is to identify which of the actions extracted from the transcripts are visually depicted in the videos. We create an annotation task on Amazon Mechanical Turk (AMT) to identify actions that are visible.",
"We give each AMT turker a HIT consisting of five miniclips with up to seven actions generated from each miniclip. The turker is asked to assign a label (visible in the video; not visible in the video; not an action) to each action. Because it is difficult to reliably separate not visible and not an action, we group these labels together.",
"Each miniclip is annotated by three different turkers. For the final annotation, we use the label assigned by the majority of turkers, i.e., visible or not visible / not an action.",
"To help detect spam, we identify and reject the turkers that assign the same label for every action in all five miniclips that they annotate. Additionally, each HIT contains a ground truth miniclip that has been pre-labeled by two reliable annotators. Each ground truth miniclip has more than four actions with labels that were agreed upon by both reliable annotators. We compute accuracy between a turker's answers and the ground truth annotations; if this accuracy is less than 20%, we reject the HIT as spam.",
"After spam removal, we compute the agreement score between the turkers using Fleiss kappa BIBREF43 . Over the entire data set, the Fleiss agreement score is 0.35, indicating fair agreement. On the ground truth data, the Fleiss kappa score is 0.46, indicating moderate agreement. This fair to moderate agreement indicates that the task is difficult, and there are cases where the visibility of the actions is hard to label. To illustrate, Figure FIGREF9 shows examples where the annotators had low agreement.",
"Table TABREF8 shows statistics for our final dataset of videos labeled with actions, and Figure 2 shows a sample video and transcript, with annotations.",
"For our experiments, we use the first eight YouTube channels from our dataset as train data, the ninth channel as validation data and the last channel as test data. Statistics for this split are shown in Table TABREF10 ."
],
[
"The goal of our dataset is to capture naturally-occurring, routine actions. Because the same action can be identified in different ways (e.g., “pop into the freezer”, “stick into the freezer\"), our dataset has a complex and diverse set of action labels. These labels demonstrate the language used by humans in everyday scenarios; because of that, we choose not to group our labels into a pre-defined set of actions. Table TABREF1 shows the number of unique verbs, which can be considered a lower bound for the number of unique actions in our dataset. On average, a single verb is used in seven action labels, demonstrating the richness of our dataset.",
"The action labels extracted from the transcript are highly dependent on the performance of the constituency parser. This can introduce noise or ill-defined action labels. Some acions contain extra words (e.g., “brush my teeth of course”), or lack words (e.g., “let me just”). Some of this noise is handled during the annotation process; for example, most actions that lack words are labeled as “not visible” or “not an action” because they are hard to interpret."
],
[
"Our goal is to determine if actions mentioned in the transcript of a video are visually represented in the video. We develop a multimodal model that leverages both visual and textual information, and we compare its performance with several single-modality baselines."
],
[
"Starting with our annotated dataset, which includes miniclips paired with transcripts and candidate actions drawn from the transcript, we extract several layers of information, which we then use to develop our multimodal model, as well as several baselines.",
"Action Embeddings. To encode each action, we use both GloVe BIBREF44 and ELMo BIBREF38 embeddings. When using GloVe embeddings, we represent the action as the average of all its individual word embeddings. We use embeddings with dimension 50. When using ELMo, we represent the action as a list of words which we feed into the default ELMo embedding layer. This performs a fixed mean pooling of all the contextualized word representations in each action.",
"Part-of-speech (POS). We use POS information for each action. Similar to word embeddings BIBREF44 , we train POS embeddings. We run the Stanford POS Tagger BIBREF45 on the transcripts and assign a POS to each word in an action. To obtain the POS embeddings, we train GloVe on the Google N-gram corpus using POS information from the five-grams. Finally, for each action, we average together the POS embeddings for all the words in the action to form a POS embedding vector.",
"Context Embeddings. Context can be helpful to determine if an action is visible or not. We use two types of context information, action-level and sentence-level. Action-level context takes into account the previous action and the next action; we denote it as Context INLINEFORM0 . These are each calculated by taking the average of the action's GloVe embeddings. Sentence-level context considers up to five words directly before the action and up to five words after the action (we do not consider words that are not in the same sentence as the action); we denote it as Context INLINEFORM1 . Again, we average the GLoVe embeddings of the preceding and following words to get two context vectors.",
"Concreteness. Our hypothesis is that the concreteness of the words in an action is related to its visibility in a video. We use a dataset of words with associated concreteness scores from BIBREF46 . Each word is labeled by a human annotator with a value between 1 (very abstract) and 5 (very concrete). The percentage of actions from our dataset that have at least one word in the concreteness dataset is 99.8%. For each action, we use the concreteness scores of the verbs and nouns in the action. We consider the concreteness score of an action to be the highest concreteness score of its corresponding verbs and nouns. tab:concr1 shows several sample actions along with their concreteness scores and their visiblity.",
"Video Representations. We use Yolo9000 BIBREF47 to identify objects present in each miniclip. We choose YOLO9000 for its high and diverse number of labels (9,000 unique labels). We sample the miniclips at a rate of 1 frame-per-second, and we use the Yolo9000 model pre-trained on COCO BIBREF48 and ImageNet BIBREF49 .",
"We represent a video both at the frame level and the sequence level. For frame-level video features, we use the Inception V3 model BIBREF37 pre-trained on ImageNet. We extract the output of the very last layer before the Flatten operation (the “bottleneck layer\"); we choose this layer because the following fully connected layers are too specialized for the original task they were trained for. We extract Inception V3 features from miniclips sampled at 1 frame-per-second.",
"For sequence-level video features, we use the C3D model BIBREF27 pre-trained on the Sports-1M dataset BIBREF23 . Similarly, we take the feature map of the sixth fully connected layer. Because C3D captures motion information, it is important that it is applied on consecutive frames. We take each frame used to extract the Inception features and extract C3D features from the 16 consecutive frames around it.",
"We use this approach because combining Inception V3 and C3D features has been shown to work well in other video-based models BIBREF30 , BIBREF25 , BIBREF1 ."
],
[
"Using the different data representations described in Section SECREF12 , we implement several baselines.",
"Concreteness. We label as visible all the actions that have a concreteness score above a certain threshold, and label as non-visible the remaining ones. We fine tune the threshold on our validation set; for fine tuning, we consider threshold values between 3 and 5. Table TABREF20 shows the results obtained for this baseline.",
"Feature-based Classifier. For our second set of baselines, we run a classifier on subsets of all of our features. We use an SVM BIBREF50 , and perform five-fold cross-validation across the train and validation sets, fine tuning the hyper-parameters (kernel type, C, gamma) using a grid search. We run experiments with various combinations of features: action GloVe embeddings; POS embeddings; embeddings of sentence-level context (Context INLINEFORM0 ) and action-level context (Context INLINEFORM1 ); concreteness score. The combinations that perform best during cross-validation on the combined train and validation sets are shown in Table TABREF20 .",
"LSTM and ELMo. We also consider an LSTM model BIBREF36 that takes as input the tokenized action sequences padded to the length of the longest action. These are passed through a trainable embedding layer, initialized with GloVe embeddings, before the LSTM. The LSTM output is then passed through a feed forward network of fully connected layers, each followed by a dropout layer BIBREF51 at a rate of 50%. We use a sigmoid activation function after the last hidden layer to get an output probability distribution. We fine tune the model on the validation set for the number of training epochs, batch size, size of LSTM, and number of fully-connected layers.",
"We build a similar model that embeds actions using ELMo (composed of 2 bi-LSTMs). We pass these embeddings through the same feed forward network and sigmoid activation function. The results for both the LSTM and ELMo models are shown in Table TABREF20 .",
"Yolo Object Detection. Our final baseline leverages video information from the YOLO9000 object detector. This baseline builds on the intuition that many visible actions involve visible objects. We thus label an action as visible if it contains at least one noun similar to objects detected in its corresponding miniclip. To measure similarity, we compute both the Wu-Palmer (WUP) path-length-based semantic similarity BIBREF52 and the cosine similarity on the GloVe word embeddings. For every action in a miniclip, each noun is compared to all detected objects and assigned a similarity score. As in our concreteness baseline, the action is assigned the highest score of its corresponding nouns. We use the validation data to fine tune the similarity threshold that decides if an action is visible or not. The results are reported in Table TABREF20 . Examples of actions that contain one or more words similar to detected objects by Yolo can be seen in Figure FIGREF18 ."
],
[
"Each of our baselines considers only a single modality, either text or video. While each of these modalities contributes important information, neither of them provides a full picture. The visual modality is inherently necessary, because it shows the visibility of an action. For example, the same spoken action can be labeled as either visible or non-visible, depending on its visual context; we find 162 unique actions that are labeled as both visible and not visible, depending on the miniclip. This ambiguity has to be captured using video information. However, the textual modality provides important clues that are often missing in the video. The words of the person talking fill in details that many times cannot be inferred from the video. For our full model, we combine both textual and visual information to leverage both modalities.",
"We propose a multimodal neural architecture that combines encoders for the video and text modalities, as well as additional information (e.g., concreteness). Figure FIGREF19 shows our model architecture. The model takes as input a (miniclip INLINEFORM0 , action INLINEFORM1 ) pair and outputs the probability that action INLINEFORM2 is visible in miniclip INLINEFORM3 . We use C3D and Inception V3 video features extracted for each frame, as described in Section SECREF12 . These features are concatenated and run through an LSTM.",
"To represent the actions, we use ELMo embeddings (see Section SECREF12 ). These features are concatenated with the output from the video encoding LSTM, and run through a three-layer feed forward network with dropout. Finally, the result of the last layer is passed through a sigmoid function, which produces a probability distribution indicating whether the action is visible in the miniclip. We use an RMSprop optimizer BIBREF53 and fine tune the number of epochs, batch size and size of the LSTM and fully-connected layers."
],
[
"Table TABREF20 shows the results obtained using the multimodal model for different sets of input features. The model that uses all the input features available leads to the best results, improving significantly over the text-only and video-only methods.",
"We find that using only Yolo to find visible objects does not provide sufficient information to solve this task. This is due to both the low number of objects that Yolo is able to detect, and the fact that not all actions involve objects. For example, visible actions from our datasets such as “get up\", “cut them in half\", “getting ready\", and “chopped up\" cannot be correctly labeled using only object detection. Consequently, we need to use additional video information such as Inception and C3D information.",
"In general, we find that the text information plays an important role. ELMo embeddings lead to better results than LSTM embeddings, with a relative error rate reduction of 6.8%. This is not surprising given that ELMo uses two bidirectional LSTMs and has improved the state-of-the-art in many NLP tasks BIBREF38 . Consequently, we use ELMo in our multimodal model.",
"Moreover, the addition of extra information improves the results for both modalities. Specifically, the addition of context is found to bring improvements. The use of POS is also found to be generally helpful."
],
[
"In this paper, we address the task of identifying human actions visible in online videos. We focus on the genre of lifestyle vlogs, and construct a new dataset consisting of 1,268 miniclips and 14,769 actions out of which 4,340 have been labeled as visible. We describe and evaluate several text-based and video-based baselines, and introduce a multimodal neural model that leverages visual and linguistic information as well as additional information available in the input data. We show that the multimodal model outperforms the use of one modality at a time.",
"A distinctive aspect of this work is that we label actions in videos based on the language that accompanies the video. This has the potential to create a large repository of visual depictions of actions, with minimal human intervention, covering a wide spectrum of actions that typically occur in everyday life.",
"In future work, we plan to explore additional representations and architectures to improve the accuracy of our model, and to identify finer-grained alignments between visual actions and their verbal descriptions. The dataset and the code introduced in this paper are publicly available at http://lit.eecs.umich.edu/downloads.html."
],
[
"This material is based in part upon work supported by the Michigan Institute for Data Science, by the National Science Foundation (grant #1815291), by the John Templeton Foundation (grant #61156), and by DARPA (grant #HR001117S0026-AIDA-FP-045). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the Michigan Institute for Data Science, the National Science Foundation, the John Templeton Foundation, or DARPA."
]
],
"section_name": [
"Introduction",
"Related Work",
"Data Collection and Annotation",
"Data Gathering",
"Visual Action Annotation",
"Discussion",
"Identifying Visible Actions in Videos",
"Data Processing and Representations",
"Baselines",
"Multimodal Model",
"Evaluation and Results",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"de99ba1f5dd95c7f599c3bd2102b643f68ed330f",
"f849ff74adc4aeec5141e394ca91cdcc43119879"
],
"answer": [
{
"evidence": [
"The paper makes three main contributions. First, we introduce a novel dataset consisting of 1,268 short video clips paired with sets of actions mentioned in the video transcripts, as well as manual annotations of whether the actions are visible or not. The dataset includes a total of 14,769 actions, 4,340 of which are visible. Second, we propose a set of strong baselines to determine whether an action is visible or not. Third, we introduce a multimodal neural architecture that combines information drawn from visual and linguistic clues, and show that it improves over models that rely on one modality at a time."
],
"extractive_spans": [
"14,769"
],
"free_form_answer": "",
"highlighted_evidence": [
"The dataset includes a total of 14,769 actions, 4,340 of which are visible. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The paper makes three main contributions. First, we introduce a novel dataset consisting of 1,268 short video clips paired with sets of actions mentioned in the video transcripts, as well as manual annotations of whether the actions are visible or not. The dataset includes a total of 14,769 actions, 4,340 of which are visible. Second, we propose a set of strong baselines to determine whether an action is visible or not. Third, we introduce a multimodal neural architecture that combines information drawn from visual and linguistic clues, and show that it improves over models that rely on one modality at a time."
],
"extractive_spans": [
"14,769 actions"
],
"free_form_answer": "",
"highlighted_evidence": [
"The dataset includes a total of 14,769 actions, 4,340 of which are visible."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"4959afcd0c523ec138631ad0ab0bcfac1feebc35",
"f00307e529e8f96e8c801296bb606fc6aa50b13d"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 3: Data statistics.",
"Table TABREF8 shows statistics for our final dataset of videos labeled with actions, and Figure 2 shows a sample video and transcript, with annotations."
],
"extractive_spans": [],
"free_form_answer": "177",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: Data statistics.",
"Table TABREF8 shows statistics for our final dataset of videos labeled with actions, and Figure 2 shows a sample video and transcript, with annotations."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The paper makes three main contributions. First, we introduce a novel dataset consisting of 1,268 short video clips paired with sets of actions mentioned in the video transcripts, as well as manual annotations of whether the actions are visible or not. The dataset includes a total of 14,769 actions, 4,340 of which are visible. Second, we propose a set of strong baselines to determine whether an action is visible or not. Third, we introduce a multimodal neural architecture that combines information drawn from visual and linguistic clues, and show that it improves over models that rely on one modality at a time."
],
"extractive_spans": [
"1,268"
],
"free_form_answer": "",
"highlighted_evidence": [
"First, we introduce a novel dataset consisting of 1,268 short video clips paired with sets of actions mentioned in the video transcripts, as well as manual annotations of whether the actions are visible or not."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"52f1eee4c80aae87da6a1e82cab53983cb2da0ea",
"82d9e8589e3f7ac7cc6b1c4bdbea555ae9b16a16"
],
"answer": [
{
"evidence": [
"Concreteness. We label as visible all the actions that have a concreteness score above a certain threshold, and label as non-visible the remaining ones. We fine tune the threshold on our validation set; for fine tuning, we consider threshold values between 3 and 5. Table TABREF20 shows the results obtained for this baseline.",
"Feature-based Classifier. For our second set of baselines, we run a classifier on subsets of all of our features. We use an SVM BIBREF50 , and perform five-fold cross-validation across the train and validation sets, fine tuning the hyper-parameters (kernel type, C, gamma) using a grid search. We run experiments with various combinations of features: action GloVe embeddings; POS embeddings; embeddings of sentence-level context (Context INLINEFORM0 ) and action-level context (Context INLINEFORM1 ); concreteness score. The combinations that perform best during cross-validation on the combined train and validation sets are shown in Table TABREF20 .",
"LSTM and ELMo. We also consider an LSTM model BIBREF36 that takes as input the tokenized action sequences padded to the length of the longest action. These are passed through a trainable embedding layer, initialized with GloVe embeddings, before the LSTM. The LSTM output is then passed through a feed forward network of fully connected layers, each followed by a dropout layer BIBREF51 at a rate of 50%. We use a sigmoid activation function after the last hidden layer to get an output probability distribution. We fine tune the model on the validation set for the number of training epochs, batch size, size of LSTM, and number of fully-connected layers.",
"We build a similar model that embeds actions using ELMo (composed of 2 bi-LSTMs). We pass these embeddings through the same feed forward network and sigmoid activation function. The results for both the LSTM and ELMo models are shown in Table TABREF20 .",
"Yolo Object Detection. Our final baseline leverages video information from the YOLO9000 object detector. This baseline builds on the intuition that many visible actions involve visible objects. We thus label an action as visible if it contains at least one noun similar to objects detected in its corresponding miniclip. To measure similarity, we compute both the Wu-Palmer (WUP) path-length-based semantic similarity BIBREF52 and the cosine similarity on the GloVe word embeddings. For every action in a miniclip, each noun is compared to all detected objects and assigned a similarity score. As in our concreteness baseline, the action is assigned the highest score of its corresponding nouns. We use the validation data to fine tune the similarity threshold that decides if an action is visible or not. The results are reported in Table TABREF20 . Examples of actions that contain one or more words similar to detected objects by Yolo can be seen in Figure FIGREF18 ."
],
"extractive_spans": [
"Concreteness",
"Feature-based Classifier",
"LSTM and ELMo",
"Yolo Object Detection"
],
"free_form_answer": "",
"highlighted_evidence": [
"Concreteness. We label as visible all the actions that have a concreteness score above a certain threshold, and label as non-visible the remaining ones.",
"Feature-based Classifier. For our second set of baselines, we run a classifier on subsets of all of our features. We use an SVM BIBREF50 , and perform five-fold cross-validation across the train and validation sets, fine tuning the hyper-parameters (kernel type, C, gamma) using a grid search.",
"LSTM and ELMo. We also consider an LSTM model BIBREF36 that takes as input the tokenized action sequences padded to the length of the longest action.",
"We build a similar model that embeds actions using ELMo (composed of 2 bi-LSTMs).",
"Yolo Object Detection. Our final baseline leverages video information from the YOLO9000 object detector."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Feature-based Classifier. For our second set of baselines, we run a classifier on subsets of all of our features. We use an SVM BIBREF50 , and perform five-fold cross-validation across the train and validation sets, fine tuning the hyper-parameters (kernel type, C, gamma) using a grid search. We run experiments with various combinations of features: action GloVe embeddings; POS embeddings; embeddings of sentence-level context (Context INLINEFORM0 ) and action-level context (Context INLINEFORM1 ); concreteness score. The combinations that perform best during cross-validation on the combined train and validation sets are shown in Table TABREF20 .",
"LSTM and ELMo. We also consider an LSTM model BIBREF36 that takes as input the tokenized action sequences padded to the length of the longest action. These are passed through a trainable embedding layer, initialized with GloVe embeddings, before the LSTM. The LSTM output is then passed through a feed forward network of fully connected layers, each followed by a dropout layer BIBREF51 at a rate of 50%. We use a sigmoid activation function after the last hidden layer to get an output probability distribution. We fine tune the model on the validation set for the number of training epochs, batch size, size of LSTM, and number of fully-connected layers.",
"We build a similar model that embeds actions using ELMo (composed of 2 bi-LSTMs). We pass these embeddings through the same feed forward network and sigmoid activation function. The results for both the LSTM and ELMo models are shown in Table TABREF20 .",
"Yolo Object Detection. Our final baseline leverages video information from the YOLO9000 object detector. This baseline builds on the intuition that many visible actions involve visible objects. We thus label an action as visible if it contains at least one noun similar to objects detected in its corresponding miniclip. To measure similarity, we compute both the Wu-Palmer (WUP) path-length-based semantic similarity BIBREF52 and the cosine similarity on the GloVe word embeddings. For every action in a miniclip, each noun is compared to all detected objects and assigned a similarity score. As in our concreteness baseline, the action is assigned the highest score of its corresponding nouns. We use the validation data to fine tune the similarity threshold that decides if an action is visible or not. The results are reported in Table TABREF20 . Examples of actions that contain one or more words similar to detected objects by Yolo can be seen in Figure FIGREF18 ."
],
"extractive_spans": [
"SVM",
"LSTM",
"ELMo",
"Yolo Object Detection"
],
"free_form_answer": "",
"highlighted_evidence": [
"Feature-based Classifier. For our second set of baselines, we run a classifier on subsets of all of our features. We use an SVM BIBREF50 , and perform five-fold cross-validation across the train and validation sets, fine tuning the hyper-parameters (kernel type, C, gamma) using a grid search.",
"LSTM and ELMo. We also consider an LSTM model BIBREF36 that takes as input the tokenized action sequences padded to the length of the longest action. ",
"We build a similar model that embeds actions using ELMo (composed of 2 bi-LSTMs).",
"Yolo Object Detection. Our final baseline leverages video information from the YOLO9000 object detector. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"1b2fef17ccd67cdba9ea2b4eab76630282349a18",
"87f974c56c28ae6c1decbf67818e205db5c3fbe0"
],
"answer": [
{
"evidence": [
"Our goal is to identify which of the actions extracted from the transcripts are visually depicted in the videos. We create an annotation task on Amazon Mechanical Turk (AMT) to identify actions that are visible."
],
"extractive_spans": [
"Amazon Mechanical Turk (AMT)"
],
"free_form_answer": "",
"highlighted_evidence": [
"We create an annotation task on Amazon Mechanical Turk (AMT) to identify actions that are visible."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Our goal is to identify which of the actions extracted from the transcripts are visually depicted in the videos. We create an annotation task on Amazon Mechanical Turk (AMT) to identify actions that are visible."
],
"extractive_spans": [
"Amazon Mechanical Turk "
],
"free_form_answer": "",
"highlighted_evidence": [
"We create an annotation task on Amazon Mechanical Turk (AMT) to identify actions that are visible."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"938136d01befee33e029ee7c4d34fc24a32990ee",
"a725c3f742a9faf694224767cee705f772b35667"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"70aab55ba0d0b060ec37007a2082dea55f4dc259",
"824d240d9802b8a35706784c690ee47881f08512"
],
"answer": [
{
"evidence": [
"Segment Videos into Miniclips. The length of our collected videos varies from two minutes to twenty minutes. To ease the annotation process, we split each video into miniclips (short video sequences of maximum one minute). Miniclips are split to minimize the chance that the same action is shown across multiple miniclips. This is done automatically, based on the transcript timestamp of each action. Because YouTube transcripts have timing information, we are able to line up each action with its corresponding frames in the video. We sometimes notice a gap of several seconds between the time an action occurs in the transcript and the time it is shown in the video. To address this misalignment, we first map the actions to the miniclips using the time information from the transcript. We then expand the miniclip by 15 seconds before the first action and 15 seconds after the last action. This increases the chance that all actions will be captured in the miniclip."
],
"extractive_spans": [
"length of our collected videos varies from two minutes to twenty minutes"
],
"free_form_answer": "",
"highlighted_evidence": [
"The length of our collected videos varies from two minutes to twenty minutes."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Table TABREF8 shows statistics for our final dataset of videos labeled with actions, and Figure 2 shows a sample video and transcript, with annotations.",
"FLOAT SELECTED: Table 3: Data statistics."
],
"extractive_spans": [],
"free_form_answer": "On average videos are 16.36 minutes long",
"highlighted_evidence": [
"Table TABREF8 shows statistics for our final dataset of videos labeled with actions, and Figure 2 shows a sample video and transcript, with annotations.",
"FLOAT SELECTED: Table 3: Data statistics."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"",
"",
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
"",
"",
""
],
"question": [
"How many actions are present in the dataset?",
"How many videos did they use?",
"What unimodal algorithms do they compare with?",
"What platform was used for crowdsourcing?",
"What language are the videos in?",
"How long are the videos?"
],
"question_id": [
"bda21bfb2dd74085cbc355c70dab5984ef41dba7",
"c2497552cf26671f6634b02814e63bb94ec7b273",
"441a2b80e82266c2cc2b306c0069f2b564813fed",
"e462efb58c71f186cd6b315a2d861cbb7171f65b",
"84f9952814d6995bc99bbb3abb372d90ef2f28b4",
"5364fe5f256f1263a939e0a199c3708727ad856a"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
"",
""
]
} | {
"caption": [
"Table 1: Comparison between our dataset and other video human action recognition datasets. # Actions show either the number of action classes in that dataset (for the other datasets), or the number of unique visible actions in that dataset (ours); # Verbs shows the number of unique verbs in the actions; Implicit is the type of data gathering method (versus explicit); Label types are either post-defined (first gathering data and then annotating actions): X, or pre-defined (annotating actions before gathering data): x.",
"Table 2: Approximate number of videos found when searching for routine and do-it-yourself queries on YouTube.",
"Figure 1: Overview of the data gathering pipeline.",
"Figure 2: Sample video frames, transcript, and annotations.",
"Figure 3: An example of low agreement. The table shows actions and annotations from workers #1, #2, and #3, as well as the ground truth (GT). Labels are: visible - X, not visible - x. The bottom row shows screenshots from the video. The Fleiss kappa agreement score is -0.2.",
"Table 3: Data statistics.",
"Table 4: Statistics for the experimental data split.",
"Table 5: Visible actions with high concreteness scores (Con.), and non-visible actions with low concreteness scores. The noun or verb with the highest concreteness score is in bold.",
"Figure 4: Example of frames, corresponding actions, object detected with YOLO, and the object - word pair with the highest WUP similarity score in each frame.",
"Figure 5: Overview of the multimodal neural architecture. + represents concatenation.",
"Table 6: Results from baselines and our best multimodal method on validation and test data. ActionG indicates action representation using GloVe embedding, and ActionE indicates action representation using ELMo embedding. ContextS indicates sentence-level context, and ContextA indicates action-level context."
],
"file": [
"2-Table1-1.png",
"3-Table2-1.png",
"4-Figure1-1.png",
"5-Figure2-1.png",
"5-Figure3-1.png",
"5-Table3-1.png",
"5-Table4-1.png",
"6-Table5-1.png",
"7-Figure4-1.png",
"8-Figure5-1.png",
"9-Table6-1.png"
]
} | [
"How many videos did they use?",
"How long are the videos?"
] | [
[
"1906.04236-5-Table3-1.png",
"1906.04236-Introduction-4",
"1906.04236-Visual Action Annotation-5"
],
[
"1906.04236-5-Table3-1.png",
"1906.04236-Visual Action Annotation-5",
"1906.04236-Data Gathering-4"
]
] | [
"177",
"On average videos are 16.36 minutes long"
] | 358 |
1806.07042 | Response Generation by Context-aware Prototype Editing | Open domain response generation has achieved remarkable progress in recent years, but sometimes yields short and uninformative responses. We propose a new paradigm for response generation, that is response generation by editing, which significantly increases the diversity and informativeness of the generation results. Our assumption is that a plausible response can be generated by slightly revising an existing response prototype. The prototype is retrieved from a pre-defined index and provides a good start-point for generation because it is grammatical and informative. We design a response editing model, where an edit vector is formed by considering differences between a prototype context and a current context, and then the edit vector is fed to a decoder to revise the prototype response for the current context. Experiment results on a large scale dataset demonstrate that the response editing model outperforms generative and retrieval-based models on various aspects. | {
"paragraphs": [
[
"In recent years, non-task oriented chatbots focused on responding to humans intelligently on a variety of topics, have drawn much attention from both academia and industry. Existing approaches can be categorized into generation-based methods BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 which generate a response from scratch, and retrieval-based methods BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 which select a response from an existing corpus. Since retrieval-based approaches are severely constrained by a pre-defined index, generative approaches become more and more popular in recent years. Traditional generation-based approaches, however, do not easily generate long, diverse and informative responses, which is referred to as “safe response\" problem BIBREF10 .",
"To address this issue, we propose a new paradigm, prototype-then-edit, for response generation. Our motivations include: 1) human-written responses, termed as “prototypes response\", are informative, diverse and grammatical which do not suffer from short and generic issues. Hence, generating responses by editing such prototypes is able to alleviate the “safe response\" problem. 2) Some retrieved prototypes are not relevant to the current context, or suffer from a privacy issue. The post-editing process can partially solve these two problems. 3) Lexical differences between contexts provide an important signal for response editing. If a word appears in the current context but not in the prototype context, the word is likely to be inserted into the prototype response in the editing process.",
"Inspired by this idea, we formulate the response generation process as follows. Given a conversational context INLINEFORM0 , we first retrieve a similar context INLINEFORM1 and its associated response INLINEFORM2 from a pre-defined index, which are called prototype context and prototype response respectively. Then, we calculate an edit vector by concatenating the weighted average results of insertion word embeddings (words in prototype context but not in current context) and deletion word embeddings (words in current context but not in prototype context). After that, we revise the prototype response conditioning on the edit vector. We further illustrate how our idea works with an example in Table TABREF1 . It is obvious that the major difference between INLINEFORM3 and INLINEFORM4 is what the speaker eats, so the phrase “raw green vegetables\" in INLINEFORM5 should be replaced by “desserts\" in order to adapt to the current context INLINEFORM6 . We hope that the decoder language model could remember the collocation of “desserts\" and “bad for health\", so as to replace “beneficial\" with “bad\" in the revised response. The new paradigm does not only inherits the fluency and informativeness advantages from retrieval results, but also enjoys the flexibility of generation results. Hence, our edit-based model is better than previous retrieval-based and generation-based models. The edit-based model can solve the “safe response\" problem of generative models by leveraging existing responses, and is more flexible than retrieval-based models, because it does not highly depend on the index and is able to edit a response to fit current context.",
"Prior work BIBREF11 has figured out how to edit prototype in an unconditional setting, but it cannot be applied to the response generation directly. In this paper, we propose a prototype editing method in a conditional setting. Our idea is that differences between responses strongly correlates with differences in their contexts (i.e. if a word in prototype context is changed, its related words in the response are probably modified in the editing.). We realize this idea by designing a context-aware editing model that is built upon a encoder-decoder model augmented with an editing vector. The edit vector is computed by the weighted average of insertion word embeddings and deletion word embeddings. Larger weights mean that the editing model should pay more attention on corresponding words in revision. For instance, in Table TABREF1 , we wish words like “dessert\", “Tofu\" and “vegetables\" get larger weights than words like “and\" and “ at\". The encoder learns the prototype representation with a gated recurrent unit (GRU), and feeds the representation to a decoder together with the edit vector. The decoder is a GRU language model, that regards the concatenation of last step word embedding and the edit vector as inputs, and predicts the next word with an attention mechanism.",
"Our experiments are conducted on a large scale Chinese conversation corpus comprised of 20 million context-response pairs. We compare our model with generative models and retrieval models in terms of fluency, relevance, diversity and originality. The experiments show that our method outperforms traditional generative models on relevance, diversity and originality. We further find that the revised response achieves better relevance compared to its prototype and other retrieval results, demonstrating that the editing process does not only promote response originality but also improve the relevance of retrieval results.",
"Our contributions are listed as follows: 1) this paper proposes a new paradigm, prototype-then-edit, for response generation; 2) we elaborate a simple but effective context-aware editing model for response generation; 3) we empirically verify the effectiveness of our method in terms of relevance, diversity, fluency and originality."
],
[
"Research on chatbots goes back to the 1960s when ELIZA was designed BIBREF12 with a huge amount of hand-crafted templates and rules. Recently, researchers have paid more and more attention on data-driven approaches BIBREF13 , BIBREF14 due to their superior scalability. Most of these methods are classified as retrieval-based methods BIBREF14 , BIBREF7 and generation methods BIBREF15 , BIBREF16 , BIBREF17 . The former one aims to select a relevant response using a matching model, while the latter one generates a response with natural language generative models.",
"Prior works on retrieval-based methods mainly focus on the matching model architecture for single turn conversation BIBREF5 and multi-turn conversation BIBREF6 , BIBREF8 , BIBREF9 . For the studies of generative methods, a huge amount of work aims to mitigate the “safe response\" issue from different perspectives. Most of work build models under a sequence to sequence framework BIBREF18 , and introduce other elements, such as latent variables BIBREF4 , topic information BIBREF19 , and dynamic vocabulary BIBREF20 to increase response diversity. Furthermore, the reranking technique BIBREF10 , reinforcement learning technique BIBREF15 , and adversarial learning technique BIBREF16 , BIBREF21 have also been applied to response generation. Apart from work on “safe response\", there is a growing body of literature on style transfer BIBREF22 , BIBREF23 and emotional response generation BIBREF17 . In general, most of previous work generates a response from scratch either left-to-right or conditioned on a latent vector, whereas our approach aims to generate a response by editing a prototype. Prior works have attempted to utilize prototype responses to guide the generation process BIBREF24 , BIBREF25 , in which prototype responses are encoded into vectors and feed to a decoder along with a context representation. Our work differs from previous ones on two aspects. One is they do not consider prototype context in the generation process, while our model utilizes context differences to guide editing process. The other is that we regard prototype responses as a source language, while their works formulate it as a multi-source seq2seq task, in which the current context and prototype responses are all source languages in the generation process.",
"Recently, some researches have explored natural language generation by editing BIBREF11 , BIBREF26 . A typical approach follows a writing-then-edit paradigm, that utilizes one decoder to generate a draft from scratch and uses another decoder to revise the draft BIBREF27 . The other approach follows a retrieval-then-edit paradigm, that uses a Seq2Seq model to edit a prototype retrieved from a corpus BIBREF11 , BIBREF28 , BIBREF29 . As far as we known, we are the first to leverage context lexical differences to edit prototypes."
],
[
"Before introducing our approach, we first briefly describe state-of-the-art natural language editing method BIBREF11 . Given a sentence pair INLINEFORM0 , our goal is to obtain sentence INLINEFORM1 by editing the prototype INLINEFORM2 . The general framework is built upon a Seq2Seq model with an attention mechanism, which takes INLINEFORM3 and INLINEFORM4 as source sequence and target sequence respectively. The main difference is that the generative probability of a vanilla Seq2Seq model is INLINEFORM5 whereas the probability of the edit model is INLINEFORM6 where INLINEFORM7 is an edit vector sampled from a pre-defined distribution like variational auto-encoder. In the training phase, the parameter of the distribution is conditional on the context differences. We first define INLINEFORM8 as an insertion word set, where INLINEFORM9 is a word added to the prototype, and INLINEFORM10 is a deletion word set, where INLINEFORM11 is a word deleted from the prototype. Subsequently, we compute an insertion vector INLINEFORM12 and a deletion vector INLINEFORM13 by a summation over word embeddings in two corresponding sets, where INLINEFORM14 transfers a word to its embedding. Then, the edit vector INLINEFORM15 is sampled from a distribution whose parameters are governed by the concatenation of INLINEFORM16 and INLINEFORM17 . Finally, the edit vector and output of the encoder are fed to the decoder to generate INLINEFORM18 .",
"For response generation, which is a conditional setting of text editing, an interesting question raised, that is how to generate the edit by considering contexts. We will introduce our motivation and model in details in the next section."
],
[
"Suppose that we have a data set INLINEFORM0 . INLINEFORM1 , INLINEFORM2 comprises a context INLINEFORM3 and its response INLINEFORM4 , where INLINEFORM5 is the INLINEFORM6 -th word of the context INLINEFORM7 and INLINEFORM8 is the INLINEFORM9 -th word of the response INLINEFORM10 . It should be noted that INLINEFORM11 can be either a single turn input or a multiple turn input. As the first step, we assume INLINEFORM12 is a single turn input in this work, and leave the verification of the same technology for multi-turn response generation to future work. Our full model is shown in Figure FIGREF3 , consisting of a prototype selector INLINEFORM13 and a context-aware neural editor INLINEFORM14 . Given a new conversational context INLINEFORM15 , we first use INLINEFORM16 to retrieve a context-response pair INLINEFORM17 . Then, the editor INLINEFORM18 calculates an edit vector INLINEFORM19 to encode the information about the differences between INLINEFORM20 and INLINEFORM21 . Finally, we generate a response according to the probability of INLINEFORM22 . In the following, we will elaborate how to design the selector INLINEFORM23 and the editor INLINEFORM24 ."
],
[
"A good prototype selector INLINEFORM0 plays an important role in the prototype-then-edit paradigm. We use different strategies to select prototypes for training and testing. In testing, as we described above, we retrieve a context-response pair INLINEFORM1 from a pre-defined index for context INLINEFORM2 according to the similarity of INLINEFORM3 and INLINEFORM4 . Here, we employ Lucene to construct the index and use its inline algorithm to compute the context similarity.",
"Now we turn to the training phase. INLINEFORM0 , INLINEFORM1 , our goal is to maximize the generative probability of INLINEFORM2 by selecting a prototype INLINEFORM3 . As we already know the ground-truth response INLINEFORM4 , we first retrieve thirty prototypes INLINEFORM5 based on the response similarity instead of context similarity, and then reserve prototypes whose Jaccard similarity to INLINEFORM6 are in the range of INLINEFORM7 . Here, we use Lucene to index all responses, and retrieve the top 20 similar responses along with their corresponding contexts for INLINEFORM8 . The Jaccard similarity measures text similarity from a bag-of-word view, that is formulated as DISPLAYFORM0 ",
"where INLINEFORM0 and INLINEFORM1 are two bags of words and INLINEFORM2 denotes the number of elements in a collection. Each context-response pair is processed with the above procedure, so we obtain enormous quadruples INLINEFORM3 after this step. The motivation behind filtering out instances with Jaccard similarity INLINEFORM4 is that a neural editor model performs well only if a prototype is lexically similar BIBREF11 to its ground-truth. Besides, we hope the editor does not copy the prototype so we discard instances where the prototype and groundtruth are nearly identical (i.e. Jaccard similarity INLINEFORM5 ). We do not use context similarity to construct parallel data for training, because similar contexts may correspond to totally different responses, so-called one-to-many phenomenon BIBREF10 in dialogue generation, that impedes editor training due to the large lexicon gap. According to our preliminary experiments, the editor always generates non-sense responses if training data is constructed by context similarity."
],
[
"A context-aware neural editor aims to revise a prototype to adapt current context. Formally, given a quadruple INLINEFORM0 (we omit subscripts for simplification), a context-aware neural editor first forms an edit vector INLINEFORM1 using INLINEFORM2 and INLINEFORM3 , and then updates parameters of the generative model by maximizing the probability of INLINEFORM4 . For testing, we directly generate a response after getting the editor vector. In the following, we will introduce how to obtain the edit vector and learn the generative model in details.",
"For an unconditional sentence editing setting BIBREF11 , an edit vector is randomly sampled from a distribution because how to edit the sentence is not constrained. In contrast, we should take both of INLINEFORM0 and INLINEFORM1 into consideration when we revise a prototype response INLINEFORM2 . Formally, INLINEFORM3 is firstly transformed to hidden vectors INLINEFORM4 through a biGRU parameterized as Equation ( EQREF10 ). DISPLAYFORM0 ",
"where INLINEFORM0 is the INLINEFORM1 -th word of INLINEFORM2 .",
"Then we compute a context diff-vector INLINEFORM0 by an attention mechanism defined as follows DISPLAYFORM0 ",
"where INLINEFORM0 is a concatenation operation, INLINEFORM1 is a insertion word set, and INLINEFORM2 is a deletion word set. INLINEFORM3 explicitly encodes insertion words and deletion words from INLINEFORM4 to INLINEFORM5 . INLINEFORM6 is the weight of a insertion word INLINEFORM7 , that is computed by DISPLAYFORM0 ",
"where INLINEFORM0 and INLINEFORM1 are parameters, and INLINEFORM2 is the last hidden state of the encoder. INLINEFORM3 is obtained with a similar process: DISPLAYFORM0 ",
"We assume that different words influence the editing process unequally, so we weighted average insertion words and deletion words to form an edit in Equation EQREF11 . Table TABREF1 explains our motivation as well, that is “desserts\" is much more important than “the\" in the editing process. Then we compute the edit vector INLINEFORM0 by following transformation DISPLAYFORM0 ",
"where INLINEFORM0 and INLINEFORM1 are two parameters. Equation EQREF14 can be regarded as a mapping from context differences to response differences.",
"It should be noted that there are several alternative approaches to compute INLINEFORM0 and INLINEFORM1 for this task, such as applying memory networks, latent variables, and other complex network architectures. Here, we just use a simple method, but it yields interesting results on this task. We will further illustrate our experiment findings in the next section.",
"We build our prototype editing model upon a Seq2Seq with an attention mechanism model, which integrates the edit vector into the decoder.",
"The decoder takes INLINEFORM0 as an input and generates a response by a GRU language model with attention. The hidden state of the decoder is acquired by DISPLAYFORM0 ",
"where the input of INLINEFORM0 -th time step is the last step hidden state and the concatenation of the INLINEFORM1 -th word embedding and the edit vector obtained in Equation EQREF14 . Then we compute a context vector INLINEFORM2 , which is a linear combination of INLINEFORM3 : DISPLAYFORM0 ",
"where INLINEFORM0 is given by DISPLAYFORM0 ",
"where INLINEFORM0 and INLINEFORM1 are parameters. The generative probability distribution is given by DISPLAYFORM0 ",
"where INLINEFORM0 and INLINEFORM1 are two parameters. Equation EQREF18 and EQREF19 are the attention mechanism BIBREF30 , that mitigates the long-term dependency issue of the original Seq2Seq model. We append the edit vector to every input embedding of the decoder in Equation EQREF16 , so the edit information can be utilized in the entire generation process.",
"We learn our response generation model by minimizing the negative log likelihood of INLINEFORM0 DISPLAYFORM0 ",
"We implement our model by PyTorch . We employ the Adam algorithm BIBREF31 to optimize the objective function with a batch size of 128. We set the initial learning rate as INLINEFORM0 and reduce it by half if perplexity on validation begins to increase. We will stop training if the perplexity on validation keeps increasing in two successive epochs. ."
],
[
"In this paper, we only consider single turn response generation. We collected over 20 million human-human context-response pairs (context only contains 1 turn) from Douban Group . After removing duplicated pairs and utterance longer than 30 words, we split 19,623,374 pairs for training, 10,000 pairs for validation and 10,000 pairs for testing. The average length of contexts and responses are 11.64 and 12.33 respectively. The training data mentioned above is used by retrieval models and generative models.",
"In terms of ensemble models and our editing model, the validation set and the test set are the same with datasets prepared for retrieval and generation models. Besides, for each context in the validation and test sets, we select its prototypes with the method described in Section “Prototype Selector\". We follow Song et al. song2016two to construct a training data set for ensemble models, and construct a training data set with the method described in Section “Prototype Selector\" for our editing models. We can obtain 42,690,275 INLINEFORM0 quadruples with the proposed data preparing method. For a fair comparison, we randomly sample 19,623,374 instances for the training of our method and the ensemble method respectively. To facilitate further research, related resources of the paper can be found at https://github.com/MarkWuNLP/ResponseEdit."
],
[
"S2SA: We apply the Seq2Seq with attention BIBREF30 as a baseline model. We use a Pytorch implementation, OpenNMT BIBREF33 in the experiment.",
"S2SA-MMI: We employed the bidirectional-MMI decoder as in BIBREF10 . The hyperparameter INLINEFORM0 is set as 0.5 according to the paper's suggestion. 200 candidates are sampled from beam search for reranking.",
"CVAE: The conditional variational auto-encoder is a popular method of increasing the diversity of response generation BIBREF34 . We use the published code at https://github.com/snakeztc/NeuralDialog-CVAE, and conduct small adaptations for our single turn scenario.",
"Retrieval: We compare our model with two retrieval-based methods to show the effect of editing. One is Retrieval-default that directly regards the top-1 result given by Lucene as the reply. The second one is Retrieval-Rerank, where we first retrieve 20 response candidates, and then employ a dual-LSTM model BIBREF6 to compute matching degree between current context and the candidates. The matching model is implemented with the same setting in BIBREF6 , and is trained on the training data set where negative instances are randomly sampled with a ratio of INLINEFORM0 .",
"Ensemble Model: Song et al song2016two propose an ensemble of retrieval and generation methods. It encodes current context and retrieved responses (Top-k retrieved responses are all used in the generation process.) into vectors, and feeds these representations to a decoder to generate a new response. As there is no official code, we implement it carefully by ourselves. We use the top-1 response returned by beam search as a baseline, denoted as Ensemble-default. For a fair comparison, we further rerank top 20 generated results with the same LSTM based matching model, and denote it as Ensemble-Rerank. We further create a candidate pool by merging the retrieval and generation results, and rerank them with the same ranker. The method is denoted as Ensemble-Merge.",
"Correspondingly, we evaluate three variants of our model. Specifically, Edit-default and Edit-1-Rerank edit top-1 response yielded by Retrieval-default and Retrieval-Rerank respectively. Edit-N-Rerank edits all 20 responses returned by Lucene and then reranks the revised results with the dual-LSTM model. We also merge edit results of Edit-N-Rerank and candidates returned by the search engine, and then rerank them, which is denoted as Edit-Merge. In practice, the word embedding size and editor vector size are 512, and both of the encoder and decoder are a 1-layer GRU whose hidden vector size is 1024. Message and response vocabulary size are 30000, and words not covered by the vocabulary are represented by a placeholder $UNK$. Word embedding size, hidden vector size and attention vector size of baselines and our models are the same. All generative models use beam search to yield responses, where the beam size is 20 except S2SA-MMI. For all models, we remove $UNK$ from the target vocabulary, because it always leads to a fluency issue in evaluation."
],
[
"We evaluate our model on four criteria: fluency, relevance, diversity and originality. We employ Embedding Average (Average), Embedding Extrema (Extrema), and Embedding Greedy (Greedy) BIBREF35 to evaluate response relevance, which are better correlated with human judgment than BLEU. Following BIBREF10 , we evaluate the response diversity based on the ratios of distinct unigrams and bigrams in generated responses, denoted as Distinct-1 and Distinct-2. In this paper, we define a new metric, originality, that is defined as the ratio of generated responses that do not appear in the training set. Here, “appear\" means we can find exactly the same response in our training data set. We randomly select 1,000 contexts from the test set, and ask three native speakers to annotate response fluency. We conduct 3-scale rating: +2, +1 and 0. +2: The response is fluent and grammatically correct. +1: There are a few grammatical errors in the response but readers could understand it. 0: The response is totally grammatically broken, making it difficult to understand. As how to evaluate response generation automatically is still an open problem BIBREF35 , we further conduct human evaluations to compare our models with baselines. We ask the same three native speakers to do a side-by-side comparison BIBREF15 on the 1,000 contexts. Given a context and two responses generated by different models, we ask annotators to decide which response is better (Ties are permitted)."
],
[
"Table TABREF25 shows the evaluation results on the Chinese dataset. Our methods are better than retrieval-based methods on embedding based metrics, that means revised responses are more relevant to ground-truth in the semantic space. Our model just slightly revises prototype response, so improvements on automatic metrics are not that large but significant on statistical tests (t-test, p-value INLINEFORM0 ). Two factors are known to cause Edit-1-Rerank worse than Retrieval-Rerank. 1) Rerank algorithm is biased to long responses, that poses a challenge for the editing model. 2) Despite of better prototype responses, a context of top-1 response is always greatly different from current context, leading to a large insertion word set and a large deletion set, that also obstructs the revision process. In terms of diversity, our methods drop on distinct-1 and distinct-2 in a comparison with retrieval-based methods, because the editing model often deletes special words pursuing for better relevance. Retrieval-Rerank is better than retrieval-default, indicating that it is necessary to rerank responses by measuring context-response similarity with a matching model.",
"Our methods significantly outperform generative baselines in terms of diversity since prototype responses are good start-points that are diverse and informative. It demonstrates that the prototype-then-editing paradigm is capable of addressing the safe response problem. Edit-Rerank is better than generative baselines on relevance but Edit-default is not, indicating a good prototype selector is quite important to our editing model. In terms of originality, about 86 INLINEFORM0 revised response do not appear in the training set, that surpasses S2SA, S2SA-MMI and CVAE. This is mainly because baseline methods are more likely to generate safe responses that are frequently appeared in the training data, while our model tends to modify an existing response that avoids duplication issue. In terms of fluency, S2SA achieves the best results, and retrieval based approaches come to the second place. Safe response enjoys high score on fluency, that is why S2SA and S2SA-MMI perform well on this metric. Although editing based methods are not the best on the fluency metric, they also achieve a high absolute number. That is an acceptable fluency score for a dialogue engine, indicating that most of generation responses are grammatically correct. In addition, in terms of the fluency metric, Fleiss' Kappa BIBREF32 on all models are around 0.8, showing a high agreement among labelers.",
"Compared to ensemble models, our model performs much better on diversity and originality, that is because we regard prototype response instead of the current context as source sentence in the Seq2Seq, which keeps most of content in prototype but slightly revises it based on the context difference. Both of the ensemble and edit model are improved when the original retrieval candidates are considered in the rerank process.",
"Regarding human side-by-side evaluation, we can find that Edit-Default and Edit-N-rerank are slightly better than Retrieval-default and Retrieval-rerank (The winning examples are more than the losing examples), indicating that the post-editing is able to improve the response quality. Ed-Default is worse than Ens-Default, but Ed-N-Rerank is better than Ens-Rerank. This is mainly because the editing model regards the prototype response as the source language, so it is highly depends on the quality of prototype response."
],
[
"We train variants of our model by removing the insertion word vector, the deletion word vector, and both of them respectively. The results are shown in Table TABREF29 . We can find that embedding based metrics drop dramatically when the editing vector is partially or totally removed, indicating that the edit vector is crucial for response relevance. Diversity and originality do not decrease after the edit vector is removed, implying that the retrieved prototype is the key factor for these two metrics. According to above observations, we conclude that the prototype selector and the context-aware editor play different roles in generating responses.",
"It is interesting to explore the semantic gap between prototype and revised response. We ask annotators to conduct 4-scale rating on 500 randomly sampled prototype-response pairs given by Edit-default and Edit-N-Rerank respectively. The 4-scale is defined as: identical, paraphrase, on the same topic and unrelated.",
"Figure FIGREF34 provides the ratio of four editing types defined above. For both methods, Only INLINEFORM0 of edits are exactly the same with the prototype, that means our model does not downgrade to a copy model. Surprisingly, there are INLINEFORM1 revised responses are unrelated to prototypes. The key factor for this phenomenon is that the neural editor will rewrite the prototype when it is hard to insert insertion words to the prototype. The ratio of “on the same topic\" response given by Edit-N-rerank is larger than Edit-default, revealing that “on the same topic\" responses might be more relevant from the view of a LSTM based reranker.",
"We give three examples to show how our model works in Table TABREF30 . The first case illustrates the effect of word insertion. Our editing model enriches a short response by inserting words from context, that makes the conversation informative and coherent. The second case gives an example of word deletion, where a phrase “braised pork rice\" is removed as it does not fit current context. Phrase “braised pork rice\" only appears in the prototype context but not in current context, so it is in the deletion word set INLINEFORM0 , that makes the decoder not generate it. The third one is that our model forms a relevant query by deleting some words in the prototype while inserting other words to it. Current context is talking about “clean tatoo\", but the prototype discusses “clean hair\", leading to an irrelevant response. After the word substitution, the revised response becomes appropriated for current context.",
"According to our observation, function words and nouns are more likely to be added/deleted. This is mainly because function words, such as pronoun, auxiliary, and interjection may be substituted in the paraphrasing. In addition, a large proportion of context differences is caused by nouns substitutions, thus we observe that nouns are added/deleted in the revision frequently."
],
[
"We present a new paradigm, prototype-then-edit, for open domain response generation, that enables a generation-based chatbot to leverage retrieved results. We propose a simple but effective model to edit context-aware responses by taking context differences into consideration. Experiment results on a large-scale dataset show that our model outperforms traditional methods on some metrics. In the future, we will investigate how to jointly learn the prototype selector and neural editor."
],
[
"Yu is supported by AdeptMind Scholarship and Microsoft Scholarship. This work was supported in part by the Natural Science Foundation of China (Grand Nos. U1636211, 61672081, 61370126), Beijing Advanced Innovation Center for Imaging Technology (No.BAICIT-2016001) and National Key R&D Program of China (No.2016QY04W0802). "
]
],
"section_name": [
"Introduction",
"Related Work",
"Background",
"Model Overview",
"Prototype Selector",
"Context-Aware Neural Editor",
"Experiment setting",
"Baseline",
"Evaluation Metrics",
"Evaluation Results",
"Discussions",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"407dc75c485e92d8b5810c9a95d51a892ec08edd"
],
"answer": [
{
"evidence": [
"We evaluate our model on four criteria: fluency, relevance, diversity and originality. We employ Embedding Average (Average), Embedding Extrema (Extrema), and Embedding Greedy (Greedy) BIBREF35 to evaluate response relevance, which are better correlated with human judgment than BLEU. Following BIBREF10 , we evaluate the response diversity based on the ratios of distinct unigrams and bigrams in generated responses, denoted as Distinct-1 and Distinct-2. In this paper, we define a new metric, originality, that is defined as the ratio of generated responses that do not appear in the training set. Here, “appear\" means we can find exactly the same response in our training data set. We randomly select 1,000 contexts from the test set, and ask three native speakers to annotate response fluency. We conduct 3-scale rating: +2, +1 and 0. +2: The response is fluent and grammatically correct. +1: There are a few grammatical errors in the response but readers could understand it. 0: The response is totally grammatically broken, making it difficult to understand. As how to evaluate response generation automatically is still an open problem BIBREF35 , we further conduct human evaluations to compare our models with baselines. We ask the same three native speakers to do a side-by-side comparison BIBREF15 on the 1,000 contexts. Given a context and two responses generated by different models, we ask annotators to decide which response is better (Ties are permitted)."
],
"extractive_spans": [
"fluency",
"relevance",
"diversity ",
"originality"
],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluate our model on four criteria: fluency, relevance, diversity and originality. We employ Embedding Average (Average), Embedding Extrema (Extrema), and Embedding Greedy (Greedy) BIBREF35 to evaluate response relevance, which are better correlated with human judgment than BLEU."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"1bcb3acccd4c95d4603ce03b08292362f3e1a787",
"572b6431fd450795e755c15090e8dcc016527158",
"6d06ca87fadbda23168c6e23e7f6264267c4949b"
],
"answer": [
{
"evidence": [
"Our experiments are conducted on a large scale Chinese conversation corpus comprised of 20 million context-response pairs. We compare our model with generative models and retrieval models in terms of fluency, relevance, diversity and originality. The experiments show that our method outperforms traditional generative models on relevance, diversity and originality. We further find that the revised response achieves better relevance compared to its prototype and other retrieval results, demonstrating that the editing process does not only promote response originality but also improve the relevance of retrieval results."
],
"extractive_spans": [
" a large scale Chinese conversation corpus"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our experiments are conducted on a large scale Chinese conversation corpus comprised of 20 million context-response pairs. We compare our model with generative models and retrieval models in terms of fluency, relevance, diversity and originality. The experiments show that our method outperforms traditional generative models on relevance, diversity and originality. We further find that the revised response achieves better relevance compared to its prototype and other retrieval results, demonstrating that the editing process does not only promote response originality but also improve the relevance of retrieval results."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Our experiments are conducted on a large scale Chinese conversation corpus comprised of 20 million context-response pairs. We compare our model with generative models and retrieval models in terms of fluency, relevance, diversity and originality. The experiments show that our method outperforms traditional generative models on relevance, diversity and originality. We further find that the revised response achieves better relevance compared to its prototype and other retrieval results, demonstrating that the editing process does not only promote response originality but also improve the relevance of retrieval results."
],
"extractive_spans": [
"Chinese conversation corpus comprised of 20 million context-response pairs"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our experiments are conducted on a large scale Chinese conversation corpus comprised of 20 million context-response pairs."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In this paper, we only consider single turn response generation. We collected over 20 million human-human context-response pairs (context only contains 1 turn) from Douban Group . After removing duplicated pairs and utterance longer than 30 words, we split 19,623,374 pairs for training, 10,000 pairs for validation and 10,000 pairs for testing. The average length of contexts and responses are 11.64 and 12.33 respectively. The training data mentioned above is used by retrieval models and generative models.",
"Table TABREF25 shows the evaluation results on the Chinese dataset. Our methods are better than retrieval-based methods on embedding based metrics, that means revised responses are more relevant to ground-truth in the semantic space. Our model just slightly revises prototype response, so improvements on automatic metrics are not that large but significant on statistical tests (t-test, p-value INLINEFORM0 ). Two factors are known to cause Edit-1-Rerank worse than Retrieval-Rerank. 1) Rerank algorithm is biased to long responses, that poses a challenge for the editing model. 2) Despite of better prototype responses, a context of top-1 response is always greatly different from current context, leading to a large insertion word set and a large deletion set, that also obstructs the revision process. In terms of diversity, our methods drop on distinct-1 and distinct-2 in a comparison with retrieval-based methods, because the editing model often deletes special words pursuing for better relevance. Retrieval-Rerank is better than retrieval-default, indicating that it is necessary to rerank responses by measuring context-response similarity with a matching model."
],
"extractive_spans": [],
"free_form_answer": "Chinese dataset containing human-human context response pairs collected from Douban Group ",
"highlighted_evidence": [
"We collected over 20 million human-human context-response pairs (context only contains 1 turn) from Douban Group . ",
"Table TABREF25 shows the evaluation results on the Chinese dataset."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"33b0745f49ae2916fdffcbc9967e443b0a8fa1e9",
"a06c8e12ac44fd7db01bf5ec234d3ae0e30a8d5f",
"b688e87deb006758009e4e0789f8368674ce732c"
],
"answer": [
{
"evidence": [
"Prior work BIBREF11 has figured out how to edit prototype in an unconditional setting, but it cannot be applied to the response generation directly. In this paper, we propose a prototype editing method in a conditional setting. Our idea is that differences between responses strongly correlates with differences in their contexts (i.e. if a word in prototype context is changed, its related words in the response are probably modified in the editing.). We realize this idea by designing a context-aware editing model that is built upon a encoder-decoder model augmented with an editing vector. The edit vector is computed by the weighted average of insertion word embeddings and deletion word embeddings. Larger weights mean that the editing model should pay more attention on corresponding words in revision. For instance, in Table TABREF1 , we wish words like “dessert\", “Tofu\" and “vegetables\" get larger weights than words like “and\" and “ at\". The encoder learns the prototype representation with a gated recurrent unit (GRU), and feeds the representation to a decoder together with the edit vector. The decoder is a GRU language model, that regards the concatenation of last step word embedding and the edit vector as inputs, and predicts the next word with an attention mechanism."
],
"extractive_spans": [
"a GRU language model"
],
"free_form_answer": "",
"highlighted_evidence": [
". The decoder is a GRU language model, that regards the concatenation of last step word embedding and the edit vector as inputs, and predicts the next word with an attention mechanism."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Prior work BIBREF11 has figured out how to edit prototype in an unconditional setting, but it cannot be applied to the response generation directly. In this paper, we propose a prototype editing method in a conditional setting. Our idea is that differences between responses strongly correlates with differences in their contexts (i.e. if a word in prototype context is changed, its related words in the response are probably modified in the editing.). We realize this idea by designing a context-aware editing model that is built upon a encoder-decoder model augmented with an editing vector. The edit vector is computed by the weighted average of insertion word embeddings and deletion word embeddings. Larger weights mean that the editing model should pay more attention on corresponding words in revision. For instance, in Table TABREF1 , we wish words like “dessert\", “Tofu\" and “vegetables\" get larger weights than words like “and\" and “ at\". The encoder learns the prototype representation with a gated recurrent unit (GRU), and feeds the representation to a decoder together with the edit vector. The decoder is a GRU language model, that regards the concatenation of last step word embedding and the edit vector as inputs, and predicts the next word with an attention mechanism."
],
"extractive_spans": [
"a GRU language model"
],
"free_form_answer": "",
"highlighted_evidence": [
"The encoder learns the prototype representation with a gated recurrent unit (GRU), and feeds the representation to a decoder together with the edit vector. The decoder is a GRU language model, that regards the concatenation of last step word embedding and the edit vector as inputs, and predicts the next word with an attention mechanism."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Prior work BIBREF11 has figured out how to edit prototype in an unconditional setting, but it cannot be applied to the response generation directly. In this paper, we propose a prototype editing method in a conditional setting. Our idea is that differences between responses strongly correlates with differences in their contexts (i.e. if a word in prototype context is changed, its related words in the response are probably modified in the editing.). We realize this idea by designing a context-aware editing model that is built upon a encoder-decoder model augmented with an editing vector. The edit vector is computed by the weighted average of insertion word embeddings and deletion word embeddings. Larger weights mean that the editing model should pay more attention on corresponding words in revision. For instance, in Table TABREF1 , we wish words like “dessert\", “Tofu\" and “vegetables\" get larger weights than words like “and\" and “ at\". The encoder learns the prototype representation with a gated recurrent unit (GRU), and feeds the representation to a decoder together with the edit vector. The decoder is a GRU language model, that regards the concatenation of last step word embedding and the edit vector as inputs, and predicts the next word with an attention mechanism."
],
"extractive_spans": [
"GRU"
],
"free_form_answer": "",
"highlighted_evidence": [
"The decoder is a GRU language model, that regards the concatenation of last step word embedding and the edit vector as inputs, and predicts the next word with an attention mechanism."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"a0b403873302db7cada39008f04d01155ef68f4f",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"8181056b3027150af848f9459ad198e6191887c7"
],
"answer": [
{
"evidence": [
"Our methods significantly outperform generative baselines in terms of diversity since prototype responses are good start-points that are diverse and informative. It demonstrates that the prototype-then-editing paradigm is capable of addressing the safe response problem. Edit-Rerank is better than generative baselines on relevance but Edit-default is not, indicating a good prototype selector is quite important to our editing model. In terms of originality, about 86 INLINEFORM0 revised response do not appear in the training set, that surpasses S2SA, S2SA-MMI and CVAE. This is mainly because baseline methods are more likely to generate safe responses that are frequently appeared in the training data, while our model tends to modify an existing response that avoids duplication issue. In terms of fluency, S2SA achieves the best results, and retrieval based approaches come to the second place. Safe response enjoys high score on fluency, that is why S2SA and S2SA-MMI perform well on this metric. Although editing based methods are not the best on the fluency metric, they also achieve a high absolute number. That is an acceptable fluency score for a dialogue engine, indicating that most of generation responses are grammatically correct. In addition, in terms of the fluency metric, Fleiss' Kappa BIBREF32 on all models are around 0.8, showing a high agreement among labelers."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Although editing based methods are not the best on the fluency metric, they also achieve a high absolute number. That is an acceptable fluency score for a dialogue engine, indicating that most of generation responses are grammatically correct."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"460f8787b77ec0174efcae6c8fead96fe7878b16",
"cd36292afd7752b8b3998bb6cccf84ecc8f6cf1d"
],
"answer": [
{
"evidence": [
"Inspired by this idea, we formulate the response generation process as follows. Given a conversational context INLINEFORM0 , we first retrieve a similar context INLINEFORM1 and its associated response INLINEFORM2 from a pre-defined index, which are called prototype context and prototype response respectively. Then, we calculate an edit vector by concatenating the weighted average results of insertion word embeddings (words in prototype context but not in current context) and deletion word embeddings (words in current context but not in prototype context). After that, we revise the prototype response conditioning on the edit vector. We further illustrate how our idea works with an example in Table TABREF1 . It is obvious that the major difference between INLINEFORM3 and INLINEFORM4 is what the speaker eats, so the phrase “raw green vegetables\" in INLINEFORM5 should be replaced by “desserts\" in order to adapt to the current context INLINEFORM6 . We hope that the decoder language model could remember the collocation of “desserts\" and “bad for health\", so as to replace “beneficial\" with “bad\" in the revised response. The new paradigm does not only inherits the fluency and informativeness advantages from retrieval results, but also enjoys the flexibility of generation results. Hence, our edit-based model is better than previous retrieval-based and generation-based models. The edit-based model can solve the “safe response\" problem of generative models by leveraging existing responses, and is more flexible than retrieval-based models, because it does not highly depend on the index and is able to edit a response to fit current context."
],
"extractive_spans": [
"similar context INLINEFORM1 and its associated response INLINEFORM2"
],
"free_form_answer": "",
"highlighted_evidence": [
" Given a conversational context INLINEFORM0 , we first retrieve a similar context INLINEFORM1 and its associated response INLINEFORM2 from a pre-defined index, which are called prototype context and prototype response respectively."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"A good prototype selector INLINEFORM0 plays an important role in the prototype-then-edit paradigm. We use different strategies to select prototypes for training and testing. In testing, as we described above, we retrieve a context-response pair INLINEFORM1 from a pre-defined index for context INLINEFORM2 according to the similarity of INLINEFORM3 and INLINEFORM4 . Here, we employ Lucene to construct the index and use its inline algorithm to compute the context similarity."
],
"extractive_spans": [
"to compute the context similarity."
],
"free_form_answer": "",
"highlighted_evidence": [
"A good prototype selector INLINEFORM0 plays an important role in the prototype-then-edit paradigm. We use different strategies to select prototypes for training and testing. In testing, as we described above, we retrieve a context-response pair INLINEFORM1 from a pre-defined index for context INLINEFORM2 according to the similarity of INLINEFORM3 and INLINEFORM4 . Here, we employ Lucene to construct the index and use its inline algorithm to compute the context similarity."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"Which aspects of response generation do they evaluate on?",
"Which dataset do they evaluate on?",
"What model architecture do they use for the decoder?",
"Do they ensure the edited response is grammatical?",
"What do they use as the pre-defined index of prototype responses?"
],
"question_id": [
"00f507053c47e55d7e72bebdbd8a75b3ca88cf85",
"e14e3e0944ec3290d1985e9a3da82a7df17575cd",
"f637bba86cfb94ca8ac4b058faf839c257d5eaa0",
"0b5bf00d2788c534c4c6c007b72290c48be21e16",
"86c867b393db0ec4ad09abb48cc1353cac47ea4c"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: An example of context-aware prototypes editing. Underlined words mean they do not appear in the original",
"Figure 1: Architecture of our model.",
"Table 2: Automatic evaluation results. Numbers in bold mean that improvement from the model on that metric is statistically significant over the baseline methods (t-test, p-value < 0.01). κ denotes Fleiss Kappa (Fleiss 1971), which reflects the agreement among human annotators.",
"Table 3: Human side-by-side evaluation results. If a row is labeled as “a v.s.b”, the second column, “loss”, means the ratio of responses given by “a” are worse than those given by“b”.",
"Table 4: Model ablation tests. Full model denotes Edit-NRerank. “-del” means we only consider insertion words. “- ins” means we only consider deletion words. “-both” means we train a standard seq2seq model from prototype response to revised response.",
"Figure 2: Distribution across different editing types.",
"Table 5: Case Study. We show examples yielded by Editdefault and Edit-Rerank. Chinese utterances are translated to English here."
],
"file": [
"1-Table1-1.png",
"4-Figure1-1.png",
"6-Table2-1.png",
"6-Table3-1.png",
"7-Table4-1.png",
"7-Figure2-1.png",
"7-Table5-1.png"
]
} | [
"Which dataset do they evaluate on?"
] | [
[
"1806.07042-Introduction-4",
"1806.07042-Evaluation Results-0",
"1806.07042-Experiment setting-0"
]
] | [
"Chinese dataset containing human-human context response pairs collected from Douban Group "
] | 360 |
1912.11637 | Explicit Sparse Transformer: Concentrated Attention Through Explicit Selection | Self-attention based Transformer has demonstrated the state-of-the-art performances in a number of natural language processing tasks. Self-attention is able to model long-term dependencies, but it may suffer from the extraction of irrelevant information in the context. To tackle the problem, we propose a novel model called \textbf{Explicit Sparse Transformer}. Explicit Sparse Transformer is able to improve the concentration of attention on the global context through an explicit selection of the most relevant segments. Extensive experimental results on a series of natural language processing and computer vision tasks, including neural machine translation, image captioning, and language modeling, all demonstrate the advantages of Explicit Sparse Transformer in model performance. We also show that our proposed sparse attention method achieves comparable or better results than the previous sparse attention method, but significantly reduces training and testing time. For example, the inference speed is twice that of sparsemax in Transformer model. Code will be available at \url{this https URL} | {
"paragraphs": [
[
"Understanding natural language requires the ability to pay attention to the most relevant information. For example, people tend to focus on the most relevant segments to search for the answers to their questions in mind during reading. However, retrieving problems may occur if irrelevant segments impose negative impacts on reading comprehension. Such distraction hinders the understanding process, which calls for an effective attention.",
"This principle is also applicable to the computation systems for natural language. Attention has been a vital component of the models for natural language understanding and natural language generation. Recently, BIBREF0 proposed Transformer, a model based on the attention mechanism for Neural Machine Translation(NMT). Transformer has shown outstanding performance in natural language generation tasks. More recently, the success of BERT BIBREF1 in natural language processing shows the great usefulness of both the attention mechanism and the framework of Transformer.",
"However, the attention in vanilla Transformer has a obvious drawback, as the Transformer assigns credits to all components of the context. This causes a lack of focus. As illustrated in Figure FIGREF1, the attention in vanilla Transformer assigns high credits to many irrelevant words, while in Explicit Sparse Transformer, it concentrates on the most relevant $k$ words. For the word “tim”, the most related words should be \"heart\" and the immediate words. Yet the attention in vanilla Transformer does not focus on them but gives credits to some irrelevant words such as “him”.",
"Recent works have studied applying sparse attention in Transformer model. However, they either add local attention constraints BIBREF2 which break long term dependency or hurt the time efficiency BIBREF3. Inspired by BIBREF4 which introduce sparse credit assignment to the LSTM model, we propose a novel model called Explicit Sparse Transformer which is equipped with our sparse attention mechanism. We implement an explicit selection method based on top-$k$ selection. Unlike vanilla Transformer, Explicit Sparse Transformer only pays attention to the $k$ most contributive states. Thus Explicit Sparse Transformer can perform more concentrated attention than vanilla Transformer.",
"We first validate our methods on three tasks. For further investigation, we compare our methods with previous sparse attention methods and experimentally answer how to choose k in a series of qualitative analyses. We are surprised to find that the proposed sparse attention method can also help with training as a regularization method. Visual analysis shows that Explicit Sparse Transformer exhibits a higher potential in performing a high-quality alignment. The contributions of this paper are presented below:",
"We propose a novel model called Explicit Sparse Transformer, which enhances the concentration of the Transformer's attention through explicit selection.",
"We conducted extensive experiments on three natural language processing tasks, including Neural Machine Translation, Image Captioning and Language Modeling. Compared with vanilla Transformer, Explicit Sparse Transformer demonstrates better performances in the above three tasks.",
"Compared to previous sparse attention methods for transformers, our methods are much faster in training and testing, and achieves comparable results."
],
[
"The review to the attention mechanism and the attention-based framework of Transformer can be found in Appendix SECREF35.",
"Lack of concentration in the attention can lead to the failure of relevant information extraction. To this end, we propose a novel model, Explicit Sparse Transformer, which enables the focus on only a few elements through explicit selection. Compared with the conventional attention, no credit will be assigned to the value that is not highly correlated to the query. We provide a comparison between the attention of vanilla Transformer and that of Explicit Sparse Transformer in Figure FIGREF5.",
"Explicit Sparse Transformer is still based on the Transformer framework. The difference is in the implementation of self-attention. The attention is degenerated to the sparse attention through top-$k$ selection. In this way, the most contributive components for attention are reserved and the other irrelevant information are removed. This selective method is effective in preserving important information and removing noise. The attention can be much more concentrated on the most contributive elements of value. In the following, we first introduce the sparsification in self-attention and then extend it to context attention.",
"In the unihead self-attention, the key components, the query $Q[l_{Q}, d]$, key $K[l_{K}, d]$ and value $V[l_{V}, d]$, are the linear transformation of the source context, namely the input of each layer, where $Q = W_{Q}x$, $K = W_{K}x$ and $V = W_{V}x$. Explicit Sparse Transformer first generates the attention scores $P$ as demonstrated below:",
"Then the model evaluates the values of the scores $P$ based on the hypothesis that scores with larger values demonstrate higher relevance. The sparse attention masking operation $\\mathcal {M}(\\cdot )$ is implemented upon $P$ in order to select the top-$k$ contributive elements. Specifically, we select the $k$ largest element of each row in $P$ and record their positions in the position matrix $(i, j)$, where $k$ is a hyperparameter. To be specific, say the $k$-th largest value of row $i$ is $t_{i}$, if the value of the $j$-th component is larger than $t_i$, the position $(i, j)$ is recorded. We concatenate the threshold value of each row to form a vector $t = [t_1, t_2, \\cdots , t_{l_{Q}}]$. The masking functions $\\mathcal {M}(\\cdot , \\cdot )$ is illustrated as follows:",
"With the top-$k$ selection, the high attention scores are selected through an explicit way. This is different from dropout which randomly abandons the scores. Such explicit selection can not only guarantee the preservation of important components, but also simplify the model since $k$ is usually a small number such as 8, detailed analysis can be found in SECREF28. The next step after top-$k$ selection is normalization:",
"where $A$ refers to the normalized scores. As the scores that are smaller than the top k largest scores are assigned with negative infinity by the masking function $\\mathcal {M}(\\cdot , \\cdot )$, their normalized scores, namely the probabilities, approximate 0. We show the back-propagation process of Top-k selection in SECREF50. The output representation of self-attention $C$ can be computed as below:",
"The output is the expectation of the value following the sparsified distribution $A$. Following the distribution of the selected components, the attention in the Explicit Sparse Transformer model can obtain more focused attention. Also, such sparse attention can extend to context attention. Resembling but different from the self-attention mechanism, the $Q$ is no longer the linear transformation of the source context but the decoding states $s$. In the implementation, we replace $Q$ with $W_{Q}s$, where $W_{Q}$ is still learnable matrix.",
"In brief, the attention in our proposed Explicit Sparse Transformer sparsifies the attention weights. The attention can then become focused on the most contributive elements, and it is compatible to both self-attention and context attention. The simple implementation of this method is in the Appendix SECREF55."
],
[
"We conducted a series of experiments on three natural language processing tasks, including neural machine translation, image captioning and language modeling. Detailed experimental settings are in Appendix SECREF42."
],
[
"To evaluate the performance of Explicit Sparse Transformer in NMT, we conducted experiments on three NMT tasks, English-to-German translation (En-De) with a large dataset, English-to-Vietnamese (En-Vi) translation and German-to-English translation (De-En) with two datasets of medium size. For En-De, we trained Explicit Sparse Transformer on the standard dataset for WMT 2014 En-De translation. The dataset consists of around 4.5 million sentence pairs. The source and target languages share a vocabulary of 32K sub-word units. We used the newstest 2013 for validation and the newstest 2014 as our test set. We report the results on the test set.",
"For En-Vi, we trained our model on the dataset in IWSLT 2015 BIBREF20. The dataset consists of around 133K sentence pairs from translated TED talks. The vocabulary size for source language is around 17,200 and that for target language is around 7,800. We used tst2012 for validation, and tst2013 for testing and report the testing results. For De-En, we used the dataset in IWSLT 2014. The training set contains 160K sentence pairs and the validation set contains 7K sentences. Following BIBREF21, we used the same test set with around 7K sentences. The data were preprocessed with byte-pair encoding BIBREF22. The vocabulary size is 14,000."
],
[
"Table TABREF10 presents the results of the baselines and our Explicit Sparse Transformer on the three datasets. For En-De, Transformer-based models outperform the previous methods. Compared with the result of Transformer BIBREF0, Explicit Sparse Transformer reaches 29.4 in BLEU score evaluation, outperforming vanilla Transformer by 0.3 BLEU score. For En-Vi, vanilla Transformer reaches 30.2, outperforming previous best method BIBREF7. Our model, Explicit Sparse Transformer, achieves a much better performance, 31.1, by a margin of 0.5 over vanilla Transformer. For De-En, we demonstrate that Transformer-based models outperform the other baselines. Compared with Transformer, our Explicit Sparse Transformer reaches a better performance, 35.6. Its advantage is +0.3. To the best of our knowledge, Explicit Sparse Transformer reaches a top line performance on the dataset."
],
[
"We evaluated our approach on the image captioning task. Image captioning is a task that combines image understanding and language generation. We conducted experiments on the Microsoft COCO 2014 dataset BIBREF23. It contains 123,287 images, each of which is paired 5 with descriptive sentences. We report the results and evaluate the image captioning model on the MSCOCO 2014 test set for image captioning. Following previous works BIBREF24, BIBREF25, we used the publicly-available splits provided by BIBREF26. The validation set and test set both contain 5,000 images."
],
[
"Table TABREF17 shows the results of the baseline models and Explicit Sparse Transformer on the COCO Karpathy test split. Transformer outperforms the mentioned baseline models. Explicit Sparse Transformer outperforms the implemented Transformer by +0.4 in terms of BLEU-4, +0.3 in terms of METEOR, +0.7 in terms of CIDEr. , which consistently proves its effectiveness in Image Captioning."
],
[
"Enwiki8 is large-scale dataset for character-level language modeling. It contains 100M bytes of unprocessed Wikipedia texts. The inputs include Latin alphabets, non-Latin alphabets, XML markups and special characters. The vocabulary size 205 tokens, including one for unknown characters. We used the same preprocessing method following BIBREF33. The training set contains 90M bytes of data, and the validation set and the test set contains 5M respectively."
],
[
"Table TABREF23 shows the results of the baseline models and Explicit Sparse Transformer-XL on the test set of enwiki8. Compared with the other strong baselines, Transformer-XL can reach a better performance, and Explicit Sparse Transformer outperforms Transformer-XL with an advantage."
],
[
"In this section, we performed several analyses for further discussion of Explicit Sparse Transformer. First, we compare the proposed method of topk selection before softmax with previous sparse attention method including various variants of sparsemax BIBREF3, BIBREF42, BIBREF43. Second, we discuss about the selection of the value of $k$. Third, we demonstrate that the top-k sparse attention method helps training. In the end, we conducted a series of qualitative analyses to visualize proposed sparse attention in Transformer."
],
[
"We compare the performance and speed of our method with the previous sparse attention methods on the basis of strong implemented transformer baseline. The training and inference speed are reported on the platform of Pytorch and IWSLT 2014 De-En translation dataset, the batch size for inference is set to 128 in terms of sentence and half precision training(FP-16) is applied.",
"As we can see from Table TABREF25, the proposed sparse attention method achieve the comparable results as previous sparse attention methods, but the training and testing speed is 2x faster than sparsemax and 10x faster than Entmax-alpha during the inference. This is due to the fact that our method does not introduce too much computation for calculating sparse attention scores.",
"The other group of sparse attention methods of adding local attention constraints into attention BIBREF2, BIBREF41, do not show performance on neural machine translation, so we do not compare them in Table TABREF25."
],
[
"The natural question of how to choose the optimal $k$ comes with the proposed method. We compare the effect of the value of $k$ at exponential scales. We perform experiments on En-Vi and De-En from 3 different initializations for each value of $K$, and report the mean BLEU scores on the valid set. The figure FIGREF27 shows that regardless of the value of 16 on the En-Vi dataset, the model performance generally rises first and then falls as $k$ increases. For $k\\in \\lbrace 4,8,16,32\\rbrace $, setting the value of $k$ to 8 achieves consistent improvements over the transformer baseline."
],
[
"We are surprised to find that only adding the sparsification in the training phase can also bring an improvement in the performance. We experiment this idea on IWSLT En-Vi and report the results on the valid set in Table TABREF30, . The improvement of 0.3 BLEU scores shows that vanilla Transformer may be overparameterized and the sparsification encourages the simplification of the model."
],
[
"To perform a thorough evaluation of our Explicit Sparse Transformer, we conducted a case study and visualize the attention distributions of our model and the baseline for further comparison. Specifically, we conducted the analysis on the test set of En-Vi, and randomly selected a sample pair of attention visualization of both models.",
"The visualization of the context attention of the decoder's bottom layer in Figure FIGREF33. The attention distribution of the left figure is fairly disperse. On the contrary, the right figure shows that the sparse attention can choose to focus only on several positions so that the model can be forced to stay focused. For example, when generating the phrase “for thinking about my heart”(Word-to-word translation from Vietnamese), the generated word cannot be aligned to the corresponding words. As to Explicit Sparse Transformer, when generating the phrase \"with all my heart\", the attention can focus on the corresponding positions with strong confidence.",
"The visualization of the decoder's top layer is shown in Figure FIGREF34. From the figure, the context attention at the top layer of the vanilla Transformer decoder suffers from focusing on the last source token. This is a common behavior of the attention in vanilla Transformer. Such attention with wrong alignment cannot sufficiently extract enough relevant source-side information for the generation. In contrast, Explicit Sparse Transformer, with simple modification on the vanilla version, does not suffer from this problem, but instead focuses on the relevant sections of the source context. The figure on the right demonstrating the attention distribution of Explicit Sparse Transformer shows that our proposed attention in the model is able to perform accurate alignment."
],
[
"Attention mechanism has demonstrated outstanding performances in a number of neural-network-based methods, and it has been a focus in the NLP studies BIBREF44. A number of studies are proposed to enhance the effects of attention mechanism BIBREF45, BIBREF0, BIBREF4, BIBREF46. BIBREF45 propose local attention and BIBREF47 propose local attention for self-attention. BIBREF48 propose hard attention that pays discrete attention in image captioning. BIBREF49 propose a combination soft attention with hard attention to construct hierarchical memory network. BIBREF8 propose a temperature mechanism to change the softness of attention distribution. BIBREF50 propose an attention which can select a small proportion for focusing. It is trained by reinforcement learning algorithms BIBREF51. In terms of memory networks, BIBREF52 propose to sparse access memory.",
"BIBREF2 recently propose to use local attention and block attention to sparsify the transformer. Our approach differs from them in that our method does not need to block sentences and still capture long distance dependencies. Besides, we demonstrate the importance of Explicit Sparse Transformer in sequence to sequence learning. Although the variants of sparsemax BIBREF3, BIBREF42, BIBREF43 improve in machine translation tasks, we empirically demonstrate in SECREF24 that our method introduces less computation in the standard transformer and is much faster than those sparse attention methods on GPUs."
],
[
"In this paper, we propose a novel model called Explicit Sparse Transformer. Explicit Sparse Transformer is able to make the attention in vanilla Transformer more concentrated on the most contributive components. Extensive experiments show that Explicit Sparse Transformer outperforms vanilla Transformer in three different NLP tasks. We conducted a series of qualitative analyses to investigate the reasons why Explicit Sparse Transformer outperforms the vanilla Transformer. Furthermore, we find an obvious problem of the attention at the top layer of the vanilla Transformer, and Explicit Sparse Transformer can alleviate this problem effectively with improved alignment effects."
],
[
"BIBREF44 first introduced the attention mechanism to learn the alignment between the target-side context and the source-side context, and BIBREF45 formulated several versions for local and global attention. In general, the attention mechanism maps a query and a key-value pair to an output. The attention score function and softmax normalization can turn the query $Q$ and the key $K$ into a distribution $\\alpha $. Following the distribution $\\alpha $, the attention mechanism computes the expectation of the value $V$ and finally generates the output $C$.",
"Take the original attention mechanism in NMT as an example. Both key $K \\in \\mathbb {R}^{n \\times d}$ and value $V \\in \\mathbb {R}^{n \\times d} $ are the sequence of output states from the encoder. Query $Q \\in \\mathbb {R}^{m \\times d}$ is the sequence of output states from the decoder, where $m$ is the length of $Q$, $n$ is the length of $K$ and $V$, and $d$ is the dimension of the states. Thus, the attention mechanism is formulated as:",
"where $f$ refers to the attention score computation."
],
[
"Transformer BIBREF0, which is fully based on the attention mechanism, demonstrates the state-of-the-art performances in a series of natural language generation tasks. Specifically, we focus on self-attention and multi-head attention.",
"The ideology of self-attention is, as the name implies, the attention over the context itself. In the implementation, the query $Q$, key $K$ and value $V$ are the linear transformation of the input $x$, so that $Q = W_{Q}x$, $K = W_{K}x$ and $V = W_{V}x$ where $W_{Q}$, $W_{K}$ and $W_{V}$ are learnable parameters. Therefore, the computation can be formulated as below:",
"where $d$ refers to the dimension of the states.",
"The aforementioned mechanism can be regarded as the unihead attention. As to the multi-head attention, the attention computation is separated into $g$ heads (namely 8 for basic model and 16 for large model in the common practice). Thus multiple parts of the inputs can be computed individually. For the $i$-th head, the output can be computed as in the following formula:",
"where $C^{(i)}$ refers to the output of the head, $Q^{(i)}$, $K^{(i)}$ and $V^{(i)}$ are the query, key and value of the head, and $d_k$ refers to the size of each head ($d_k = d/g$). Finally, the output of each head are concatenated for the output:",
"In common practice, $C$ is sent through a linear transformation with weight matrix $W_c$ for the final output of multi-head attention.",
"However, soft attention can assign weights to a lot more words that are less relevent to the query. Therefore, in order to improve concentration in attention for effective information extraction, we study the problem of sparse attention in Transformer and propose our model Explicit Sparse Transformer."
],
[
"We use the default setting in BIBREF0 for the implementation of our proposed Explicit Sparse Transformer. The hyper parameters including beam size and training steps are tuned on the valid set."
],
[
"Training: For En-Vi translation, we use default scripts and hyper-parameter setting of tensor2tensor v1.11.0 to preprocess, train and evaluate our model. We use the default scripts of fairseq v0.6.1 to preprocess the De-En and En-De dataset. We train the model on the En-Vi dataset for $35K$ steps with batch size of $4K$. For IWSLT 2015 De-En dataset, batch size is also set to $4K$, we update the model every 4 steps and train the model for 90epochs. For WMT 2014 En-De dataset, we train the model for 72 epochs on 4 GPUs with update frequency of 32 and batch size of 3584. We train all models on a single RTX2080TI for two small IWSLT datasets and on a single machine of 4 RTX TITAN for WMT14 En-De. In order to reduce the impact of random initialization, we perform experiments with three different initializations for all models and report the highest for small datasets.",
"Evaluation: We use case-sensitive tokenized BLEU score BIBREF55 for the evaluation of WMT14 En-De, and we use case-insensitive BLEU for that of IWSLT 2015 En-Vi and IWSLT 2014 De-En following BIBREF8. Same as BIBREF0, compound splitting is used for WMT 14 En-De. For WMT 14 En-De and IWSLT 2014 De-En, we save checkpoints every epoch and average last 10 checkpoints every 5 epochs, We select the averaged checkpoint with best valid BLEU and report its BLEU score on the test set. For IWSLT 2015 En-Vi, we save checkpoints every 600 seconds and average last 20 checkpoints."
],
[
"We still use the default setting of Transformer for training our proposed Explicit Sparse Transformer. We report the standard automatic evaluation metrics with the help of the COCO captioning evaluation toolkit BIBREF53, which includes the commonly-used evaluation metrics, BLEU-4 BIBREF55, METEOR BIBREF54, and CIDEr BIBREF56."
],
[
"We follow BIBREF40 and use their implementation for our Explicit Sparse Transformer. Following the previous work BIBREF33, BIBREF40, we use BPC ($E[− log_2 P(xt+1|ht)]$), standing for the average number of Bits-Per-Character, for evaluation. Lower BPC refers to better performance. As to the model implementation, we implement Explicit Sparse Transformer-XL, which is based on the base version of Transformer-XL. Transformer-XL is a model based on Transformer but has better capability of representing long sequences."
],
[
"The masking function $\\mathcal {M}(\\cdot , \\cdot )$ is illustrated as follow:",
"Denote $M=\\mathcal {M}(P, k)$. We regard $t_i$ as constants. When back-propagating,",
"The next step after top-$k$ selection is normalization:",
"where $A$ refers to the normalized scores. When backpropagating,",
"The softmax function is evidently differentiable, therefore, we have calculated the gradient involved in top-k selection."
],
[
"Figure FIGREF56 shows the code for the idea in case of single head self-attention, the proposed method is easy to implement and plug in the successful Transformer model."
]
],
"section_name": [
"Introduction",
"Explicit Sparse Transformer",
"Results",
"Results ::: Neural Machine Translation ::: Dataset",
"Results ::: Neural Machine Translation ::: Result",
"Results ::: Image Captioning ::: Dataset",
"Results ::: Image Captioning ::: Result",
"Results ::: Language Modeling ::: Dataset",
"Results ::: Language Modeling ::: Result",
"Discussion",
"Discussion ::: Comparison with other Sparse Attention Methods",
"Discussion ::: How to Select a Proper k?",
"Discussion ::: Do the proposed sparse attention method helps training?",
"Discussion ::: Do the Explicit Sparse Transformer Attend better?",
"Related Work",
"Conclusion",
"Appendix ::: Background ::: Attention Mechanism",
"Appendix ::: Background ::: Transformer",
"Appendix ::: Experimental Details",
"Appendix ::: Experimental Details ::: Neural Machine Translation",
"Appendix ::: Experimental Details ::: Image Captioning",
"Appendix ::: Experimental Details ::: Language Models",
"Appendix ::: The Back-propagation Process of Top-k Selection",
"Appendix ::: Implementation"
]
} | {
"answers": [
{
"annotation_id": [
"6d4fdd063ed5eadea9e9b759663dc5fb432fa725",
"b08d0f8f516da89105bffa8bb3fd246a850ddcbd"
],
"answer": [
{
"evidence": [
"Explicit Sparse Transformer is still based on the Transformer framework. The difference is in the implementation of self-attention. The attention is degenerated to the sparse attention through top-$k$ selection. In this way, the most contributive components for attention are reserved and the other irrelevant information are removed. This selective method is effective in preserving important information and removing noise. The attention can be much more concentrated on the most contributive elements of value. In the following, we first introduce the sparsification in self-attention and then extend it to context attention."
],
"extractive_spans": [],
"free_form_answer": "It is meant that only most contributive k elements are reserved, while other elements are removed.",
"highlighted_evidence": [
"Explicit Sparse Transformer is still based on the Transformer framework. The difference is in the implementation of self-attention. The attention is degenerated to the sparse attention through top-$k$ selection. In this way, the most contributive components for attention are reserved and the other irrelevant information are removed."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Recent works have studied applying sparse attention in Transformer model. However, they either add local attention constraints BIBREF2 which break long term dependency or hurt the time efficiency BIBREF3. Inspired by BIBREF4 which introduce sparse credit assignment to the LSTM model, we propose a novel model called Explicit Sparse Transformer which is equipped with our sparse attention mechanism. We implement an explicit selection method based on top-$k$ selection. Unlike vanilla Transformer, Explicit Sparse Transformer only pays attention to the $k$ most contributive states. Thus Explicit Sparse Transformer can perform more concentrated attention than vanilla Transformer.",
"Lack of concentration in the attention can lead to the failure of relevant information extraction. To this end, we propose a novel model, Explicit Sparse Transformer, which enables the focus on only a few elements through explicit selection. Compared with the conventional attention, no credit will be assigned to the value that is not highly correlated to the query. We provide a comparison between the attention of vanilla Transformer and that of Explicit Sparse Transformer in Figure FIGREF5."
],
"extractive_spans": [],
"free_form_answer": "focusing on the top-k segments that contribute the most in terms of correlation to the query",
"highlighted_evidence": [
"We implement an explicit selection method based on top-$k$ selection. Unlike vanilla Transformer, Explicit Sparse Transformer only pays attention to the $k$ most contributive states.",
"Compared with the conventional attention, no credit will be assigned to the value that is not highly correlated to the query."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"1c6c0fd7eb21ec857127b5e5ba9ad6848b3ed2e9",
"85bd98f471fd915be2109602cce8eff5e4f387af"
],
"answer": [
{
"evidence": [
"To evaluate the performance of Explicit Sparse Transformer in NMT, we conducted experiments on three NMT tasks, English-to-German translation (En-De) with a large dataset, English-to-Vietnamese (En-Vi) translation and German-to-English translation (De-En) with two datasets of medium size. For En-De, we trained Explicit Sparse Transformer on the standard dataset for WMT 2014 En-De translation. The dataset consists of around 4.5 million sentence pairs. The source and target languages share a vocabulary of 32K sub-word units. We used the newstest 2013 for validation and the newstest 2014 as our test set. We report the results on the test set.",
"For En-Vi, we trained our model on the dataset in IWSLT 2015 BIBREF20. The dataset consists of around 133K sentence pairs from translated TED talks. The vocabulary size for source language is around 17,200 and that for target language is around 7,800. We used tst2012 for validation, and tst2013 for testing and report the testing results. For De-En, we used the dataset in IWSLT 2014. The training set contains 160K sentence pairs and the validation set contains 7K sentences. Following BIBREF21, we used the same test set with around 7K sentences. The data were preprocessed with byte-pair encoding BIBREF22. The vocabulary size is 14,000."
],
"extractive_spans": [],
"free_form_answer": "For En-De translation the newstest 2014 set from WMT 2014 En-De translation dataset, for En-Vi translation the tst2013 from IWSLT 2015 dataset. and for De-En translation the teste set from IWSLT 2014.",
"highlighted_evidence": [
"To evaluate the performance of Explicit Sparse Transformer in NMT, we conducted experiments on three NMT tasks, English-to-German translation (En-De) with a large dataset, English-to-Vietnamese (En-Vi) translation and German-to-English translation (De-En) with two datasets of medium size. For En-De, we trained Explicit Sparse Transformer on the standard dataset for WMT 2014 En-De translation. The dataset consists of around 4.5 million sentence pairs. The source and target languages share a vocabulary of 32K sub-word units. We used the newstest 2013 for validation and the newstest 2014 as our test set. We report the results on the test set.\n\nFor En-Vi, we trained our model on the dataset in IWSLT 2015 BIBREF20. The dataset consists of around 133K sentence pairs from translated TED talks. The vocabulary size for source language is around 17,200 and that for target language is around 7,800. We used tst2012 for validation, and tst2013 for testing and report the testing results. For De-En, we used the dataset in IWSLT 2014. The training set contains 160K sentence pairs and the validation set contains 7K sentences. Following BIBREF21, we used the same test set with around 7K sentences."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To evaluate the performance of Explicit Sparse Transformer in NMT, we conducted experiments on three NMT tasks, English-to-German translation (En-De) with a large dataset, English-to-Vietnamese (En-Vi) translation and German-to-English translation (De-En) with two datasets of medium size. For En-De, we trained Explicit Sparse Transformer on the standard dataset for WMT 2014 En-De translation. The dataset consists of around 4.5 million sentence pairs. The source and target languages share a vocabulary of 32K sub-word units. We used the newstest 2013 for validation and the newstest 2014 as our test set. We report the results on the test set.",
"For En-Vi, we trained our model on the dataset in IWSLT 2015 BIBREF20. The dataset consists of around 133K sentence pairs from translated TED talks. The vocabulary size for source language is around 17,200 and that for target language is around 7,800. We used tst2012 for validation, and tst2013 for testing and report the testing results. For De-En, we used the dataset in IWSLT 2014. The training set contains 160K sentence pairs and the validation set contains 7K sentences. Following BIBREF21, we used the same test set with around 7K sentences. The data were preprocessed with byte-pair encoding BIBREF22. The vocabulary size is 14,000.",
"We evaluated our approach on the image captioning task. Image captioning is a task that combines image understanding and language generation. We conducted experiments on the Microsoft COCO 2014 dataset BIBREF23. It contains 123,287 images, each of which is paired 5 with descriptive sentences. We report the results and evaluate the image captioning model on the MSCOCO 2014 test set for image captioning. Following previous works BIBREF24, BIBREF25, we used the publicly-available splits provided by BIBREF26. The validation set and test set both contain 5,000 images.",
"Enwiki8 is large-scale dataset for character-level language modeling. It contains 100M bytes of unprocessed Wikipedia texts. The inputs include Latin alphabets, non-Latin alphabets, XML markups and special characters. The vocabulary size 205 tokens, including one for unknown characters. We used the same preprocessing method following BIBREF33. The training set contains 90M bytes of data, and the validation set and the test set contains 5M respectively."
],
"extractive_spans": [
"newstest 2014",
"tst2013",
"Following BIBREF21, we used the same test set with around 7K sentences.",
"MSCOCO 2014 test set",
"Enwiki8"
],
"free_form_answer": "",
"highlighted_evidence": [
"To evaluate the performance of Explicit Sparse Transformer in NMT, we conducted experiments on three NMT tasks, English-to-German translation (En-De) with a large dataset, English-to-Vietnamese (En-Vi) translation and German-to-English translation (De-En) with two datasets of medium size. For En-De, we trained Explicit Sparse Transformer on the standard dataset for WMT 2014 En-De translation. The dataset consists of around 4.5 million sentence pairs. The source and target languages share a vocabulary of 32K sub-word units. We used the newstest 2013 for validation and the newstest 2014 as our test set. We report the results on the test set.\n\nFor En-Vi, we trained our model on the dataset in IWSLT 2015 BIBREF20. The dataset consists of around 133K sentence pairs from translated TED talks. The vocabulary size for source language is around 17,200 and that for target language is around 7,800. We used tst2012 for validation, and tst2013 for testing and report the testing results. For De-En, we used the dataset in IWSLT 2014. The training set contains 160K sentence pairs and the validation set contains 7K sentences. Following BIBREF21, we used the same test set with around 7K sentences. The data were preprocessed with byte-pair encoding BIBREF22. The vocabulary size is 14,000.",
"We evaluated our approach on the image captioning task. Image captioning is a task that combines image understanding and language generation. We conducted experiments on the Microsoft COCO 2014 dataset BIBREF23. It contains 123,287 images, each of which is paired 5 with descriptive sentences. We report the results and evaluate the image captioning model on the MSCOCO 2014 test set for image captioning. Following previous works BIBREF24, BIBREF25, we used the publicly-available splits provided by BIBREF26. The validation set and test set both contain 5,000 images.",
"Enwiki8 is large-scale dataset for character-level language modeling. It contains 100M bytes of unprocessed Wikipedia texts. The inputs include Latin alphabets, non-Latin alphabets, XML markups and special characters. The vocabulary size 205 tokens, including one for unknown characters. We used the same preprocessing method following BIBREF33. The training set contains 90M bytes of data, and the validation set and the test set contains 5M respectively."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"infinity",
"infinity"
],
"paper_read": [
"no",
"no"
],
"question": [
"What do they mean by explicit selection of most relevant segments?",
"What datasets they used for evaluation?"
],
"question_id": [
"45e6532ac06a59cb6a90624513242b06d7391501",
"a98ae529b47362f917a398015c8525af3646abf0"
],
"question_writer": [
"798ee385d7c8105b83b032c7acc2347588e09d61",
"798ee385d7c8105b83b032c7acc2347588e09d61"
],
"search_query": [
"sparse transformer",
"sparse transformer"
],
"topic_background": [
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Illustration of self-attention in the models. The orange bar denotes the attention score of our proposed model while the blue bar denotes the attention scores of the vanilla Transformer. The orange line denotes the attention between the target word “tim” and the selected top-k positions in the sequence. In the attention of vanilla Transformer, ”tim” assigns too many non-zero attention scores to the irrelevant words. But for the proposal, the top-k largest attention scores removes the distraction from irrelevant words and the attention becomes concentrated.",
"Figure 2: The comparison between the attentions of vanilla Transformer and Explicit Sparse Transformer and the illustration of the attention module of Explicit Sparse Transformer. With the mask based on top-k selection and softmax function, only the most contributive elements are assigned with probabilities.",
"Table 1: Results on the En-De, En-Vi and De-En test sets. Compared with the baseline models, Explicit Sparse Transformer reaches improved performances, and it achieves the state-of-the-art performances in En-Vi and De-En.",
"Table 2: Results on the MSCOCO Karpathy test split.",
"Table 3: Comparison with state-of-the-art results on enwiki8. Explicit Sparse Transformer-XL refers to the Transformer with our sparsification method.",
"Table 4: In the Transformer model, the proposed method, top-k selection before softmax is faster than previous sparse attention methods and is comparable in terms of BLEU scores.",
"Figure 3: Analyse the value of K on IWSLT En-Vi and De-En datasets. ”inf” denotes the special case of the Explicit Sparse Transformer where all positions may be attended, same as the origin Transformer.",
"Table 5: Results of the ablation study of the sparsification at different phases on the En-Vi test set. “Base” denotes vanilla Transformer. “T” denotes only adding the sparsification in the training phase, and “T&P” denotes adding it at both phases as the implementation of Explicit Sparse Transformer does.",
"Figure 4: Figure 4(a) is the attention visualization of Transformer and Figure 4(b) is that of the Explicit Sparse Transformer. The red box shows that the attentions in vanilla Transformer at most steps are concentrated on the last token of the context.",
"Figure 5: Code for the main idea in Pytorch"
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"4-Table1-1.png",
"5-Table2-1.png",
"6-Table3-1.png",
"6-Table4-1.png",
"7-Figure3-1.png",
"7-Table5-1.png",
"8-Figure4-1.png",
"15-Figure5-1.png"
]
} | [
"What do they mean by explicit selection of most relevant segments?",
"What datasets they used for evaluation?"
] | [
[
"1912.11637-Explicit Sparse Transformer-2",
"1912.11637-Introduction-3",
"1912.11637-Explicit Sparse Transformer-1"
],
[
"1912.11637-Results ::: Neural Machine Translation ::: Dataset-0",
"1912.11637-Results ::: Image Captioning ::: Dataset-0",
"1912.11637-Results ::: Language Modeling ::: Dataset-0",
"1912.11637-Results ::: Neural Machine Translation ::: Dataset-1"
]
] | [
"focusing on the top-k segments that contribute the most in terms of correlation to the query",
"For En-De translation the newstest 2014 set from WMT 2014 En-De translation dataset, for En-Vi translation the tst2013 from IWSLT 2015 dataset. and for De-En translation the teste set from IWSLT 2014."
] | 362 |
2002.04326 | ReClor: A Reading Comprehension Dataset Requiring Logical Reasoning | Recent powerful pre-trained language models have achieved remarkable performance on most of the popular datasets for reading comprehension. It is time to introduce more challenging datasets to push the development of this field towards more comprehensive reasoning of text. In this paper, we introduce a new Reading Comprehension dataset requiring logical reasoning (ReClor) extracted from standardized graduate admission examinations. As earlier studies suggest, human-annotated datasets usually contain biases, which are often exploited by models to achieve high accuracy without truly understanding the text. In order to comprehensively evaluate the logical reasoning ability of models on ReClor, we propose to identify biased data points and separate them into EASY set while the rest as HARD set. Empirical results show that the state-of-the-art models have an outstanding ability to capture biases contained in the dataset with high accuracy on EASY set. However, they struggle on HARD set with poor performance near that of random guess, indicating more research is needed to essentially enhance the logical reasoning ability of current models. | {
"paragraphs": [
[
"Machine reading comprehension (MRC) is a fundamental task in Natural Language Processing, which requires models to understand a body of text and answer a particular question related to the context. With success of unsupervised representation learning in NLP, language pre-training based models such as GPT-2 BIBREF0, BERT BIBREF1, XLNet BIBREF2 and RoBERTa BIBREF3 have achieved nearly saturated performance on most of the popular MRC datasets BIBREF4, BIBREF5, BIBREF6, BIBREF7. It is time to challenge state-of-the-art models with more difficult reading comprehension tasks and move a step forward to more comprehensive analysis and reasoning over text BIBREF8.",
"In natural language understanding, logical reasoning is an important ability to examine, analyze and critically evaluate arguments as they occur in ordinary language according to the definition from Law School Admission BIBREF9. It is a significant component of human intelligence and is essential in negotiation, debate and writing etc. However, existing reading comprehension datasets have none or merely a small amount of data requiring logical reasoning, e.g., 0% in MCTest dataset BIBREF10 and 1.2% in SQuAD BIBREF4 according to BIBREF11. One related task is natural language inference, which requires models to label the logical relationships of sentence pairs. However, this task only considers three types of simple logical relationships and only needs reasoning at sentence-level. To push the development of models in logical reasoning from simple logical relationship classification to multiple complicated logical reasoning and from sentence-level to passage-level, it is necessary to introduce a reading comprehension dataset targeting logical reasoning.",
"A typical example of logical reasoning questions is shown in Table TABREF5. Similar to the format of multiple-choice reading comprehension datasets BIBREF10, BIBREF5, it contains a context, a question and four options with only one right answer. To answer the question in this example, readers need to identify the logical connections between the lines to pinpoint the conflict, then understand each of the options and select an option that solves the conflict. Human minds need extensive training and practice to get used to complex reasoning, and it will take immense efforts for crowdsourcing workers to design such logical reasoning questions. Inspired by the datasets extracted from standardized examinations BIBREF5, BIBREF12, we build a dataset by selecting such logical reasoning questions from standardized exams such as GMAT and LSAT . We finally collect 6,138 pieces of logical reasoning questions, which constitute a Reading Comprehension dataset requiring logical reasoning (ReClor).",
"Human-annotated datasets usually contain biases BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, which are often exploited by neural network models as shortcut solutions to achieve high testing accuracy. For data points whose options can be selected correctly without knowing the contexts and questions, we classify them as biased ones. In order to fully assess the logical reasoning ability of the models, we propose to identify the biased data points and group them as EASY set, and put the rest into HARD set. Based on our experiments on these separate sets, we find that even the state-of-the-art models can only perform well on EASY set and struggle on HARD set as shown in Figure FIGREF4. This phenomenon shows that current models can well capture the biases in the dataset but lack the ability to understand the text and reason based on connections between the lines. On the other hand, human beings perform similarly on both the EASY and HARD set. It is thus observed that there is still a long way to go to equip models with true logical reasoning ability.",
"The contributions of our paper are two-fold. First, we introduce ReClor, a new reading comprehension dataset requiring logical reasoning. We use option-only-input baselines trained with different random seeds to identify the data points with biases in the testing set, and group them as EASY set, with the rest as HARD set to facilitate comprehensive evaluation. Second, we evaluate several state-of-the-art models on ReClor and find these pre-trained language models can perform well on EASY set but struggle on the HARD set. This indicates although current models are good at exploiting biases in the dataset, they are far from capable of performing real logical reasoning yet.",
"",
""
],
[
"Reading Comprehension Datasets. A variety of reading comprehension datasets have been introduced to promote the development of this field. MCTest BIBREF10 is a dataset with 2,000 multiple-choice reading comprehension questions about fictional stories in the format similar to ReClor. BIBREF4 proposed SQuAD dataset, which contains 107,785 question-answer pairs on 536 Wikipedia articles. The authors manually labeled 192 examples of the dataset and found that the examples mainly require reasoning of lexical or syntactic variation. In an analysis of the above-mentioned datasets, BIBREF11 found that none of questions requiring logical reasoning in MCTest dataset BIBREF10 and only 1.2% in SQuAD dataset BIBREF4. BIBREF5 introduced RACE dataset by collecting the English exams for middle and high school Chinese students in the age range between 12 to 18. They hired crowd workers on Amazon Mechanical Turk to label the reasoning type of 500 samples in the dataset and show that around 70 % of the samples are in the category of word matching, paraphrasing or single-sentence reasoning. To encourage progress on deeper comprehension of language, more reading comprehension datasets requiring more complicated reasoning types are introduced, such as iterative reasoning about the narrative of a story BIBREF20, multi-hop reasoning across multiple sentences BIBREF21 and multiple documents BIBREF22, commonsense knowledge reasoning BIBREF23, BIBREF24, BIBREF25 and numerical discrete reasoning over paragraphs BIBREF8. However, to the best of our knowledge, although there are some datasets targeting logical reasoning in other NLP tasks mentioned in the next section, there is no dataset targeting evaluating logical reasoning in reading comprehension task. This work introduces a new dataset to fill this gap.",
"Logical Reasoning in NLP. There are several tasks and datasets introduced to investigate logical reasoning in NLP. The task of natural language inference, also known as recognizing textual entailment BIBREF26, BIBREF27, BIBREF28, BIBREF29, BIBREF30 requires models to take a pair of sentence as input and classify their relationship types, i.e., Entailment, Neutral, or Contradiction. SNLI BIBREF31 and MultiNLI BIBREF32 datasets are proposed for this task. However, this task only focuses on sentence-level logical relationship reasoning and the relationships are limited to only a few types. Another task related to logical reasoning in NLP is argument reasoning comprehension task introduced by BIBREF33 with a dataset of this task. Given an argument with a claim and a premise, this task aims to select the correct implicit warrant from two options. Although the task is on passage-level logical reasoning, it is limited to only one logical reasoning type, i.e., identifying warrants. ReClor and the proposed task integrate various logical reasoning types into reading comprehension, with the aim to promote the development of models in logical reasoning not only from sentence-level to passage-level, but also from simple logical reasoning types to the complicated diverse ones.",
"Datasets from Examinations. There have been several datasets extracted from human standardized examinations in NLP, such as RACE dataset BIBREF5 mentioned above. Besides, NTCIR QA Lab BIBREF34 offers comparative evaluation for solving real-world university entrance exam questions; The dataset of CLEF QA Entrance Exams Task BIBREF35 is extracted from standardized English examinations for university admission in Japan; ARC dataset BIBREF12 consists of 7,787 science questions targeting student grade level, ranging from 3rd grade to 9th; The dialogue-based multiple-choice reading comprehension dataset DREAM BIBREF36 contains 10,197 questions for 6,444 multi-turn multi-party dialogues from English language exams that are designed by human experts to assess the comprehension level of Chinese learners of English. Compared with these datasets, ReClor distinguishes itself by targeting logical reasoning.",
""
],
[
""
],
[
"The format of data in ReClor is similar to other multiple-choice reading comprehension datasets BIBREF10, BIBREF5, where a data point contains a context, a question and four answer options, among which only one option is right/most suitable. We collect reading comprehension problems that require complicated logical reasoning. However, producing such data requires the ability to perform complex logical reasoning, which makes it hard for crowdsourcing workers to generate such logical questions. Fortunately, we find the reading comprehension problems in some standardized tests, such as GMAT and LSAT, are highly in line with our expectation.",
"We construct a dataset containing 6,138 logical reasoning questions sourced from open websites and books. In the original problems, there are five answer options in which only one is right. To comply with fair use of law, we shuffle the order of answer options and randomly delete one of the wrong options for each data point, which results in four options with one right option and three wrong options. Furthermore, similar to ImageNet dataset, ReClor is available for non-commercial research purpose only. We are also hosting a public evaluation server on EvalAI BIBREF37 to benchmark progress on Reclor."
],
[
"As mentioned above, we collect 6,138 data points, in which 91.22% are from actual exams of GMAT and LSAT while others are from high-quality practice exams. They are divided into training set, validation set and testing set with 4,638, 500 and 1,000 data points respectively. The overall statistics of ReClor and comparison with other similar multiple-choice MRC datasets are summarized in Table TABREF9. As shown, ReClor is of comparable size and relatively large vocabulary size. Compared with RACE, the length of the context of ReCor is much shorter. In RACE, there are many redundant sentences in context to answer a question. However, in ReClor, every sentence in the context passages is important, which makes this dataset focus on evaluating the logical reasoning ability of models rather than the ability to extract relevant information from a long context. The length of answer options of ReClor is largest among these datasets. We analyze and manually annotate the types of questions on the testing set and group them into 17 categories, whose percentages and descriptions are shown in Table TABREF11. The percentages of different types of questions reflect those in the logical reasoning module of GMAT and LSAT. Some examples of different types of logical reasoning are listed in Figure FIGREF12, and more examples are listed in the Appendix . Taking two examples, we further express how humans would solve such questions in Table TABREF13, showing the challenge of ReClor."
],
[
"The dataset is collected from exams devised by experts in logical reasoning, which means it is annotated by humans and may introduce biases in the dataset. Recent studies have shown that models can utilize the biases in a dataset of natural language understanding to perform well on the task without truly understanding the text BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18. It is necessary to analyze such data biases to help evaluate models. In the ReClor dataset, the common context and question are shared across the four options for each data point, so we focus on the analysis of the difference in lexical choice and sentence length of the right and wrong options without contexts and questions. We first investigate the biases of lexical choice. We lowercase the options and then use WordPiece tokenization BIBREF39 of BERT$_{\\small \\textsc {BASE}}$ BIBREF1 to get the tokens. Similar to BIBREF16, for the tokens in options, we analyze their conditional probability of label $l \\in \\lbrace \\mathrm {right, wrong}\\rbrace $ given by the token $t$ by $p(l|t) =count(t, l) / count(t)$. The larger the correlation score is for a particular token, the more likely it contributes to the prediction of related option. Table SECREF14 reports tokens in training set which occur at least twenty times with the highest scores since many of the tokens with the highest scores are of low frequency. We further analyze the lengths of right and wrong options BIBREF17 in training set. We notice a slight difference in the distribution of sentence length for right and wrong options. The average length for wrong options is around 21.82 whereas that for right options is generally longer with an average length of 23.06.",
"[Table] Top 10 tokens that correlate to right options with more than 20 occurrences. [Figure] The distribution of the option length in ReClor with respect to right and wrong labels."
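A rough sketch of this bias probe is given below; it assumes a HuggingFace WordPiece tokenizer and counts raw token occurrences, and the function name and exact counting convention are our assumptions rather than the authors' released code:

```python
from collections import Counter
from transformers import AutoTokenizer

def token_label_correlation(option_texts, is_right_labels, min_count=20):
    """Estimate p(right | token) = count(token, right) / count(token)
    over answer-option texts, as a crude probe for lexical biases."""
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # WordPiece
    total, right = Counter(), Counter()
    for text, is_right in zip(option_texts, is_right_labels):
        for token in tokenizer.tokenize(text.lower()):
            total[token] += 1
            if is_right:
                right[token] += 1
    scores = {t: right[t] / total[t] for t in total if total[t] >= min_count}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```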
],
[
"Many neural network based models such as FastText BIBREF40, Bi-LSTM, GPT BIBREF41, GPT-2 BIBREF0, BERT BIBREF1, XLNet BIBREF2, RoBERTa BIBREF3 have achieved impressive results in various NLP tasks. We challenge these neural models with ReClor to investigate how well they can perform. Details of the baseline models and implementation are shown in the Appendix and ."
],
[
"As mentioned earlier, biases prevalently exist in human-annotated datasets BIBREF16, BIBREF17, BIBREF18, BIBREF42, which are often exploited by models to perform well without truly understanding the text. Therefore, it is necessary to find out the biased data points in ReClor in order to evaluate models in a more comprehensive manner BIBREF43. To this end, we feed the five strong baseline models (GPT, GPT-2, BERT$_{\\small \\textsc {BASE}}$, XLNet$_{\\small \\textsc {BASE}}$ and RoBERTa$_{\\small \\textsc {BASE}}$) with ONLY THE ANSWER OPTIONS for each problem. In other words, we purposely remove the context and question in the inputs. In this way, we are able to identify those problems that can be answered correctly by merely exploiting the biases in answer options without knowing the relevant context and question. However, the setting of this task is a multiple-choice question with 4 probable options, and even a chance baseline could have 25% probability to get it right. To eliminate the effect of random guess, we set four different random seeds for each model and pick the data points that are predicted correctly in all four cases to form the EASY set. Then, the data points which are predicted correctly by the models at random could be nearly eliminated, since any data point only has a probability of $(25\\%)^{4}=0.39\\%$ to be guessed right consecutively for four times. Then we unite the sets of data points that are consistently predicted right by each model, because intuitively different models may learn different biases of the dataset. The above process is formulated as the following expression,",
"where $_{\\mathrm {BERT}}^{\\mathrm {seed_1}}$ denotes the set of data points which are predicted correctly by BERT$_{\\small \\textsc {BASE}}$ with seed 1, and similarly for the rest. Table TABREF18 shows the average performance for each model trained with four different random seeds and the number of data points predicted correctly by all of them. Finally, we get 440 data points from the testing set $_{\\mathrm {TEST}}$ and we denote this subset as EASY set $_{\\mathrm {EASY}}$ and the other as HARD set $_{\\mathrm {HARD}}$."
],
[
"Among multiple-choice reading comprehension or QA datasets from exams, although the size of ReClor is comparable to those of ARC BIBREF12 and DREAM BIBREF36, it is much smaller than RACE BIBREF5. Recent studies BIBREF44, BIBREF45, BIBREF25, BIBREF46 have shown the effectiveness of pre-training on similar tasks or datasets then fine-tuning on the target dataset for transfer learning. BIBREF46 find that by first training on RACE BIBREF5 and then further fine-tuning on the target dataset, the performances of BERT$_{\\small \\textsc {BASE}}$ on multiple-choice dataset MC500 BIBREF10 and DREAM BIBREF36 can significantly boost from 69.5% to 81.2%, and from 63.2% to 70.2%, respectively. However, they also find that the model cannot obtain significant improvement even performs worse if it is first fine-tuned on span-based dataset like SQuAD BIBREF4. ReClor is a multiple-choice dataset, so we choose RACE for fine-tuning study."
],
[
"The performance of all tested models on the ReClor is presented in Table TABREF21. This dataset is built on questions designed for students who apply for admission to graduate schools, thus we randomly choose 100 samples from the testing set and divide them into ten tests, which are distributed to ten different graduate students in a university. We take the average of their scores and present it as the baseline of graduate students. The data of ReClor are carefully chosen and modified from only high-quality questions from standardized graduate entrance exams. We set the ceiling performance to 100% since ambiguous questions are not included in the dataset.",
"The performance of fastText is better than random guess, showing that word correlation could be used to help improve performance to some extent. It is difficult for Bi-LSTM to converge on this dataset. Transformer-based pre-training models have relatively good performance, close to the performance of graduate students. However, we find that these models only perform well on EASY set with around 75% accuracy, showing these models have an outstanding ability to capture the biases of the dataset, but they perform poorly on HARD set with only around 30% accuracy. In contrast, humans can still keep good performance on HARD set. We notice the difference in testing accuracy performed by graduate students on EASY and HARD set, but this could be due to the small number of students participated in the experiments. Therefore, we say humans perform relatively consistent on both biased and non-biased dataset.",
"It is noticed that if the models are first trained on RACE and then fine-tuned on ReClor, they could obtain significant improvement, especially on HARD set. The overall performance of RoBERTa$_{\\small \\textsc {LARGE}}$ is even better than that of graduate students. This similar phenomenon can also be observed on DREAM dataset BIBREF36 by BIBREF46, which shows the potential of transfer learning for reasoning tasks. However, even after fine-tuning on RACE, the best performance of these strong baselines on HARD set is around 50%, still lower than that of graduate students and far away from ceiling performance.",
"Experiments in different input settings are also done. Compared with the input setting of answer options only (A), the setting of questions and answer options (Q, A) can not bring significant improvement. This may be because some questions e.g., Which one of the following is an assumption required by the argument?, Which one of the following, if true, most strengthens the argument? can be used in the same reasoning types of question, which could not offer much information. Further adding context causes significant boost, showing the high informativeness of the context.",
"We further analyze the model performance with respect to different question types of logical reasoning. Some results are shown in Figure FIGREF22 and the full results are shown in Figure , and in the Appendix . Three models of BERT$_{\\small \\textsc {LARGE}}$, XLNet$_{\\small \\textsc {LARGE}}$ and RoBERTa$_{\\small \\textsc {LARGE}}$ perform well on most of types. On HARD set, the three models perform poorly on certain types such as Strengthen, Weaken and Role which require extensive logical reasoning. However, they perform relatively better on other certain types, such as Conclusion/Main Point and Match Structures that are more straight-forward. For the result of transfer learning, we analyze XLNet$_{\\small \\textsc {LARGE}}$ in detail. Though the overall performance is significantly boosted after fine-tuning on RACE first, the histograms in the bottom of Figure FIGREF22 show that on EASY set, accuracy of the model with fine-tuning on RACE is similar to that without it among most question types, while on HARD set, significant improvement on some question types is observed, such as Conclusion/Main Point and Most Strongly Supported. This may be because these types require less logical reasoning to some extent compared with other types, and similar question types may also be found in RACE dataset. Thus, the pre-training on RACE helps enhance the ability of logical reasoning especially of relatively simple reasoning types, but more methods are still needed to further enhance the ability especially that of relatively complex reasoning types."
],
[
"In this paper, we introduce ReClor, a reading comprehension dataset requiring logical reasoning, with the aim to push research progress on logical reasoning in NLP forward from sentence-level to passage-level and from simple logical reasoning to multiple complicated one. We propose to identify biased data points and split the testing set into EASY and HARD group for biased and non-biased data separately. We further empirically study the different behaviors of state-of-the-art models on these two testing sets, and find recent powerful transformer-based pre-trained language models have an excellent ability to exploit the biases in the dataset but have difficulty in understanding and reasoning given the non-biased data with low performance close to or slightly better than random guess. These results show there is a long way to equip deep learning models with real logical reasoning abilities. We hope this work would inspire more research in future to adopt similar split technique and evaluation scheme when reporting their model performance. We also show by first fine-tuning on a large-scale dataset RACE then fine-tuning on ReClor, the models could obtain significant improvement, showing the potential of transfer learning to solve reasoning tasks."
],
[
"We would like to thank three anonymous reviews for their insightful comments and suggestions; thank Rishabh Jain from Georgia Tech for helping build up the leaderboard of ReClor on EvalAI. Jiashi Feng was partially supported by NUS IDS R-263-000-C67-646, ECRA R-263-000-C87-133, MOE Tier-II R-263-000-D17-112 and AI.SG R-263-000-D97-490."
]
],
"section_name": [
"Introduction",
"Related Work",
"ReClor Data Collection and Analysis",
"ReClor Data Collection and Analysis ::: Data collection",
"ReClor Data Collection and Analysis ::: Data analysis",
"ReClor Data Collection and Analysis ::: Data Biases in the Dataset",
"Experiments ::: Baseline Models",
"Experiments ::: Experiments to Find Biased Data",
"Experiments ::: Transfer learning Through Fine-tuning",
"Experiments ::: Results and Analysis",
"Conclusion",
"Conclusion ::: Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"4a100f80305bc4d632d96416d83262b20c85f40e",
"f5078c09a30b7cd1c253909a0337e8ae2eca0feb"
],
"answer": [
{
"evidence": [
"We construct a dataset containing 6,138 logical reasoning questions sourced from open websites and books. In the original problems, there are five answer options in which only one is right. To comply with fair use of law, we shuffle the order of answer options and randomly delete one of the wrong options for each data point, which results in four options with one right option and three wrong options. Furthermore, similar to ImageNet dataset, ReClor is available for non-commercial research purpose only. We are also hosting a public evaluation server on EvalAI BIBREF37 to benchmark progress on Reclor."
],
"extractive_spans": [
"6,138 logical reasoning questions"
],
"free_form_answer": "",
"highlighted_evidence": [
"We construct a dataset containing 6,138 logical reasoning questions sourced from open websites and books. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"A typical example of logical reasoning questions is shown in Table TABREF5. Similar to the format of multiple-choice reading comprehension datasets BIBREF10, BIBREF5, it contains a context, a question and four options with only one right answer. To answer the question in this example, readers need to identify the logical connections between the lines to pinpoint the conflict, then understand each of the options and select an option that solves the conflict. Human minds need extensive training and practice to get used to complex reasoning, and it will take immense efforts for crowdsourcing workers to design such logical reasoning questions. Inspired by the datasets extracted from standardized examinations BIBREF5, BIBREF12, we build a dataset by selecting such logical reasoning questions from standardized exams such as GMAT and LSAT . We finally collect 6,138 pieces of logical reasoning questions, which constitute a Reading Comprehension dataset requiring logical reasoning (ReClor)."
],
"extractive_spans": [
"6,138 pieces of logical reasoning questions"
],
"free_form_answer": "",
"highlighted_evidence": [
"Inspired by the datasets extracted from standardized examinations BIBREF5, BIBREF12, we build a dataset by selecting such logical reasoning questions from standardized exams such as GMAT and LSAT . We finally collect 6,138 pieces of logical reasoning questions, which constitute a Reading Comprehension dataset requiring logical reasoning (ReClor)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"ea4394112c1549185e6b763d6f36733a9f2ed794",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
]
},
{
"annotation_id": [
"1cec682edac5cac196a0c9e6f6fe362f00ba93f4",
"5c9f0b7e83c5588b4f7e21119dc8881e565a126e"
],
"answer": [
{
"evidence": [
"As mentioned earlier, biases prevalently exist in human-annotated datasets BIBREF16, BIBREF17, BIBREF18, BIBREF42, which are often exploited by models to perform well without truly understanding the text. Therefore, it is necessary to find out the biased data points in ReClor in order to evaluate models in a more comprehensive manner BIBREF43. To this end, we feed the five strong baseline models (GPT, GPT-2, BERT$_{\\small \\textsc {BASE}}$, XLNet$_{\\small \\textsc {BASE}}$ and RoBERTa$_{\\small \\textsc {BASE}}$) with ONLY THE ANSWER OPTIONS for each problem. In other words, we purposely remove the context and question in the inputs. In this way, we are able to identify those problems that can be answered correctly by merely exploiting the biases in answer options without knowing the relevant context and question. However, the setting of this task is a multiple-choice question with 4 probable options, and even a chance baseline could have 25% probability to get it right. To eliminate the effect of random guess, we set four different random seeds for each model and pick the data points that are predicted correctly in all four cases to form the EASY set. Then, the data points which are predicted correctly by the models at random could be nearly eliminated, since any data point only has a probability of $(25\\%)^{4}=0.39\\%$ to be guessed right consecutively for four times. Then we unite the sets of data points that are consistently predicted right by each model, because intuitively different models may learn different biases of the dataset. The above process is formulated as the following expression,"
],
"extractive_spans": [
"we feed the five strong baseline models (GPT, GPT-2, BERT$_{\\small \\textsc {BASE}}$, XLNet$_{\\small \\textsc {BASE}}$ and RoBERTa$_{\\small \\textsc {BASE}}$) with ONLY THE ANSWER OPTIONS for each problem",
" identify those problems that can be answered correctly by merely exploiting the biases in answer options without knowing the relevant context and question"
],
"free_form_answer": "",
"highlighted_evidence": [
"As mentioned earlier, biases prevalently exist in human-annotated datasets BIBREF16, BIBREF17, BIBREF18, BIBREF42, which are often exploited by models to perform well without truly understanding the text. Therefore, it is necessary to find out the biased data points in ReClor in order to evaluate models in a more comprehensive manner BIBREF43. To this end, we feed the five strong baseline models (GPT, GPT-2, BERT$_{\\small \\textsc {BASE}}$, XLNet$_{\\small \\textsc {BASE}}$ and RoBERTa$_{\\small \\textsc {BASE}}$) with ONLY THE ANSWER OPTIONS for each problem. In other words, we purposely remove the context and question in the inputs. In this way, we are able to identify those problems that can be answered correctly by merely exploiting the biases in answer options without knowing the relevant context and question. However, the setting of this task is a multiple-choice question with 4 probable options, and even a chance baseline could have 25% probability to get it right. To eliminate the effect of random guess, we set four different random seeds for each model and pick the data points that are predicted correctly in all four cases to form the EASY set. Then, the data points which are predicted correctly by the models at random could be nearly eliminated, since any data point only has a probability of $(25\\%)^{4}=0.39\\%$ to be guessed right consecutively for four times. Then we unite the sets of data points that are consistently predicted right by each model, because intuitively different models may learn different biases of the dataset."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The dataset is collected from exams devised by experts in logical reasoning, which means it is annotated by humans and may introduce biases in the dataset. Recent studies have shown that models can utilize the biases in a dataset of natural language understanding to perform well on the task without truly understanding the text BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18. It is necessary to analyze such data biases to help evaluate models. In the ReClor dataset, the common context and question are shared across the four options for each data point, so we focus on the analysis of the difference in lexical choice and sentence length of the right and wrong options without contexts and questions. We first investigate the biases of lexical choice. We lowercase the options and then use WordPiece tokenization BIBREF39 of BERT$_{\\small \\textsc {BASE}}$ BIBREF1 to get the tokens. Similar to BIBREF16, for the tokens in options, we analyze their conditional probability of label $l \\in \\lbrace \\mathrm {right, wrong}\\rbrace $ given by the token $t$ by $p(l|t) =count(t, l) / count(t)$. The larger the correlation score is for a particular token, the more likely it contributes to the prediction of related option. Table SECREF14 reports tokens in training set which occur at least twenty times with the highest scores since many of the tokens with the highest scores are of low frequency. We further analyze the lengths of right and wrong options BIBREF17 in training set. We notice a slight difference in the distribution of sentence length for right and wrong options. The average length for wrong options is around 21.82 whereas that for right options is generally longer with an average length of 23.06."
],
"extractive_spans": [],
"free_form_answer": "They identify biases as lexical choice and sentence length for right and wrong answer options in an isolated context, without the question and paragraph context that typically precedes answer options. Lexical choice was identified by calculating per-token correlation scores with \"right\" and \"wrong labels. They calculated the average sentence length for \"right\" and \"wrong\" sentences.",
"highlighted_evidence": [
"In the ReClor dataset, the common context and question are shared across the four options for each data point, so we focus on the analysis of the difference in lexical choice and sentence length of the right and wrong options without contexts and questions. ",
"Similar to BIBREF16, for the tokens in options, we analyze their conditional probability of label $l \\in \\lbrace \\mathrm {right, wrong}\\rbrace $ given by the token $t$ by $p(l|t) =count(t, l) / count(t)$. The larger the correlation score is for a particular token, the more likely it contributes to the prediction of related option. ",
"We notice a slight difference in the distribution of sentence length for right and wrong options. The average length for wrong options is around 21.82 whereas that for right options is generally longer with an average length of 23.06.",
"They focused on the"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"ea4394112c1549185e6b763d6f36733a9f2ed794"
]
}
],
"nlp_background": [
"two",
"two"
],
"paper_read": [
"no",
"no"
],
"question": [
"How big is this dataset?",
"How are biases identified in the dataset?"
],
"question_id": [
"6371c6863fe9a14bf67560e754ce531d70de10ab",
"28a8a1542b45f67674a2f1d54fff7a1e45bfad66"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Performance comparison of state-of-the-art models and humans (graduate students) on EASY and HARD set of ReClor testing set.",
"Table 1: An example in the ReClor dataset which is modified from the Law School Admission Council (2019b).",
"Table 2: Statistics of several multiple-choice MRC datasets.",
"Table 3: The percentage and description of each logical reasoning type. The descriptions are adapted from those specified by Khan Academy (2019).",
"Figure 3: The distribution of the option length in ReClor with respect to right and wrong labels.",
"Figure 2: Examples of some question types. The correct options are marked by X. More examples are shown in the Appendix C.",
"Table 6: Average accuracy of each model using four different random seeds with only answer options as input, and the number of their common correct predictions.",
"Table 7: Accuracy (%) of models and human performance. The column Input means whether to input context (C), question (Q) and answer options (A). The RACE column represents whether to first use RACE to fine-tune before training on ReClor.",
"Figure 4: Performance of models on EASY (left) and HARD (right) testing sets and that of models. XLNetLARGE +Fine-Tune means the model is first fine-tuned on RACE before training on ReClor.",
"Table 8: Input formats of different models. Context, Question and Option represent the token sequences of the context, question and option respectively, and || denotes concatenation.",
"Table 9: Hyperparameters for finetuning pre-training language models on ReClor",
"Table 27: The definition and an example of the logical reasoning type - Others",
"Table 28: Overlap of each pair of models after intersection among 4 random seeds.",
"Figure 5: Accuracy of all baseline models on overall testing set",
"Figure 6: Accuracy of all baseline models on EASY set of testing set",
"Figure 7: Accuracy of all baseline models on HARD set of testing set",
"Figure 8: Performance of BERTLARGE (top) and RoBERTaLARGE (bottom) on EASY (left) and HARD (right) testing sets."
],
"file": [
"2-Figure1-1.png",
"3-Table1-1.png",
"4-Table2-1.png",
"5-Table3-1.png",
"5-Figure3-1.png",
"7-Figure2-1.png",
"8-Table6-1.png",
"9-Table7-1.png",
"10-Figure4-1.png",
"15-Table8-1.png",
"16-Table9-1.png",
"22-Table27-1.png",
"22-Table28-1.png",
"23-Figure5-1.png",
"24-Figure6-1.png",
"25-Figure7-1.png",
"26-Figure8-1.png"
]
} | [
"How are biases identified in the dataset?"
] | [
[
"2002.04326-Experiments ::: Experiments to Find Biased Data-0",
"2002.04326-ReClor Data Collection and Analysis ::: Data Biases in the Dataset-0"
]
] | [
"They identify biases as lexical choice and sentence length for right and wrong answer options in an isolated context, without the question and paragraph context that typically precedes answer options. Lexical choice was identified by calculating per-token correlation scores with \"right\" and \"wrong labels. They calculated the average sentence length for \"right\" and \"wrong\" sentences."
] | 364 |
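
The bias analysis described in the ReClor record above reduces to a per-token conditional probability, $p(l|t) = count(t, l) / count(t)$, computed over the answer options alone, plus a comparison of average option lengths for right and wrong options. The Python sketch below illustrates that computation under stated assumptions: the example format (an "options" list of four strings plus an integer "label" index) and the whitespace tokenizer are placeholders for illustration, not the dataset's actual schema or the WordPiece tokenizer used in the paper.

from collections import Counter

def option_bias_stats(examples, min_count=20):
    # Per-token correlation with the right/wrong labels and average option length.
    # `examples`: list of dicts with an "options" list and an integer "label" index
    # (a hypothetical format; the released dataset may differ).
    token_total, token_right = Counter(), Counter()
    lengths = {"right": [], "wrong": []}
    for ex in examples:
        for i, option in enumerate(ex["options"]):
            tokens = option.lower().split()  # stand-in for WordPiece tokenization
            is_right = (i == ex["label"])
            lengths["right" if is_right else "wrong"].append(len(tokens))
            for token in tokens:
                token_total[token] += 1
                if is_right:
                    token_right[token] += 1
    # p(right | t) = count(t, right) / count(t), keeping reasonably frequent tokens only.
    scores = {t: token_right[t] / c for t, c in token_total.items() if c >= min_count}
    avg_len = {k: sum(v) / max(len(v), 1) for k, v in lengths.items()}
    return scores, avg_len
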
1911.04128 | A hybrid text normalization system using multi-head self-attention for mandarin | In this paper, we propose a hybrid text normalization system using multi-head self-attention. The system combines the advantages of a rule-based model and a neural model for text preprocessing tasks. Previous studies in Mandarin text normalization usually use a set of hand-written rules, which are hard to improve on general cases. The idea of our proposed system is motivated by the neural models from recent studies and has a better performance on our internal news corpus. This paper also includes different attempts to deal with imbalanced pattern distribution of the dataset. Overall, the performance of the system is improved by over 1.5% on sentence-level and it has a potential to improve further. | {
"paragraphs": [
[
"Text Normalization (TN) is a process to transform non-standard words (NSW) into spoken-form words (SFW) for disambiguation. In Text-To-Speech (TTS), text normalization is an essential procedure to normalize unreadable numbers, symbols or characters, such as transforming “$20” to “twenty dollars” and “@” to “at”, into words that can be used in speech synthesis. The surrounding context is the determinant for ambiguous cases in TN. For example, the context will decide whether to read “2019” as year or a number, and whether to read “10:30” as time or the score of a game. In Mandarin, some cases depend on language habit instead of rules- “2” can either be read as “r” or “ling” and “1” as “y” or “yo”.",
"Currently, based on the traditional taxonomy approach for NSWBIBREF0, the Mandarin TN tasks are generally resolved by rule-based systems which use keywords and regular expressions to determine the SFW of ambiguous wordsBIBREF1, BIBREF2. These systems typically classify NSW into different pattern groups, such as abbreviations, numbers, etc., and then into sub-groups, such as phone number, year, etc., which has corresponding NSW-SFW transformations. ZhouBIBREF3 and JiaBIBREF4 proposed systems which use maximum entropy (ME) to further disambiguate the NSW with multiple pattern matches. For the NSW given the context constraints, the highest probability corresponds to the highest entropy. LiouBIBREF5 proposed a system of data-driven models which combines a rule-based and a keyword-based TN module. The second module classifies preceding and following words around the keywords and then trains a CRF model to predict the NSW patterns based on the classification results. There are some other hybrid systemsBIBREF6, BIBREF7 which use NLP models and rules separately to help normalize hard cases in TN.",
"For recent NLP studies, sequence-to-sequence (seq2seq) models have achieved impressive progress in TN tasks in English and RussianBIBREF8, BIBREF9. Seq2seq models typically encode sequences into a state vector, which is decoded into an output vector from its learnt vector representation and then to a sequence. Different seq2seq models with bi-LSTM, bi-GRU with attention are proposed in BIBREF9, BIBREF10. Zhang and Sproat proposed a contextual seq2seq model, which uses a sliding-window and RNN with attentionBIBREF8. In this model, bi-directional GRU is used in both encoder and decoder, and the context words are labeled with “$\\langle $self$\\rangle $”, helping the model distinguish the NSW and the context.",
"However, seq2seq models have several downsides when directly applied in Mandarin TN tasks. As mentioned in BIBREF8, the sequence output directly from a seq2seq model can lead to unrecoverable errors. The model sometimes changes the context words which should be kept the same. Our experiments produce similar errors. For example, “Podnieks, Andrew 2000” is transformed to “Podncourt, Andrew Two Thousand”, changing “Podnieks” to “Podncourt”. These errors cannot be detected by the model itself. In BIBREF11, rules are applied to two specific categories to resolve silly errors, but this method is hard to apply to all cases. Another challenge in Mandarin is the word segmentation since words are not separated by spaces and the segmentation could depend on the context. Besides, some NSW may have more than one SFW in Mandarin, making the seq2seq model hard to train. For example, “UTF8gbsn两千零八年” and “UTF8gbsn二零零八年” are both acceptable SFW for “2008UTF8gbsn年”. The motivation of this paper is to combine the advantages of a rule-based model for its flexibility and a neural model to enhance the performance on more general cases. To avoid the problems of seq2seq models, we consider the TN task as a multi-class classification problem with carefully designed patterns for the neural model.",
"The contributions of this paper include the following. First, this is the first known TN system for Mandarin which uses a neural model with multi-head self-attention. Second, we propose a hybrid system combining a rule-based model and a neural model. Third, we experiment with different approaches to deal with imbalanced dataset in the TN task.",
"The paper is organized as follows. Section SECREF2 introduces the detailed structure of the proposed hybrid system and its training and inference. In Section SECREF3, the performance of different system configurations is evaluated on different datasets. And the conclusion is given in Section SECREF4."
],
[
"The rule-based TN model can handle the TN task alone and is the baseline in our experiments. It has the same idea as in BIBREF8 but has a more complicated system of rules with priorities. The model contains 45 different groups and about 300 patterns as sub-groups, each of which uses a keyword with regular expressions to match the preceding and following texts. Each pattern also has a priority value. During normalization, each sentence is fed as input and the NSW will be matched by the regular expressions. The model tries to match patterns with longer context and slowly decrease the context length until a match is found. If there are multiple pattern matches with the same length, the one with a higher priority will be chosen for the NSW. The model has been developed on abundant test data and bad cases. The advantage of the rule-based system is the flexibility, since one can simply add more special cases when they appear, such as new units. However, improving the performance of this system on more general cases becomes a bottleneck. For example, in a report of a football game, it cannot transform “1-3” to score if there are no keywords like “score” or “game” close to it."
],
[
"We propose a hybrid TN system as in Fig. FIGREF3, which combines the rule-based model and a neural model to make up the shortcomings of one another. The system inputs are raw texts. The NSW are first extracted from the original text using regular expressions. We only extract NSW that are digit and symbol related, and other NSW like English abbreviations will be processed in the rule-based model. Then the system performs a priority check on the NSW, and all matched strings will be sent into the rule-based model. The priority patterns include definite NSW such as “911” and other user-defined strings. Then the remaining patterns are passed through the neural model to be classified into one of the pattern groups. Before normalizing the classified NSW in the pattern reader, the format of each classified NSW is checked with regular expressions, and the illegal ones will be filtered back to the rule-based system. For example, classifying “10%” to read as year is illegal. In the pattern reader, each pattern label has a unique process function to perform the NSW-SFW transformation. Finally, all of the normalized SFW are inserted back to the text segmentations to form the output sentences.",
"Multi-head self-attention was proposed by GoogleBIBREF12 in the model of transformer. The model uses self-attention in the encoder and decoder and encoder-decoder attention in between. Motivated by the structure of transformer, multi-head self-attention is adopted in our neural model and the structure is shown in Fig. FIGREF4. Compared with other modules like LSTM and GRU, self-attention can efficiently extract the information of the NSW with all context in parallel and is fast to train. The core part of the neural model is similar to the encoder of a transformer. The inputs of the model are the sentences with their manually labeled NSW. We take a 30-character context window around each NSW and send it to the embedding layer. Padding is used when the window exceeds the sentence range. After 8 heads of self-attention, the model outputs a vector with dimension of the number of patterns. Finally, the highest masked softmax probability is chosen as the classified pattern group. The mask uses a regular expression to check if the NSW contain symbols and filters illegal ones such as classifying “12:00” as pure number, which is like a bi-class classification before softmax is applied.",
"For the loss function, in order to solve the problem of imbalanced dataset, which will be talked about in SECREF7, the final selection of the loss function is motivated by BIBREF13:",
"where $\\alpha _t$ and $\\gamma $ are hyper-parameters, $p$'s are the pattern probabilities after softmax, and $y$ is the correctness of the prediction. In our experiment, we choose $\\alpha _t=0.5$ and $\\gamma =4$."
],
[
"The neural TN model is trained alone with inputs of labeled sentences and outputs of pattern groups. And the inference is on the entire hybrid TN system in Fig1, which takes the original text with NSW as input and text with SFW as output.",
"The training data is split into 36 different classes, each of which has its own NSW-SFW transformation. The distribution of the dataset is the same with the NSW in our internal news corpus and is imbalanced, which is one of the challenges for our neural model. The approaches to deal with the imbalanced dataset are discussed in the next section."
],
[
"The training dataset contains 100,747 pattern labels. The texts are in Mandarin with a small proportion of English characters. The patterns are digit or symbol related, and patterns like English abbreviations are not included in the training labels. There are 36 classes in total, and some examples are listed in Table TABREF8. The first 8 are patterns with digits and symbols, and there could be substitutions among “$\\sim $”, “-”, “—” and “:” in a single group. The last 2 patterns are language related- “1” and “2” have different pronunciations based on language habit in Mandarin. Fig. FIGREF9 is a pie chart of the training label distribution. Notice that the top 5 patterns take up more than 90% of all labels, which makes the dataset imbalanced.",
"Imbalanced dataset is a challenge for the task because the top patterns are taking too much attention so that most weights might be determined by the easier ones. We have tried different methods to deal with this problem. The first method is data expansion using oversampling. Attempts include duplicating the text with low pattern proportion, replacing first few characters with paddings in the window, randomly changing digits, and shifting the context window. The other method is to add loss control in the model as mentioned in SECREF2. The loss function helps the model to focus on harder cases in different classes and therefore reduce the impact of the imbalanced data. The experimental results are in SECREF11."
],
[
"For sentence embedding, pre-trained embedding models are used to boost training. We experiment on a word-to-vector (w2v) model trained on Wikipedia corpus and a trained Bidirectional Encoder Representations from Transformers (BERT) model. The experimental result is in SECREF11.",
"The experiments show that using a fixed context window achieves better performance than padding to the maximum length of all sentences. And padding with 1's gives a slightly better performance than with 0's. During inference, all NSW patterns in one sentence need to be processed simultaneously before transforming to SFW to keep their original context."
],
[
"Table TABREF12 compares the highest pattern accuracy on the test set of 7 different neural model setups. Model 2-7's configuration differences are compared with Model 1: 1) proposed configuration; 2) replace w2v with BERT; 3) replace padding with 1's to 0's; 4) replace the context window length of 30 with maximum sentence length; 5) replace the loss with Cross Entropy (CE) loss; 6) remove mask; 7) apply data expansion.",
"Overall, w2v model has a better performance than BERT. A possible reason is that the model with BERT overfits the training data. The result also shows that data expansion does not give us better accuracy even though we find the model becomes more robust and has better performance on the lower proportioned patterns. This is because it changes the pattern distribution and the performance on the top proportioned patterns decreases a little, resulting in a large number of misclassifications. This is a tradeoff between a robust and a high-accuracy model, and we choose Model 1 for the following test since our golden set uses accuracy as the metric.",
"The neural model with the proposed configuration is evaluated on the test set of each pattern group. The test dataset has the same distribution as training data and precision/recall are evaluated on each pattern group. The $F_1$ score is the harmonic mean of precision and recall. The results of the top proportioned patterns are shown in Table TABREF13.",
"The proposed hybrid TN system is tested on our internal golden set of NSW-SFW pairs. It would be considered as an error if any character of the transformed and ground-truth sentences is different. The golden set has 67853 sentences, each of which contains 1-10 NSW strings. The sentence accuracy and pattern accuracy are listed in Table TABREF14. On sentence-level, the accuracy increases by 1.5%, which indicates an improvement of correctness on over 1000 sentences. The improvement is mainly on ambiguous NSW that don't have obvious keywords for rules to match and in long sentences."
],
[
"In this paper, we propose a TN system for Mandarin using multi-head self-attention. This system aims to improve the performance of the rule-based model combining with the advantages of a neural model. For a highly developed rule-based model, improving the accuracy on general cases becomes a bottleneck, but a neural model can help overcome this problem. From the results on the test data, the proposed system improves the accuracy on NSW-SFW transformation by over 1.5% on sentence-level and still has a potential to improve further. This is an obvious improvement based on the fully developed rules, which can hardly improve anymore.",
"The future work includes other aspects of model explorations. Mandarin word segmentation methods will be applied to replace the character-wise embedding with word-level embedding. More sequence learning models and attention mechanisms will be experimented. And more labeled dataset in other corpus will be supplemented for training."
]
],
"section_name": [
"Introduction",
"Method ::: Rule-based TN model",
"Method ::: Proposed Hybrid TN system",
"Method ::: Training and Inference",
"Experiments ::: Training Dataset",
"Experiments ::: System Configuration",
"Experiments ::: Model Performance",
"Conclusions & Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"22f2b4ee7bddbeabf5b0a71426c348f2d61511a6",
"e067b30dfc7a475bd550a7398468b175c7cceedc"
],
"answer": [
{
"evidence": [
"Imbalanced dataset is a challenge for the task because the top patterns are taking too much attention so that most weights might be determined by the easier ones. We have tried different methods to deal with this problem. The first method is data expansion using oversampling. Attempts include duplicating the text with low pattern proportion, replacing first few characters with paddings in the window, randomly changing digits, and shifting the context window. The other method is to add loss control in the model as mentioned in SECREF2. The loss function helps the model to focus on harder cases in different classes and therefore reduce the impact of the imbalanced data. The experimental results are in SECREF11."
],
"extractive_spans": [
"data expansion using oversampling",
"add loss control"
],
"free_form_answer": "",
"highlighted_evidence": [
" We have tried different methods to deal with this problem. The first method is data expansion using oversampling. Attempts include duplicating the text with low pattern proportion, replacing first few characters with paddings in the window, randomly changing digits, and shifting the context window. The other method is to add loss control in the model as mentioned in SECREF2."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Imbalanced dataset is a challenge for the task because the top patterns are taking too much attention so that most weights might be determined by the easier ones. We have tried different methods to deal with this problem. The first method is data expansion using oversampling. Attempts include duplicating the text with low pattern proportion, replacing first few characters with paddings in the window, randomly changing digits, and shifting the context window. The other method is to add loss control in the model as mentioned in SECREF2. The loss function helps the model to focus on harder cases in different classes and therefore reduce the impact of the imbalanced data. The experimental results are in SECREF11."
],
"extractive_spans": [
"data expansion using oversampling",
"add loss control in the model"
],
"free_form_answer": "",
"highlighted_evidence": [
"The first method is data expansion using oversampling. ",
"The other method is to add loss control in the model as mentioned in SECREF2. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
},
{
"annotation_id": [
"646cd5a94a691ff2d1de7aa00b9e83a1d4d49ff6",
"6c4c4e1ad30999e1d70ce30fde0343c9befd91e4"
],
"answer": [
{
"evidence": [
"The rule-based TN model can handle the TN task alone and is the baseline in our experiments. It has the same idea as in BIBREF8 but has a more complicated system of rules with priorities. The model contains 45 different groups and about 300 patterns as sub-groups, each of which uses a keyword with regular expressions to match the preceding and following texts. Each pattern also has a priority value. During normalization, each sentence is fed as input and the NSW will be matched by the regular expressions. The model tries to match patterns with longer context and slowly decrease the context length until a match is found. If there are multiple pattern matches with the same length, the one with a higher priority will be chosen for the NSW. The model has been developed on abundant test data and bad cases. The advantage of the rule-based system is the flexibility, since one can simply add more special cases when they appear, such as new units. However, improving the performance of this system on more general cases becomes a bottleneck. For example, in a report of a football game, it cannot transform “1-3” to score if there are no keywords like “score” or “game” close to it."
],
"extractive_spans": [
"rule-based TN model"
],
"free_form_answer": "",
"highlighted_evidence": [
"The rule-based TN model can handle the TN task alone and is the baseline in our experiments."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Table TABREF12 compares the highest pattern accuracy on the test set of 7 different neural model setups. Model 2-7's configuration differences are compared with Model 1: 1) proposed configuration; 2) replace w2v with BERT; 3) replace padding with 1's to 0's; 4) replace the context window length of 30 with maximum sentence length; 5) replace the loss with Cross Entropy (CE) loss; 6) remove mask; 7) apply data expansion."
],
"extractive_spans": [],
"free_form_answer": "six different variations of their multi-head attention model",
"highlighted_evidence": [
"Table TABREF12 compares the highest pattern accuracy on the test set of 7 different neural model setups. Model 2-7's configuration differences are compared with Model 1: 1) proposed configuration; 2) replace w2v with BERT; 3) replace padding with 1's to 0's; 4) replace the context window length of 30 with maximum sentence length; 5) replace the loss with Cross Entropy (CE) loss; 6) remove mask; 7) apply data expansion."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
},
{
"annotation_id": [
"84fb292a40bb823f701d4535f09873cae70ef7d0",
"9284edaa566767f04b73e2f9c756e7659c01b054"
],
"answer": [
{
"evidence": [
"Text Normalization (TN) is a process to transform non-standard words (NSW) into spoken-form words (SFW) for disambiguation. In Text-To-Speech (TTS), text normalization is an essential procedure to normalize unreadable numbers, symbols or characters, such as transforming “$20” to “twenty dollars” and “@” to “at”, into words that can be used in speech synthesis. The surrounding context is the determinant for ambiguous cases in TN. For example, the context will decide whether to read “2019” as year or a number, and whether to read “10:30” as time or the score of a game. In Mandarin, some cases depend on language habit instead of rules- “2” can either be read as “r” or “ling” and “1” as “y” or “yo”."
],
"extractive_spans": [
"normalize unreadable numbers, symbols or characters"
],
"free_form_answer": "",
"highlighted_evidence": [
"In Text-To-Speech (TTS), text normalization is an essential procedure to normalize unreadable numbers, symbols or characters, such as transforming “$20” to “twenty dollars” and “@” to “at”, into words that can be used in speech synthesis."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"31bad09d131dcb868a74951865d642575e620e77",
"b09685a10fed1952045fe1b622055ce0a3797ec4"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
},
{
"annotation_id": [
"1d646003617afa51d9c704dea98acbb21f2776bd",
"5c8b526242c77976c71cc4d65a45b33834dda6f5"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
]
}
],
"nlp_background": [
"two",
"two",
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"How do they deal with imbalanced datasets?",
"What models do they compare to?",
"What text preprocessing tasks do they focus on?",
"What news sources did they get the dataset from?",
"Did they collect their own corpus?"
],
"question_id": [
"539f5c27e1a2d240e52b711d0a50a3a6ddfa5cb2",
"aa7c5386aedfb13a361a2629b67cb54277e208d2",
"9b3371dcd855f1d3342edb212efa39dfc9142ae3",
"b02a6f59270b8c55fa4df3751bcb66fca2371451",
"3a3c372b6d73995adbdfa26103c85b32d071ff10"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Fig. 1. Flowchart of the proposed hybrid TN system.",
"Fig. 2. Multi-head self-attention model structure.",
"Table 1. Examples of some dataset pattern rules.",
"Fig. 3. Label distribution for dataset.",
"Table 3. Model performance on the test dataset.",
"Table 4. Model performance on the news golden set."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"3-Table1-1.png",
"3-Figure3-1.png",
"4-Table3-1.png",
"4-Table4-1.png"
]
} | [
"What models do they compare to?"
] | [
[
"1911.04128-Method ::: Rule-based TN model-0",
"1911.04128-Experiments ::: Model Performance-0"
]
] | [
"six different variations of their multi-head attention model"
] | 365 |
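
The text-normalization record above reports handling the imbalanced pattern distribution with a loss "motivated by BIBREF13" and gives the hyper-parameters $\alpha_t=0.5$ and $\gamma=4$, but the equation itself is not reproduced in the record. The sketch below assumes the standard focal-loss form, $FL(p_t) = -\alpha_t (1 - p_t)^{\gamma } \log (p_t)$, which matches that description; it is an illustrative stand-in under that assumption, not necessarily the paper's exact formulation. Intuitively, confidently correct predictions contribute almost nothing to the loss, so training focuses on the harder, under-represented pattern classes.

import numpy as np

def focal_loss(probs, labels, alpha_t=0.5, gamma=4.0, eps=1e-12):
    # probs : (N, C) post-softmax pattern probabilities; labels : (N,) gold pattern indices.
    # The functional form is the standard focal loss (an assumption); the alpha_t/gamma
    # defaults follow the values reported in the record above.
    p_t = probs[np.arange(len(labels)), labels]  # probability assigned to the gold pattern
    loss = -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t + eps)
    return loss.mean()

# A well-classified sample (p_t = 0.95) contributes far less than an uncertain one (p_t = 0.25).
probs = np.array([[0.95, 0.03, 0.02],
                  [0.40, 0.35, 0.25]])
labels = np.array([0, 2])
print(focal_loss(probs, labels))
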
1606.07947 | Sequence-Level Knowledge Distillation | Neural machine translation (NMT) offers a novel alternative formulation of translation that is potentially simpler than statistical approaches. However to reach competitive performance, NMT models need to be exceedingly large. In this paper we consider applying knowledge distillation approaches (Bucila et al., 2006; Hinton et al., 2015) that have proven successful for reducing the size of neural models in other domains to the problem of NMT. We demonstrate that standard knowledge distillation applied to word-level prediction can be effective for NMT, and also introduce two novel sequence-level versions of knowledge distillation that further improve performance, and somewhat surprisingly, seem to eliminate the need for beam search (even when applied on the original teacher model). Our best student model runs 10 times faster than its state-of-the-art teacher with little loss in performance. It is also significantly better than a baseline model trained without knowledge distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight pruning on top of knowledge distillation results in a student model that has 13 times fewer parameters than the original teacher model, with a decrease of 0.4 BLEU. | {
"paragraphs": [
[
"Neural machine translation (NMT) BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 is a deep learning-based method for translation that has recently shown promising results as an alternative to statistical approaches. NMT systems directly model the probability of the next word in the target sentence simply by conditioning a recurrent neural network on the source sentence and previously generated target words.",
"While both simple and surprisingly accurate, NMT systems typically need to have very high capacity in order to perform well: Sutskever2014 used a 4-layer LSTM with 1000 hidden units per layer (herein INLINEFORM0 ) and Zhou2016 obtained state-of-the-art results on English INLINEFORM1 French with a 16-layer LSTM with 512 units per layer. The sheer size of the models requires cutting-edge hardware for training and makes using the models on standard setups very challenging.",
"This issue of excessively large networks has been observed in several other domains, with much focus on fully-connected and convolutional networks for multi-class classification. Researchers have particularly noted that large networks seem to be necessary for training, but learn redundant representations in the process BIBREF6 . Therefore compressing deep models into smaller networks has been an active area of research. As deep learning systems obtain better results on NLP tasks, compression also becomes an important practical issue with applications such as running deep learning models for speech and translation locally on cell phones.",
"Existing compression methods generally fall into two categories: (1) pruning and (2) knowledge distillation. Pruning methods BIBREF7 , BIBREF8 , BIBREF9 , zero-out weights or entire neurons based on an importance criterion: LeCun1990 use (a diagonal approximation to) the Hessian to identify weights whose removal minimally impacts the objective function, while Han2016 remove weights based on thresholding their absolute values. Knowledge distillation approaches BIBREF0 , BIBREF10 , BIBREF1 learn a smaller student network to mimic the original teacher network by minimizing the loss (typically INLINEFORM0 or cross-entropy) between the student and teacher output.",
"In this work, we investigate knowledge distillation in the context of neural machine translation. We note that NMT differs from previous work which has mainly explored non-recurrent models in the multi-class prediction setting. For NMT, while the model is trained on multi-class prediction at the word-level, it is tasked with predicting complete sequence outputs conditioned on previous decisions. With this difference in mind, we experiment with standard knowledge distillation for NMT and also propose two new versions of the approach that attempt to approximately match the sequence-level (as opposed to word-level) distribution of the teacher network. This sequence-level approximation leads to a simple training procedure wherein the student network is trained on a newly generated dataset that is the result of running beam search with the teacher network.",
"We run experiments to compress a large state-of-the-art INLINEFORM0 LSTM model, and find that with sequence-level knowledge distillation we are able to learn a INLINEFORM1 LSTM that roughly matches the performance of the full system. We see similar results compressing a INLINEFORM2 model down to INLINEFORM3 on a smaller data set. Furthermore, we observe that our proposed approach has other benefits, such as not requiring any beam search at test-time. As a result we are able to perform greedy decoding on the INLINEFORM4 model 10 times faster than beam search on the INLINEFORM5 model with comparable performance. Our student models can even be run efficiently on a standard smartphone. Finally, we apply weight pruning on top of the student network to obtain a model that has INLINEFORM6 fewer parameters than the original teacher model. We have released all the code for the models described in this paper."
],
[
"Let INLINEFORM0 and INLINEFORM1 be (random variable sequences representing) the source/target sentence, with INLINEFORM2 and INLINEFORM3 respectively being the source/target lengths. Machine translation involves finding the most probable target sentence given the source: INLINEFORM4 ",
"where INLINEFORM0 is the set of all possible sequences. NMT models parameterize INLINEFORM1 with an encoder neural network which reads the source sentence and a decoder neural network which produces a distribution over the target sentence (one word at a time) given the source. We employ the attentional architecture from Luong2015, which achieved state-of-the-art results on English INLINEFORM2 German translation."
],
[
"Knowledge distillation describes a class of methods for training a smaller student network to perform better by learning from a larger teacher network (in addition to learning from the training data set). We generally assume that the teacher has previously been trained, and that we are estimating parameters for the student. Knowledge distillation suggests training by matching the student's predictions to the teacher's predictions. For classification this usually means matching the probabilities either via INLINEFORM0 on the INLINEFORM1 scale BIBREF10 or by cross-entropy BIBREF11 , BIBREF1 .",
"Concretely, assume we are learning a multi-class classifier over a data set of examples of the form INLINEFORM0 with possible classes INLINEFORM1 . The usual training criteria is to minimize NLL for each example from the training data, INLINEFORM2 ",
"where INLINEFORM0 is the indicator function and INLINEFORM1 the distribution from our model (parameterized by INLINEFORM2 ). This objective can be seen as minimizing the cross-entropy between the degenerate data distribution (which has all of its probability mass on one class) and the model distribution INLINEFORM3 .",
"In knowledge distillation, we assume access to a learned teacher distribution INLINEFORM0 , possibly trained over the same data set. Instead of minimizing cross-entropy with the observed data, we instead minimize the cross-entropy with the teacher's probability distribution, INLINEFORM1 ",
" where INLINEFORM0 parameterizes the teacher distribution and remains fixed. Note the cross-entropy setup is identical, but the target distribution is no longer a sparse distribution. Training on INLINEFORM5 is attractive since it gives more information about other classes for a given data point (e.g. similarity between classes) and has less variance in gradients BIBREF1 .",
"",
"",
"Since this new objective has no direct term for the training data, it is common practice to interpolate between the two losses, INLINEFORM0 ",
"where INLINEFORM0 is mixture parameter combining the one-hot distribution and the teacher distribution."
],
[
"The large sizes of neural machine translation systems make them an ideal candidate for knowledge distillation approaches. In this section we explore three different ways this technique can be applied to NMT."
],
[
"NMT systems are trained directly to minimize word NLL, INLINEFORM0 , at each position. Therefore if we have a teacher model, standard knowledge distillation for multi-class cross-entropy can be applied. We define this distillation for a sentence as, INLINEFORM1 ",
"where INLINEFORM0 is the target vocabulary set. The student can further be trained to optimize the mixture of INLINEFORM1 and INLINEFORM2 . In the context of NMT, we refer to this approach as word-level knowledge distillation and illustrate this in Figure 1 (left)."
],
[
"Word-level knowledge distillation allows transfer of these local word distributions. Ideally however, we would like the student model to mimic the teacher's actions at the sequence-level. The sequence distribution is particularly important for NMT, because wrong predictions can propagate forward at test-time.",
"First, consider the sequence-level distribution specified by the model over all possible sequences INLINEFORM0 , INLINEFORM1 ",
"for any length INLINEFORM0 . The sequence-level negative log-likelihood for NMT then involves matching the one-hot distribution over all complete sequences, INLINEFORM1 ",
"where INLINEFORM0 is the observed sequence. Of course, this just shows that from a negative log likelihood perspective, minimizing word-level NLL and sequence-level NLL are equivalent in this model.",
"But now consider the case of sequence-level knowledge distillation. As before, we can simply replace the distribution from the data with a probability distribution derived from our teacher model. However, instead of using a single word prediction, we use INLINEFORM0 to represent the teacher's sequence distribution over the sample space of all possible sequences, INLINEFORM1 ",
"Note that INLINEFORM0 is inherently different from INLINEFORM1 , as the sum is over an exponential number of terms. Despite its intractability, we posit that this sequence-level objective is worthwhile. It gives the teacher the chance to assign probabilities to complete sequences and therefore transfer a broader range of knowledge. We thus consider an approximation of this objective.",
"Our simplest approximation is to replace the teacher distribution INLINEFORM0 with its mode, INLINEFORM1 ",
"Observing that finding the mode is itself intractable, we use beam search to find an approximation. The loss is then INLINEFORM0 ",
"where INLINEFORM0 is now the output from running beam search with the teacher model.",
"Using the mode seems like a poor approximation for the teacher distribution INLINEFORM0 , as we are approximating an exponentially-sized distribution with a single sample. However, previous results showing the effectiveness of beam search decoding for NMT lead us to belief that a large portion of INLINEFORM1 's mass lies in a single output sequence. In fact, in experiments we find that with beam of size 1, INLINEFORM2 (on average) accounts for INLINEFORM3 of the distribution for German INLINEFORM4 English, and INLINEFORM5 for Thai INLINEFORM6 English (Table 1: INLINEFORM7 ).",
"",
"",
"To summarize, sequence-level knowledge distillation suggests to: (1) train a teacher model, (2) run beam search over the training set with this model, (3) train the student network with cross-entropy on this new dataset. Step (3) is identical to the word-level NLL process except now on the newly-generated data set. This is shown in Figure 1 (center)."
],
[
"Next we consider integrating the training data back into the process, such that we train the student model as a mixture of our sequence-level teacher-generated data ( INLINEFORM0 ) with the original training data ( INLINEFORM1 ), INLINEFORM2 ",
" where INLINEFORM0 is the gold target sequence.",
"Since the second term is intractable, we could again apply the mode approximation from the previous section, INLINEFORM0 ",
"and train on both observed ( INLINEFORM0 ) and teacher-generated ( INLINEFORM1 ) data. However, this process is non-ideal for two reasons: (1) unlike for standard knowledge distribution, it doubles the size of the training data, and (2) it requires training on both the teacher-generated sequence and the true sequence, conditioned on the same source input. The latter concern is particularly problematic since we observe that INLINEFORM2 and INLINEFORM3 are often quite different.",
"As an alternative, we propose a single-sequence approximation that is more attractive in this setting. This approach is inspired by local updating BIBREF12 , a method for discriminative training in statistical machine translation (although to our knowledge not for knowledge distillation). Local updating suggests selecting a training sequence which is close to INLINEFORM0 and has high probability under the teacher model, INLINEFORM1 ",
"where INLINEFORM0 is a function measuring closeness (e.g. Jaccard similarity or BLEU BIBREF13 ). Following local updating, we can approximate this sequence by running beam search and choosing INLINEFORM1 ",
"where INLINEFORM0 is the INLINEFORM1 -best list from beam search. We take INLINEFORM2 to be smoothed sentence-level BLEU BIBREF14 .",
"We justify training on INLINEFORM0 from a knowledge distillation perspective with the following generative process: suppose that there is a true target sequence (which we do not observe) that is first generated from the underlying data distribution INLINEFORM1 . And further suppose that the target sequence that we observe ( INLINEFORM2 ) is a noisy version of the unobserved true sequence: i.e. (i) INLINEFORM3 , (ii) INLINEFORM4 , where INLINEFORM5 is, for example, a noise function that independently replaces each element in INLINEFORM6 with a random element in INLINEFORM7 with some small probability. In such a case, ideally the student's distribution should match the mixture distribution, INLINEFORM10 ",
"In this setting, due to the noise assumption, INLINEFORM0 now has significant probability mass around a neighborhood of INLINEFORM1 (not just at INLINEFORM2 ), and therefore the INLINEFORM3 of the mixture distribution is likely something other than INLINEFORM4 (the observed sequence) or INLINEFORM5 (the output from beam search). We can see that INLINEFORM6 is a natural approximation to the INLINEFORM7 of this mixture distribution between INLINEFORM8 and INLINEFORM9 for some INLINEFORM10 . We illustrate this framework in Figure 1 (right) and visualize the distribution over a real example in Figure 2."
],
[
"To test out these approaches, we conduct two sets of NMT experiments: high resource (English INLINEFORM0 German) and low resource (Thai INLINEFORM1 English).",
"The English-German data comes from WMT 2014. The training set has 4m sentences and we take newstest2012/newstest2013 as the dev set and newstest2014 as the test set. We keep the top 50k most frequent words, and replace the rest with UNK. The teacher model is a INLINEFORM0 LSTM (as in Luong2015) and we train two student models: INLINEFORM1 and INLINEFORM2 . The Thai-English data comes from IWSLT 2015. There are 90k sentences in the training set and we take 2010/2011/2012 data as the dev set and 2012/2013 as the test set, with a vocabulary size is 25k. Size of the teacher model is INLINEFORM3 (which performed better than INLINEFORM4 , INLINEFORM5 models), and the student model is INLINEFORM6 . Other training details mirror Luong2015.",
"We evaluate on tokenized BLEU with INLINEFORM0 , and experiment with the following variations:"
],
[
"Results of our experiments are shown in Table 1. We find that while word-level knowledge distillation (Word-KD) does improve upon the baseline, sequence-level knowledge distillation (Seq-KD) does better on English INLINEFORM0 German and performs similarly on Thai INLINEFORM1 English. Combining them (Seq-KD INLINEFORM2 Word-KD) results in further gains for the INLINEFORM3 and INLINEFORM4 models (although not for the INLINEFORM5 model), indicating that these methods provide orthogonal means of transferring knowledge from the teacher to the student: Word-KD is transferring knowledge at the the local (i.e. word) level while Seq-KD is transferring knowledge at the global (i.e. sequence) level.",
"Sequence-level interpolation (Seq-Inter), in addition to improving models trained via Word-KD and Seq-KD, also improves upon the original teacher model that was trained on the actual data but fine-tuned towards Seq-Inter data (Baseline INLINEFORM0 Seq-Inter). In fact, greedy decoding with this fine-tuned model has similar performance ( INLINEFORM1 ) as beam search with the original model ( INLINEFORM2 ), allowing for faster decoding even with an identically-sized model.",
"We hypothesize that sequence-level knowledge distillation is effective because it allows the student network to only model relevant parts of the teacher distribution (i.e. around the teacher's mode) instead of `wasting' parameters on trying to model the entire space of translations. Our results suggest that this is indeed the case: the probability mass that Seq-KD models assign to the approximate mode is much higher than is the case for baseline models trained on original data (Table 1: INLINEFORM0 ). For example, on English INLINEFORM1 German the (approximate) INLINEFORM2 for the INLINEFORM3 Seq-KD model (on average) accounts for INLINEFORM4 of the total probability mass, while the corresponding number is INLINEFORM5 for the baseline. This also explains the success of greedy decoding for Seq-KD models—since we are only modeling around the teacher's mode, the student's distribution is more peaked and therefore the INLINEFORM6 is much easier to find. Seq-Inter offers a compromise between the two, with the greedily-decoded sequence accounting for INLINEFORM7 of the distribution.",
"Finally, although past work has shown that models with lower perplexity generally tend to have higher BLEU, our results indicate that this is not necessarily the case. The perplexity of the baseline INLINEFORM0 English INLINEFORM1 German model is INLINEFORM2 while the perplexity of the corresponding Seq-KD model is INLINEFORM3 , despite the fact that Seq-KD model does significantly better for both greedy ( INLINEFORM4 BLEU) and beam search ( INLINEFORM5 BLEU) decoding."
],
[
"Run-time complexity for beam search grows linearly with beam size. Therefore, the fact that sequence-level knowledge distillation allows for greedy decoding is significant, with practical implications for running NMT systems across various devices. To test the speed gains, we run the teacher/student models on GPU, CPU, and smartphone, and check the average number of source words translated per second (Table 2). We use a GeForce GTX Titan X for GPU and a Samsung Galaxy 6 smartphone. We find that we can run the student model 10 times faster with greedy decoding than the teacher model with beam search on GPU ( INLINEFORM0 vs INLINEFORM1 words/sec), with similar performance."
],
[
"Although knowledge distillation enables training faster models, the number of parameters for the student models is still somewhat large (Table 1: Params), due to the word embeddings which dominate most of the parameters. For example, on the INLINEFORM0 English INLINEFORM1 German model the word embeddings account for approximately INLINEFORM2 (50m out of 84m) of the parameters. The size of word embeddings have little impact on run-time as the word embedding layer is a simple lookup table that only affects the first layer of the model.",
"We therefore focus next on reducing the memory footprint of the student models further through weight pruning. Weight pruning for NMT was recently investigated by See2016, who found that up to INLINEFORM0 of the parameters in a large NMT model can be pruned with little loss in performance. We take our best English INLINEFORM1 German student model ( INLINEFORM2 Seq-KD INLINEFORM3 Seq-Inter) and prune INLINEFORM4 of the parameters by removing the weights with the lowest absolute values. We then retrain the pruned model on Seq-KD data with a learning rate of INLINEFORM5 and fine-tune towards Seq-Inter data with a learning rate of INLINEFORM6 . As observed by See2016, retraining proved to be crucial. The results are shown in Table 3.",
"Our findings suggest that compression benefits achieved through weight pruning and knowledge distillation are orthogonal. Pruning INLINEFORM0 of the weight in the INLINEFORM1 student model results in a model with INLINEFORM2 fewer parameters than the original teacher model with only a decrease of INLINEFORM3 BLEU. While pruning INLINEFORM4 of the weights results in a more appreciable decrease of INLINEFORM5 BLEU, the model is drastically smaller with 8m parameters, which is INLINEFORM6 fewer than the original teacher model."
],
[
"For models trained with word-level knowledge distillation, we also tried regressing the student network's top-most hidden layer at each time step to the teacher network's top-most hidden layer as a pretraining step, noting that Romero2015 obtained improvements with a similar technique on feed-forward models. We found this to give comparable results to standard knowledge distillation and hence did not pursue this further.",
"There have been promising recent results on eliminating word embeddings completely and obtaining word representations directly from characters with character composition models, which have many fewer parameters than word embedding lookup tables BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 . Combining such methods with knowledge distillation/pruning to further reduce the memory footprint of NMT systems remains an avenue for future work."
],
[
"Compressing deep learning models is an active area of current research. Pruning methods involve pruning weights or entire neurons/nodes based on some criterion. LeCun1990 prune weights based on an approximation of the Hessian, while Han2016 show that a simple magnitude-based pruning works well. Prior work on removing neurons/nodes include Srinivas2015 and Mariet2016. See2016 were the first to apply pruning to Neural Machine Translation, observing that that different parts of the architecture (input word embeddings, LSTM matrices, etc.) admit different levels of pruning. Knowledge distillation approaches train a smaller student model to mimic a larger teacher model, by minimizing the loss between the teacher/student predictions BIBREF0 , BIBREF10 , BIBREF11 , BIBREF1 . Romero2015 additionally regress on the intermediate hidden layers of the student/teacher network as a pretraining step, while Mou2015 obtain smaller word embeddings from a teacher model via regression. There has also been work on transferring knowledge across different network architectures: Chan2015b show that a deep non-recurrent neural network can learn from an RNN; Geras2016 train a CNN to mimic an LSTM for speech recognition. Kuncoro2016 recently investigated knowledge distillation for structured prediction by having a single parser learn from an ensemble of parsers.",
"Other approaches for compression involve low rank factorizations of weight matrices BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , sparsity-inducing regularizers BIBREF24 , binarization of weights BIBREF25 , BIBREF26 , and weight sharing BIBREF27 , BIBREF9 . Finally, although we have motivated sequence-level knowledge distillation in the context of training a smaller model, there are other techniques that train on a mixture of the model's predictions and the data, such as local updating BIBREF12 , hope/fear training BIBREF28 , SEARN BIBREF29 , DAgger BIBREF30 , and minimum risk training BIBREF31 , BIBREF32 ."
],
[
"In this work we have investigated existing knowledge distillation methods for NMT (which work at the word-level) and introduced two sequence-level variants of knowledge distillation, which provide improvements over standard word-level knowledge distillation.",
"We have chosen to focus on translation as this domain has generally required the largest capacity deep learning models, but the sequence-to-sequence framework has been successfully applied to a wide range of tasks including parsing BIBREF33 , summarization BIBREF34 , dialogue BIBREF35 , BIBREF36 , BIBREF37 , NER/POS-tagging BIBREF38 , image captioning BIBREF39 , BIBREF40 , video generation BIBREF41 , and speech recognition BIBREF42 . We anticipate that methods described in this paper can be used to similarly train smaller models in other domains."
]
],
"section_name": [
"Introduction",
"Sequence-to-Sequence with Attention",
"Knowledge Distillation",
"Knowledge Distillation for NMT",
"Word-Level Knowledge Distillation",
"Sequence-Level Knowledge Distillation",
"Sequence-Level Interpolation",
"Experimental Setup",
"Results and Discussion",
"Decoding Speed",
"Weight Pruning",
"Further Observations",
"Related Work",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"36478d2555038291da8f3bcc71c0beb939e37d0d",
"f9c28ed43f00667329aa13dc88edf160ccaf601d"
],
"answer": [
{
"evidence": [
"In this work, we investigate knowledge distillation in the context of neural machine translation. We note that NMT differs from previous work which has mainly explored non-recurrent models in the multi-class prediction setting. For NMT, while the model is trained on multi-class prediction at the word-level, it is tasked with predicting complete sequence outputs conditioned on previous decisions. With this difference in mind, we experiment with standard knowledge distillation for NMT and also propose two new versions of the approach that attempt to approximately match the sequence-level (as opposed to word-level) distribution of the teacher network. This sequence-level approximation leads to a simple training procedure wherein the student network is trained on a newly generated dataset that is the result of running beam search with the teacher network."
],
"extractive_spans": [
"standard knowledge distillation for NMT "
],
"free_form_answer": "",
"highlighted_evidence": [
" With this difference in mind, we experiment with standard knowledge distillation for NMT and also propose two new versions of the approach that attempt to approximately match the sequence-level (as opposed to word-level) distribution of the teacher network. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The large sizes of neural machine translation systems make them an ideal candidate for knowledge distillation approaches. In this section we explore three different ways this technique can be applied to NMT.",
"Word-Level Knowledge Distillation",
"Sequence-Level Knowledge Distillation",
"Sequence-Level Interpolation"
],
"extractive_spans": [
"Word-Level Knowledge Distillation",
"Sequence-Level Knowledge Distillation",
"Sequence-Level Interpolation"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this section we explore three different ways this technique can be applied to NMT.\n\nWord-Level Knowledge Distillation",
"Sequence-Level Knowledge Distillation",
"Sequence-Level Interpolation"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"faa9d98af17237ed9c3971f2803e42910d310c65",
"99cfe0c8b535e328e50cdd4b35ca93cea1029fab"
],
"answer": [
{
"evidence": [
"We therefore focus next on reducing the memory footprint of the student models further through weight pruning. Weight pruning for NMT was recently investigated by See2016, who found that up to INLINEFORM0 of the parameters in a large NMT model can be pruned with little loss in performance. We take our best English INLINEFORM1 German student model ( INLINEFORM2 Seq-KD INLINEFORM3 Seq-Inter) and prune INLINEFORM4 of the parameters by removing the weights with the lowest absolute values. We then retrain the pruned model on Seq-KD data with a learning rate of INLINEFORM5 and fine-tune towards Seq-Inter data with a learning rate of INLINEFORM6 . As observed by See2016, retraining proved to be crucial. The results are shown in Table 3."
],
"extractive_spans": [],
"free_form_answer": "pruning parameters by removing the weights with the lowest absolute values",
"highlighted_evidence": [
"We take our best English INLINEFORM1 German student model ( INLINEFORM2 Seq-KD INLINEFORM3 Seq-Inter) and prune INLINEFORM4 of the parameters by removing the weights with the lowest absolute values. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We therefore focus next on reducing the memory footprint of the student models further through weight pruning. Weight pruning for NMT was recently investigated by See2016, who found that up to INLINEFORM0 of the parameters in a large NMT model can be pruned with little loss in performance. We take our best English INLINEFORM1 German student model ( INLINEFORM2 Seq-KD INLINEFORM3 Seq-Inter) and prune INLINEFORM4 of the parameters by removing the weights with the lowest absolute values. We then retrain the pruned model on Seq-KD data with a learning rate of INLINEFORM5 and fine-tune towards Seq-Inter data with a learning rate of INLINEFORM6 . As observed by See2016, retraining proved to be crucial. The results are shown in Table 3."
],
"extractive_spans": [],
"free_form_answer": "Prune %x of the parameters by removing the weights with the lowest absolute values.",
"highlighted_evidence": [
"We take our best English INLINEFORM1 German student model ( INLINEFORM2 Seq-KD INLINEFORM3 Seq-Inter) and prune INLINEFORM4 of the parameters by removing the weights with the lowest absolute values. We then retrain the pruned model on Seq-KD data with a learning rate of INLINEFORM5 and fine-tune towards Seq-Inter data with a learning rate of INLINEFORM6 . As observed by See2016, retraining proved to be crucial. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"73046ead862aeed890be3e8fc2e3862eee8c19c6",
"8506482fd5a01ca7f6ea3207505d016dd7d0ecc7"
],
"answer": [
{
"evidence": [
"To test out these approaches, we conduct two sets of NMT experiments: high resource (English INLINEFORM0 German) and low resource (Thai INLINEFORM1 English).",
"The English-German data comes from WMT 2014. The training set has 4m sentences and we take newstest2012/newstest2013 as the dev set and newstest2014 as the test set. We keep the top 50k most frequent words, and replace the rest with UNK. The teacher model is a INLINEFORM0 LSTM (as in Luong2015) and we train two student models: INLINEFORM1 and INLINEFORM2 . The Thai-English data comes from IWSLT 2015. There are 90k sentences in the training set and we take 2010/2011/2012 data as the dev set and 2012/2013 as the test set, with a vocabulary size is 25k. Size of the teacher model is INLINEFORM3 (which performed better than INLINEFORM4 , INLINEFORM5 models), and the student model is INLINEFORM6 . Other training details mirror Luong2015."
],
"extractive_spans": [
"WMT 2014",
"IWSLT 2015"
],
"free_form_answer": "",
"highlighted_evidence": [
"To test out these approaches, we conduct two sets of NMT experiments: high resource (English INLINEFORM0 German) and low resource (Thai INLINEFORM1 English).\n\nThe English-German data comes from WMT 2014. ",
"The Thai-English data comes from IWSLT 2015. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The English-German data comes from WMT 2014. The training set has 4m sentences and we take newstest2012/newstest2013 as the dev set and newstest2014 as the test set. We keep the top 50k most frequent words, and replace the rest with UNK. The teacher model is a INLINEFORM0 LSTM (as in Luong2015) and we train two student models: INLINEFORM1 and INLINEFORM2 . The Thai-English data comes from IWSLT 2015. There are 90k sentences in the training set and we take 2010/2011/2012 data as the dev set and 2012/2013 as the test set, with a vocabulary size is 25k. Size of the teacher model is INLINEFORM3 (which performed better than INLINEFORM4 , INLINEFORM5 models), and the student model is INLINEFORM6 . Other training details mirror Luong2015."
],
"extractive_spans": [
"IWSLT 2015",
" WMT 2014"
],
"free_form_answer": "",
"highlighted_evidence": [
"The English-German data comes from WMT 2014. ",
" The Thai-English data comes from IWSLT 2015."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"ad964efd04e3f570e6073d62304c0be1b147ae16",
"dd4f54e2f7e96cd29524ced29c9609984c1bdebe"
],
"answer": [
{
"evidence": [
"We run experiments to compress a large state-of-the-art INLINEFORM0 LSTM model, and find that with sequence-level knowledge distillation we are able to learn a INLINEFORM1 LSTM that roughly matches the performance of the full system. We see similar results compressing a INLINEFORM2 model down to INLINEFORM3 on a smaller data set. Furthermore, we observe that our proposed approach has other benefits, such as not requiring any beam search at test-time. As a result we are able to perform greedy decoding on the INLINEFORM4 model 10 times faster than beam search on the INLINEFORM5 model with comparable performance. Our student models can even be run efficiently on a standard smartphone. Finally, we apply weight pruning on top of the student network to obtain a model that has INLINEFORM6 fewer parameters than the original teacher model. We have released all the code for the models described in this paper.",
"Sequence-level interpolation (Seq-Inter), in addition to improving models trained via Word-KD and Seq-KD, also improves upon the original teacher model that was trained on the actual data but fine-tuned towards Seq-Inter data (Baseline INLINEFORM0 Seq-Inter). In fact, greedy decoding with this fine-tuned model has similar performance ( INLINEFORM1 ) as beam search with the original model ( INLINEFORM2 ), allowing for faster decoding even with an identically-sized model."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
". As a result we are able to perform greedy decoding on the INLINEFORM4 model 10 times faster than beam search on the INLINEFORM5 model with comparable performance. ",
" In fact, greedy decoding with this fine-tuned model has similar performance ( INLINEFORM1 ) as beam search with the original model ( INLINEFORM2 ), allowing for faster decoding even with an identically-sized model."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"Which knowledge destilation methods do they introduce?",
"What type of weight pruning do they use?",
"Which dataset do they train on?",
"Do they reason why greedy decoding works better then beam search?"
],
"question_id": [
"e3c2b6fcf77a7b1c76add2e6e1420d07c29996ea",
"ee2c2fb01d67f4c58855bf23186cbd45cecbfa56",
"f77d7cddef3e021d70e16b9e16cecfd4b8ee80d3",
"a0197894ee94b01766fa2051f50f84e16b5c9370"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Overview of the different knowledge distillation approaches. In word-level knowledge distillation (left) cross entropy is minimized between the student/teacher distributions (yellow) for each word in the actual target sequence (ECD), as well as between the student distribution and the degenerate data distribution, which has all of its probabilitiy mass on one word (black). In sequence-level knowledge distillation (center) the student network is trained on the output from beam search of the teacher network that had the highest score (ACF). In sequence-level interpolation (right) the student is trained on the output from beam search of the teacher network that had the highest sim with the target sequence (ECE).",
"Figure 2: Visualization of sequence-level interpolation on an example training sentence with a German → English model: Bis 15 Tage vor Anreise sind Zimmer-Annullationen kostenlos. We run beam search and plot the final hidden state of the hypotheses using t-SNE and show the corresponding (smoothed) probabilities with contours. In the above example, the sentence that is at the top of the beam after beam search (green) is quite far away from gold (red), so we train the model on a sentence that is on the beam but had the highest sim to gold (purple).",
"Table 1: Results on English-German (newstest2014) and English-Thai (2012/2013) test sets. Params: number of parameters in the model; PPL: perplexity on the test set; BLEUK=1: BLEU score with beam size K = 1 (i.e. greedy decoding); ∆K=1: BLEU gain over the baseline model without any knowledge distillation with greedy decoding; BLEUK=5: BLEU score with beam sizeK = 5; ∆K=5: BLEU gain over the baseline model without any knowledge distillation with beam size K = 5; p(t = ŷ): Probability of output sequence from greedy decoding (averaged over the test set).",
"Table 2: Number of source words translated per second across the GPU (GeForce GTX Titan X), CPU, and cell phone (Samsung Galaxy 6). We were unable to run the 4× 1000 models on the cell phone."
],
"file": [
"4-Figure1-1.png",
"6-Figure2-1.png",
"8-Table1-1.png",
"9-Table2-1.png"
]
} | [
"What type of weight pruning do they use?"
] | [
[
"1606.07947-Weight Pruning-1"
]
] | [
"Prune %x of the parameters by removing the weights with the lowest absolute values."
] | 367 |
1603.00957 | Question Answering on Freebase via Relation Extraction and Textual Evidence | Existing knowledge-based question answering systems often rely on small annotated training data. While shallow methods like relation extraction are robust to data scarcity, they are less expressive than the deep meaning representation methods like semantic parsing, thereby failing at answering questions involving multiple constraints. Here we alleviate this problem by empowering a relation extraction method with additional evidence from Wikipedia. We first present a neural network based relation extractor to retrieve the candidate answers from Freebase, and then infer over Wikipedia to validate these answers. Experiments on the WebQuestions question answering dataset show that our method achieves an F_1 of 53.3%, a substantial improvement over the state-of-the-art. | {
"paragraphs": [
[
"Since the advent of large structured knowledge bases (KBs) like Freebase BIBREF0 , YAGO BIBREF1 and DBpedia BIBREF2 , answering natural language questions using those structured KBs, also known as KB-based question answering (or KB-QA), is attracting increasing research efforts from both natural language processing and information retrieval communities.",
"The state-of-the-art methods for this task can be roughly categorized into two streams. The first is based on semantic parsing BIBREF3 , BIBREF4 , which typically learns a grammar that can parse natural language to a sophisticated meaning representation language. But such sophistication requires a lot of annotated training examples that contains compositional structures, a practically impossible solution for large KBs such as Freebase. Furthermore, mismatches between grammar predicted structures and KB structure is also a common problem BIBREF4 , BIBREF5 , BIBREF6 .",
"On the other hand, instead of building a formal meaning representation, information extraction methods retrieve a set of candidate answers from KB using relation extraction BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 or distributed representations BIBREF11 , BIBREF12 . Designing large training datasets for these methods is relatively easy BIBREF7 , BIBREF13 , BIBREF14 . These methods are often good at producing an answer irrespective of their correctness. However, handling compositional questions that involve multiple entities and relations, still remains a challenge. Consider the question what mountain is the highest in north america. Relation extraction methods typically answer with all the mountains in North America because of the lack of sophisticated representation for the mathematical function highest. To select the correct answer, one has to retrieve all the heights of the mountains, and sort them in descending order, and then pick the first entry. We propose a method based on textual evidence which can answer such questions without solving the mathematic functions implicitly.",
"Knowledge bases like Freebase capture real world facts, and Web resources like Wikipedia provide a large repository of sentences that validate or support these facts. For example, a sentence in Wikipedia says, Denali (also known as Mount McKinley, its former official name) is the highest mountain peak in North America, with a summit elevation of 20,310 feet (6,190 m) above sea level. To answer our example question against a KB using a relation extractor, we can use this sentence as external evidence, filter out wrong answers and pick the correct one.",
"Using textual evidence not only mitigates representational issues in relation extraction, but also alleviates the data scarcity problem to some extent. Consider the question, who was queen isabella's mother. Answering this question involves predicting two constraints hidden in the word mother. One constraint is that the answer should be the parent of Isabella, and the other is that the answer's gender is female. Such words with multiple latent constraints have been a pain-in-the-neck for both semantic parsing and relation extraction, and requires larger training data (this phenomenon is coined as sub-lexical compositionality by wang2015). Most systems are good at triggering the parent constraint, but fail on the other, i.e., the answer entity should be female. Whereas the textual evidence from Wikipedia, ...her mother was Isabella of Barcelos ..., can act as a further constraint to answer the question correctly.",
"We present a novel method for question answering which infers on both structured and unstructured resources. Our method consists of two main steps as outlined in sec:overview. In the first step we extract answers for a given question using a structured KB (here Freebase) by jointly performing entity linking and relation extraction (sec:kb-qa). In the next step we validate these answers using an unstructured resource (here Wikipedia) to prune out the wrong answers and select the correct ones (sec:refine). Our evaluation results on a benchmark dataset WebQuestions show that our method outperforms existing state-of-the-art models. Details of our experimental setup and results are presented in sec:experiments. Our code, data and results can be downloaded from https://github.com/syxu828/QuestionAnsweringOverFB."
],
[
"fig:qaframework gives an overview of our method for the question “who did shaq first play for”. We have two main steps: (1) inference on Freebase (KB-QA box); and (2) further inference on Wikipedia (Answer Refinement box). Let us take a close look into step 1. Here we perform entity linking to identify a topic entity in the question and its possible Freebase entities. We employ a relation extractor to predict the potential Freebase relations that could exist between the entities in the question and the answer entities. Later we perform a joint inference step over the entity linking and relation extraction results to find the best entity-relation configuration which will produce a list of candidate answer entities. In the step 2, we refine these candidate answers by applying an answer refinement model which takes the Wikipedia page of the topic entity into consideration to filter out the wrong answers and pick the correct ones.",
"While the overview in fig:qaframework works for questions containing single Freebase relation, it also works for questions involving multiple Freebase relations. Consider the question who plays anakin skywalker in star wars 1. The actors who are the answers to this question should satisfy the following constraints: (1) the actor played anakin skywalker; and (2) the actor played in star wars 1. Inspired by msra14, we design a dependency tree-based method to handle such multi-relational questions. We first decompose the original question into a set of sub-questions using syntactic patterns which are listed in Appendix. The final answer set of the original question is obtained by intersecting the answer sets of all its sub-questions. For the example question, the sub-questions are who plays anakin skywalker and who plays in star wars 1. These sub-questions are answered separately over Freebase and Wikipedia, and the intersection of their answers to these sub-questions is treated as the final answer."
],
[
"Given a sub-question, we assume the question word that represents the answer has a distinct KB relation $r$ with an entity $e$ found in the question, and predict a single KB triple $(e,r,?)$ for each sub-question (here $?$ stands for the answer entities). The QA problem is thus formulated as an information extraction problem that involves two sub-tasks, i.e., entity linking and relation extraction. We first introduce these two components, and then present a joint inference procedure which further boosts the overall performance."
],
[
"For each question, we use hand-built sequences of part-of-speech categories to identify all possible named entity mention spans, e.g., the sequence NN (shaq) may indicate an entity. For each mention span, we use the entity linking tool S-MART BIBREF15 to retrieve the top 5 entities from Freebase. These entities are treated as candidate entities that will eventually be disambiguated in the joint inference step. For a given mention span, S-MART first retrieves all possible entities of Freebase by surface matching, and then ranks them using a statistical model, which is trained on the frequency counts with which the surface form occurs with the entity."
],
[
"We now proceed to identify the relation between the answer and the entity in the question. Inspired by the recent success of neural network models in KB question-answering BIBREF16 , BIBREF12 , and the success of syntactic dependencies for relation extraction BIBREF17 , BIBREF18 , we propose a Multi-Channel Convolutional Neural Network (MCCNN) which could exploit both syntactic and sentential information for relation extraction.",
"In MCCNN, we use two channels, one for syntactic information and the other for sentential information. The network structure is illustrated in Figure 2 . Convolution layer tackles an input of varying length returning a fixed length vector (we use max pooling) for each channel. These fixed length vectors are concatenated and then fed into a softmax classifier, the output dimension of which is equal to the number of predefined relation types. The value of each dimension indicates the confidence score of the corresponding relation.",
"We use the shortest path between an entity mention and the question word in the dependency tree as input to the first channel. Similar to xu-EtAl:2015:EMNLP1, we treat the path as a concatenation of vectors of words, dependency edge directions and dependency labels, and feed it to the convolution layer. Note that, the entity mention and the question word are excluded from the dependency path so as to learn a more general relation representation in syntactic level. As shown in Figure 2 , the dependency path between who and shaq is $\\leftarrow $ dobj – play – nsubj $\\rightarrow $ .",
"This channel takes the words in the sentence as input excluding the question word and the entity mention. As illustrated in Figure 2 , the vectors for did, first, play and for are fed into this channel.",
"The model is learned using pairs of question and its corresponding gold relation from the training data. Given an input question $x$ with an annotated entity mention, the network outputs a vector $o(x)$ , where the entry $o_{k}(x)$ is the probability that there exists the k-th relation between the entity and the expected answer. We denote $t(x) \\in \\mathbb {R}^{K\\times 1}$ as the target distribution vector, in which the value for the gold relation is set to 1, and others to 0. We compute the cross entropy error between $t(x)$ and $o(x)$ , and further define the objective function over the training data as: $\nJ(\\theta ) = - \\sum _{x} \\sum _{k=1}^{K} t_k(x) \\log o_k(x) + \\lambda ||\\theta ||^{2}_{2}\n$ ",
"where $\\theta $ represents the weights, and $\\lambda $ the $L2$ regularization parameters. The weights $\\theta $ can be efficiently computed via back-propagation through network structures. To minimize $J(\\theta )$ , we apply stochastic gradient descent (SGD) with AdaGrad BIBREF20 ."
],
[
"A pipeline of entity linking and relation extraction may suffer from error propagations. As we know, entities and relations have strong selectional preferences that certain entities do not appear with certain relations and vice versa. Locally optimized models could not exploit these implicit bi-directional preferences. Therefore, we use a joint model to find a globally optimal entity-relation assignment from local predictions. The key idea behind is to leverage various clues from the two local models and the KB to rank a correct entity-relation assignment higher than other combinations. We describe the learning procedure and the features below.",
"Suppose the pair $(e_{gold}, r_{gold})$ represents the gold entity/relation pair for a question $q$ . We take all our entity and relation predictions for $q$ , create a list of entity and relation pairs $\\lbrace (e_{0}, r_{0}), (e_{1}, r_{1}), ..., (e_{n}, r_{n})\\rbrace $ from $q$ and rank them using an svm rank classifier BIBREF21 which is trained to predict a rank for each pair. Ideally higher rank indicates the prediction is closer to the gold prediction. For training, svm rank classifier requires a ranked or scored list of entity-relation pairs as input. We create the training data containing ranked input pairs as follows: if both $e_{pred} = e_{gold}$ and $r_{pred} = r_{gold}$ , we assign it with a score of 3. If only the entity or relation equals to the gold one (i.e., $e_{pred}=e_{gold}$ , $r_{pred}\\ne r_{gold}$ or $e_{pred}\\ne e_{gold}$ , $q$0 ), we assign a score of 2 (encouraging partial overlap). When both entity and relation assignments are wrong, we assign a score of 1.",
"For a given entity-relation pair, we extract the following features which are passed as an input vector to the svm ranker above:",
"We use the score of the predicted entity returned by the entity linking system as a feature. The number of word overlaps between the entity mention and entity's Freebase name is also included as a feature. In Freebase, most entities have a relation fb:description which describes the entity. For instance, in the running example, shaq is linked to three potential entities m.06_ttvh (Shaq Vs. Television Show), m.05n7bp (Shaq Fu Video Game) and m.012xdf (Shaquille O'Neal). Interestingly, the word play only appears in the description of Shaquille O'Neal and it occurs three times. We count the content word overlap between the given question and the entity's description, and include it as a feature.",
"The score of relation returned by the MCCNNs is used as a feature. Furthermore, we view each relation as a document which consists of the training questions that this relation is expressed in. For a given question, we use the sum of the tf-idf scores of its words with respect to the relation as a feature. A Freebase relation $r$ is a concatenation of a series of fragments $r~=~r_1.r_2.r_3$ . For instance, the three fragments of people.person.parents are people, person and parents. The first two fragments indicate the Freebase type of the subject of this relation, and the third fragment indicates the object type, in our case the answer type. We use an indicator feature to denote if the surface form of the third fragment (here parents) appears in the question.",
"The above two feature classes indicate local features. From the entity-relation $(e,r)$ pair, we create the query triple $(e,r,?)$ to retrieve the answers, and further extract features from the answers. These features are non-local since we require both $e$ and $r$ to retrieve the answer. One such feature is using the co-occurrence of the answer type and the question word based on the intuition that question words often indicate the answer type, e.g., the question word when usually indicates the answer type type.datetime. Another feature is the number of answer entities retrieved."
],
[
"We use the best ranked entity-relation pair from the above step to retrieve candidate answers from Freebase. In this step, we validate these answers using Wikipedia as our unstructured knowledge resource where most statements in it are verified for factuality by multiple people.",
"Our refinement model is inspired by the intuition of how people refine their answers. If you ask someone: who did shaq first play for, and give them four candidate answers (Los Angeles Lakers, Boston Celtics, Orlando Magic and Miami Heat), as well as access to Wikipedia, that person might first determine that the question is about Shaquille O'Neal, then go to O'Neal's Wikipedia page, and search for the sentences that contain the candidate answers as evidence. By analyzing these sentences, one can figure out whether a candidate answer is correct or not."
],
[
"As mentioned above, we should first find the Wikipedia page corresponding to the topic entity in the given question. We use Freebase API to convert Freebase entity to Wikipedia page. We extract the content from the Wikipedia page and process it with Wikifier BIBREF22 which recognizes Wikipedia entities, which can further be linked to Freebase entities using Freebase API. Additionally we use Stanford CoreNLP BIBREF19 for tokenization and entity co-reference resolution. We search for the sentences containing the candidate answer entities retrieved from Freebase. For example, the Wikipedia page of O'Neal contains a sentence “O'Neal was drafted by the Orlando Magic with the first overall pick in the 1992 NBA draft”, which is taken into account by the refinement model (our inference model on Wikipedia) to discriminate whether Orlando Magic is the answer for the given question."
],
[
"We treat the refinement process as a binary classification task over the candidate answers, i.e., correct (positive) and incorrect (negative) answer. We prepare the training data for the refinement model as follows. On the training dataset, we first infer on Freebase to retrieve the candidate answers. Then we use the annotated gold answers of these questions and Wikipedia to create the training data. Specifically, we treat the sentences that contain correct/incorrect answers as positive/negative examples for the refinement model. We use libsvm BIBREF23 to learn the weights for classification.",
"Note that, in the Wikipedia page of the topic entity, we may collect more than one sentence that contain a candidate answer. However, not all sentences are relevant, therefore we consider the candidate answer as correct if at least there is one positive evidence. On the other hand, sometimes, we may not find any evidence for the candidate answer. In these cases, we fall back to the results of the KB-based approach."
],
[
"Regarding the features used in libsvm, we use the following lexical features extracted from the question and a Wikipedia sentence. Formally, given a question $q$ = $<$ $q_1$ , ... $q_{n}$ $>$ and an evidence sentence $s$ = $<$ $s_1$ , ... $s_{m}$ $>$ , we denote the tokens of $<$0 and $<$1 by $<$2 and $<$3 , respectively. For each pair ( $<$4 , $<$5 ), we identify a set of all possible token pairs ( $<$6 , $<$7 ), the occurrences of which are used as features. As learning proceeds, we hope to learn a higher weight for a feature like (first, drafted) and a lower weight for (first, played)."
],
[
"In this section we introduce the experimental setup, the main results and detailed analysis of our system."
],
[
"We use the WebQuestions BIBREF3 dataset, which contains 5,810 questions crawled via Google Suggest service, with answers annotated on Amazon Mechanical Turk. The questions are split into training and test sets, which contain 3,778 questions (65%) and 2,032 questions (35%), respectively. We further split the training questions into 80%/20% for development.",
"To train the MCCNNs and the joint inference model, we need the gold standard relations of the questions. Since this dataset contains only question-answer pairs and annotated topic entities, instead of relying on gold relations we rely on surrogate gold relations which produce answers that have the highest overlap with gold answers. Specifically, for a given question, we first locate the topic entity $e$ in the Freebase graph, then select 1-hop and 2-hop relations connected to the topic entity as relation candidates. The 2-hop relations refer to the $n$ -ary relations of Freebase, i.e., first hop from the subject to a mediator node, and the second from the mediator to the object node. For each relation candidate $r$ , we issue the query ( $e$ , $r$ , $?$ ) to the KB, and label the relation that produces the answer with minimal $F_1$ -loss against the gold answer, as the surrogate gold relation. From the training set, we collect 461 relations to train the MCCNN, and the target prediction during testing time is over these relations."
],
[
"We have 6 dependency tree patterns based on msra14 to decompose the question into sub-questions (See Appendix). We initialize the word embeddings with DBLP:conf/acl/TurianRB10's word representations with dimensions set to 50. The hyper parameters in our model are tuned using the development set. The window size of MCCNN is set to 3. The sizes of the hidden layer 1 and the hidden layer 2 of the two MCCNN channels are set to 200 and 100, respectively. We use the Freebase version of DBLP:conf/emnlp/BerantCFL13, containing 4M entities and 5,323 relations."
],
[
"We use the average question-wise $F_1$ as our evaluation metric. To give an idea of the impact of different configurations of our method, we compare the following with existing methods.",
"This method involves inference on Freebase only. First the entity linking (EL) system is run to predict the topic entity. Then we run the relation extraction (RE) system and select the best relation that can occur with the topic entity. We choose this entity-relation pair to predict the answer.",
"In this method instead of the above pipeline, we perform joint EL and RE as described in sec:jointInference.",
"We use the pipelined EL and RE along with inference on Wikipedia as described in sec:refine.",
"This is our main model. We perform inference on Freebase using joint EL and RE, and then inference on Wikipedia to validate the results. Specifically, we treat the top two predictions of the joint inference model as the candidate subject and relation pairs, and extract the corresponding answers from each pair, take the union, and filter the answer set using Wikipedia.",
"Table 1 summarizes the results on the test data along with the results from the literature. We can see that joint EL and RE performs better than the default pipelined approach, and outperforms most semantic parsing based models, except BIBREF24 which searches partial logical forms in strategic order by combining imitation learning and agenda-based parsing. In addition, inference on unstructured data helps the default model. The joint EL and RE combined with inference on unstructured data further improves the default pipelined model by 9.2% (from 44.1% to 53.3%), and achieves a new state-of-the-art result beating the previous reported best result of yih-EtAl:2015:ACL-IJCNLP (with one-tailed t-test significance of $p < 0.05$ ).",
"From Table 1 , we can see that the joint EL & RE gives a performance boost of 3% (from 44.1 to 47.1). We also analyze the impact of joint inference on the individual components of EL & RE.",
"We first evaluate the EL component using the gold entity annotations on the development set. As shown in Table 2 , for 79.8% questions, our entity linker can correctly find the gold standard topic entities. The joint inference improves this result to 83.2%, a 3.4% improvement. Next we use the surrogate gold relations to evaluate the performance of the RE component on the development set. As shown in Table 2 , the relation prediction accuracy increases by 9.4% (from 45.9% to 55.3%) when using the joint inference.",
"Table 3 presents the results on the impact of individual and joint channels on the end QA performance. When using a single-channel network, we tune the parameters of only one channel while switching off the other channel. As seen, the sentential features are found to be more important than syntactic features. We attribute this to the short and noisy nature of WebQuestions questions due to which syntactic parser wrongly parses or the shortest dependency path does not contain sufficient information to predict a relation. By using both the channels, we see further improvements than using any one of the channels.",
" As shown in Table 1 , when structured inference is augmented with the unstructured inference, we see an improvement of 2.9% (from 44.1% to 47.0%). And when Structured + Joint uses unstructured inference, the performance boosts by 6.2% (from 47.1% to 53.3%) achieving a new state-of-the-art result. For the latter, we manually analyzed the cases in which unstructured inference helps. Table 4 lists some of these questions and the corresponding answers before and after the unstructured inference. We observed the unstructured inference mainly helps for two classes of questions: (1) questions involving aggregation operations (Questions 1-3); (2) questions involving sub-lexical compositionally (Questions 4-5). Questions 1 and 2 contain the predicate $largest$ an aggregation operator. A semantic parsing method should explicitly handle this predicate to trigger $max(.)$ operator. For Question 3, structured inference predicts the Freebase relation fb:teams..from retrieving all the years in which Ray Allen has played basketball. Note that Ray Allen has joined Connecticut University's team in 1993 and NBA from 1996. To answer this question a semantic parsing system would require a min( $\\cdot $ ) operator along with an additional constraint that the year corresponds to the NBA's term. Interestingly, without having to explicitly model these complex predicates, the unstructured inference helps in answering these questions more accurately. Questions 4-5 involve sub-lexical compositionally BIBREF25 predicates father and college. For example in Question 5, the user queries for the colleges that John Steinbeck attended. However, Freebase defines the relation fb:education..institution to describe a person's educational information without discriminating the specific periods such as high school or college. Inference using unstructured data helps in alleviating these representational issues.",
" We analyze the errors of Structured + Joint + Unstructured model. Around 15% of the errors are caused by incorrect entity linking, and around 50% of the errors are due to incorrect relation predictions. The errors in relation extraction are due to (i) insufficient context, e.g., in what is duncan bannatyne, neither the dependency path nor sentential context provides enough evidence for the MCCNN model; (ii) unbalanced distribution of relations (3022 training examples for 461 relations) heavily influences the performance of MCCNN model towards frequently seen relations. The remaining errors are the failure of unstructured inference due to insufficient evidence in Wikipedia or misclassification.",
"In the entity linking component, we had handcrafted POS tag patterns to identify entity mentions, e.g., DT-JJ-NN (noun phrase), NN-IN-NN (prepositional phrase). These patterns are designed to have high recall. Around 80% of entity linking errors are due to incorrect entity prediction even when the correct mention span was found.",
"Around 136 questions (15%) of dev data contains compositional questions, leading to 292 sub-questions (around 2.1 subquestions for a compositional question). Since our question decomposition component is based on manual rules, one question of interest is how these rules perform on other datasets. By human evaluation, we found these rules achieves 95% on a more general but complex QA dataset QALD-5.",
"While our unstructured inference alleviates representational issues to some extent, we still fail at modeling compositional questions such as who is the mother of the father of prince william involving multi-hop relations and the inter alia. Our current assumption that unstructured data could provide evidence for questions may work only for frequently typed queries or for popular domains like movies, politics and geography. We note these limitations and hope our result will foster further research in this area."
],
[
"Over time, the QA task has evolved into two main streams – QA on unstructured data, and QA on structured data. TREC QA evaluations BIBREF26 were a major boost to unstructured QA leading to richer datasets and sophisticated methods BIBREF27 , BIBREF28 , BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 . While initial progress on structured QA started with small toy domains like GeoQuery BIBREF34 , recent focus has shifted to large scale structured KBs like Freebase, DBPedia BIBREF35 , BIBREF36 , BIBREF3 , BIBREF4 , BIBREF37 , and on noisy KBs BIBREF38 , BIBREF39 , BIBREF40 , BIBREF41 , BIBREF42 . An exciting development in structured QA is to exploit multiple KBs (with different schemas) at the same time to answer questions jointly BIBREF43 , BIBREF44 , BIBREF45 . QALD tasks and linked data initiatives are contributing to this trend.",
"Our model combines the best of both worlds by inferring over structured and unstructured data. Though earlier methods exploited unstructured data for KB-QA BIBREF40 , BIBREF3 , BIBREF7 , BIBREF6 , BIBREF16 , these methods do not rely on unstructured data at test time. Our work is closely related to joshi:2014 who aim to answer noisy telegraphic queries using both structured and unstructured data. Their work is limited in answering single relation queries. Our work also has similarities to sun2015open who does question answering on unstructured data but enrich it with Freebase, a reversal of our pipeline. Other line of very recent related work include Yahya:2016:RQE:2835776.2835795 and savenkovknowledge.",
"Our work also intersects with relation extraction methods. While these methods aim to predict a relation between two entities in order to populate KBs BIBREF46 , BIBREF47 , BIBREF48 , we work with sentence level relation extraction for question answering. krishnamurthy2012weakly and fader2014open adopt open relation extraction methods for QA but they require hand-coded grammar for parsing queries. Closest to our extraction method is yao-jacana-freebase-acl2014 and yao-scratch-qa-naacl2015 who also uses sentence level relation extraction for QA. Unlike them, we can predict multiple relations per question, and our MCCNN architecture is more robust to unseen contexts compared to their logistic regression models.",
"dong-EtAl:2015:ACL-IJCNLP1 were the first to use MCCNN for question answering. Yet our approach is very different in spirit to theirs. Dong et al. aim to maximize the similarity between the distributed representation of a question and its answer entities, whereas our network aims to predict Freebase relations. Our search space is several times smaller than theirs since we do not require potential answer entities beforehand (the number of relations is much smaller than the number of entities in Freebase). In addition, our method can explicitly handle compositional questions involving multiple relations, whereas Dong et al. learn latent representation of relation joins which is difficult to comprehend. Moreover, we outperform their method by 7 points even without unstructured inference."
],
[
"We have presented a method that could infer both on structured and unstructured data to answer natural language questions. Our experiments reveal that unstructured inference helps in mitigating representational issues in structured inference. We have also introduced a relation extraction method using MCCNN which is capable of exploiting syntax in addition to sentential features. Our main model which uses joint entity linking and relation extraction along with unstructured inference achieves the state-of-the-art results on WebQuestions dataset. A potential application of our method is to improve KB-question answering using the documents retrieved by a search engine.",
"Since we pipeline structured inference first and then unstructured inference, our method is limited by the coverage of Freebase. Our future work involves exploring other alternatives such as treating structured and unstructured data as two independent resources in order to overcome the knowledge gaps in either of the two resources."
],
[
"We would like to thank Weiwei Sun, Liwei Chen, and the anonymous reviewers for their helpful feedback. This work is supported by National High Technology R&D Program of China (Grant No. 2015AA015403, 2014AA015102), Natural Science Foundation of China (Grant No. 61202233, 61272344, 61370055) and the joint project with IBM Research. For any correspondence, please contact Yansong Feng."
],
[
"The syntax-based patterns for question decomposition are shown in fig:patterns. The first four patterns are designed to extract sub-questions from simple questions, while the latter two are designed for complex questions involving clauses."
]
],
"section_name": [
"Introduction",
"Our Method",
"Inference on Freebase",
"Entity Linking",
"Relation Extraction",
"Joint Entity Linking & Relation Extraction",
"Inference on Wikipedia",
"Finding Evidence from Wikipedia",
"Refinement Model",
"Lexical Features",
"Experiments",
"Training and Evaluation Data",
"Experimental Settings",
"Results and Discussion",
"Related Work",
"Conclusion and Future Work",
"Acknowledgments",
"Appendix"
]
} | {
"answers": [
{
"annotation_id": [
"f6b4737b5920af299e52e7813ed51ce21f83139f",
"d3434912ff20486205a195a31968af9908e55c48"
],
"answer": [
{
"evidence": [
"In this section we introduce the experimental setup, the main results and detailed analysis of our system.",
"Training and Evaluation Data",
"We use the WebQuestions BIBREF3 dataset, which contains 5,810 questions crawled via Google Suggest service, with answers annotated on Amazon Mechanical Turk. The questions are split into training and test sets, which contain 3,778 questions (65%) and 2,032 questions (35%), respectively. We further split the training questions into 80%/20% for development."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"In this section we introduce the experimental setup, the main results and detailed analysis of our system.",
"Training and Evaluation Data\nWe use the WebQuestions BIBREF3 dataset, which contains 5,810 questions crawled via Google Suggest service, with answers annotated on Amazon Mechanical Turk. The questions are split into training and test sets, which contain 3,778 questions (65%) and 2,032 questions (35%), respectively. We further split the training questions into 80%/20% for development."
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"We present a novel method for question answering which infers on both structured and unstructured resources. Our method consists of two main steps as outlined in sec:overview. In the first step we extract answers for a given question using a structured KB (here Freebase) by jointly performing entity linking and relation extraction (sec:kb-qa). In the next step we validate these answers using an unstructured resource (here Wikipedia) to prune out the wrong answers and select the correct ones (sec:refine). Our evaluation results on a benchmark dataset WebQuestions show that our method outperforms existing state-of-the-art models. Details of our experimental setup and results are presented in sec:experiments. Our code, data and results can be downloaded from https://github.com/syxu828/QuestionAnsweringOverFB.",
"We use the WebQuestions BIBREF3 dataset, which contains 5,810 questions crawled via Google Suggest service, with answers annotated on Amazon Mechanical Turk. The questions are split into training and test sets, which contain 3,778 questions (65%) and 2,032 questions (35%), respectively. We further split the training questions into 80%/20% for development."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Our evaluation results on a benchmark dataset WebQuestions show that our method outperforms existing state-of-the-art models.",
"We use the WebQuestions BIBREF3 dataset, which contains 5,810 questions crawled via Google Suggest service, with answers annotated on Amazon Mechanical Turk."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"1d91b27b22a138a14652e8b329fa37b413ceb82a",
"cffb82e83f830c03a13c8cb28cd3bf89f6df2219"
],
"answer": [
{
"evidence": [
"Table 1 summarizes the results on the test data along with the results from the literature. We can see that joint EL and RE performs better than the default pipelined approach, and outperforms most semantic parsing based models, except BIBREF24 which searches partial logical forms in strategic order by combining imitation learning and agenda-based parsing. In addition, inference on unstructured data helps the default model. The joint EL and RE combined with inference on unstructured data further improves the default pipelined model by 9.2% (from 44.1% to 53.3%), and achieves a new state-of-the-art result beating the previous reported best result of yih-EtAl:2015:ACL-IJCNLP (with one-tailed t-test significance of $p < 0.05$ ).",
"We now proceed to identify the relation between the answer and the entity in the question. Inspired by the recent success of neural network models in KB question-answering BIBREF16 , BIBREF12 , and the success of syntactic dependencies for relation extraction BIBREF17 , BIBREF18 , we propose a Multi-Channel Convolutional Neural Network (MCCNN) which could exploit both syntactic and sentential information for relation extraction."
],
"extractive_spans": [
"BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10",
"BIBREF11 , BIBREF12",
"BIBREF7 , BIBREF13 , BIBREF14",
" BIBREF16"
],
"free_form_answer": "",
"highlighted_evidence": [
"(",
"Table 1 summarizes the results on the test data along with the results from the literature. ",
" Inspired by the recent success of neural network models in KB question-answering BIBREF16 , BIBREF12 , and the success of syntactic dependencies for relation extraction BIBREF17 , BIBREF18 , we propose a Multi-Channel Convolutional Neural Network (MCCNN) which could exploit both syntactic and sentential information for relation extraction."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 1: Results on the test set."
],
"extractive_spans": [],
"free_form_answer": "Berant et al. (2013), Yao and Van Durme (2014), Xu et al. (2014), Berant and Liang (2014), Bao et al. (2014), Border et al. (2014), Dong et al. (2015), Yao (2015), Bast and Haussmann (2015), Berant and Liang (2015), Reddy et al. (2016), Yih et al. (2015)",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Results on the test set."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"d0ff8cff4f4dce573b523c53e90216d317c4f89a",
"f6afe55709b9486415238dd1678dc6cd335a8061"
],
"answer": [
{
"evidence": [
"Knowledge bases like Freebase capture real world facts, and Web resources like Wikipedia provide a large repository of sentences that validate or support these facts. For example, a sentence in Wikipedia says, Denali (also known as Mount McKinley, its former official name) is the highest mountain peak in North America, with a summit elevation of 20,310 feet (6,190 m) above sea level. To answer our example question against a KB using a relation extractor, we can use this sentence as external evidence, filter out wrong answers and pick the correct one.",
"Using textual evidence not only mitigates representational issues in relation extraction, but also alleviates the data scarcity problem to some extent. Consider the question, who was queen isabella's mother. Answering this question involves predicting two constraints hidden in the word mother. One constraint is that the answer should be the parent of Isabella, and the other is that the answer's gender is female. Such words with multiple latent constraints have been a pain-in-the-neck for both semantic parsing and relation extraction, and requires larger training data (this phenomenon is coined as sub-lexical compositionality by wang2015). Most systems are good at triggering the parent constraint, but fail on the other, i.e., the answer entity should be female. Whereas the textual evidence from Wikipedia, ...her mother was Isabella of Barcelos ..., can act as a further constraint to answer the question correctly."
],
"extractive_spans": [],
"free_form_answer": "Wikipedia sentences that validate or support KB facts",
"highlighted_evidence": [
"these",
"Knowledge bases like Freebase capture real world facts, and Web resources like Wikipedia provide a large repository of sentences that validate or support these facts. For example, a sentence in Wikipedia says, Denali (also known as Mount McKinley, its former official name) is the highest mountain peak in North America, with a summit elevation of 20,310 feet (6,190 m) above sea level. To answer our example question against a KB using a relation extractor, we can use this sentence as external evidence, filter out wrong answers and pick the correct one.",
"Using textual evidence not only mitigates representational issues in relation extraction, but also alleviates the data scarcity problem to some extent."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"fig:qaframework gives an overview of our method for the question “who did shaq first play for”. We have two main steps: (1) inference on Freebase (KB-QA box); and (2) further inference on Wikipedia (Answer Refinement box). Let us take a close look into step 1. Here we perform entity linking to identify a topic entity in the question and its possible Freebase entities. We employ a relation extractor to predict the potential Freebase relations that could exist between the entities in the question and the answer entities. Later we perform a joint inference step over the entity linking and relation extraction results to find the best entity-relation configuration which will produce a list of candidate answer entities. In the step 2, we refine these candidate answers by applying an answer refinement model which takes the Wikipedia page of the topic entity into consideration to filter out the wrong answers and pick the correct ones."
],
"extractive_spans": [
"by applying an answer refinement model which takes the Wikipedia page of the topic entity into consideration to filter out the wrong answers and pick the correct ones"
],
"free_form_answer": "",
"highlighted_evidence": [
"In the step 2, we refine these candidate answers by applying an answer refinement model which takes the Wikipedia page of the topic entity into consideration to filter out the wrong answers and pick the correct ones."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"3b196f839fa5129ef2cddb2ce14745a7119ad55d",
"8806c343f940fa8106dd554a7ef46f466497cc64"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1: Results on the test set."
],
"extractive_spans": [],
"free_form_answer": "0.8 point improvement",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Results on the test set."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 1: Results on the test set."
],
"extractive_spans": [],
"free_form_answer": "0.8 point on average (question-wise) F1 measure ",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Results on the test set."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
},
{
"annotation_id": [
"a490591bbe9fc1992166031859f6f7a7abb9498b",
"bf559aae9dff4afc60cf8af4c1028f7ce5a4df57"
],
"answer": [
{
"evidence": [
"The state-of-the-art methods for this task can be roughly categorized into two streams. The first is based on semantic parsing BIBREF3 , BIBREF4 , which typically learns a grammar that can parse natural language to a sophisticated meaning representation language. But such sophistication requires a lot of annotated training examples that contains compositional structures, a practically impossible solution for large KBs such as Freebase. Furthermore, mismatches between grammar predicted structures and KB structure is also a common problem BIBREF4 , BIBREF5 , BIBREF6 .",
"On the other hand, instead of building a formal meaning representation, information extraction methods retrieve a set of candidate answers from KB using relation extraction BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 or distributed representations BIBREF11 , BIBREF12 . Designing large training datasets for these methods is relatively easy BIBREF7 , BIBREF13 , BIBREF14 . These methods are often good at producing an answer irrespective of their correctness. However, handling compositional questions that involve multiple entities and relations, still remains a challenge. Consider the question what mountain is the highest in north america. Relation extraction methods typically answer with all the mountains in North America because of the lack of sophisticated representation for the mathematical function highest. To select the correct answer, one has to retrieve all the heights of the mountains, and sort them in descending order, and then pick the first entry. We propose a method based on textual evidence which can answer such questions without solving the mathematic functions implicitly.",
"We now proceed to identify the relation between the answer and the entity in the question. Inspired by the recent success of neural network models in KB question-answering BIBREF16 , BIBREF12 , and the success of syntactic dependencies for relation extraction BIBREF17 , BIBREF18 , we propose a Multi-Channel Convolutional Neural Network (MCCNN) which could exploit both syntactic and sentential information for relation extraction.",
"FLOAT SELECTED: Table 1: Results on the test set."
],
"extractive_spans": [],
"free_form_answer": "F1 score of 39.9 for semantic-based parsing methods. For information extraction methods, 49.4 using relation extraction, 40.8 using distributed representations, and 52.5 using neural networks models",
"highlighted_evidence": [
"The state-of-the-art methods for this task can be roughly categorized into two streams. The first is based on semantic parsing BIBREF3 , BIBREF4 , which typically learns a grammar that can parse natural language to a sophisticated meaning representation language. But such sophistication requires a lot of annotated training examples that contains compositional structures, a practically impossible solution for large KBs such as Freebase. Furthermore, mismatches between grammar predicted structures and KB structure is also a common problem BIBREF4 , BIBREF5 , BIBREF6 .",
"On the other hand, instead of building a formal meaning representation, information extraction methods retrieve a set of candidate answers from KB using relation extraction BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 or distributed representations BIBREF11 , BIBREF12 .",
"Inspired by the recent success of neural network models in KB question-answering BIBREF16 , BIBREF12 , and the success of syntactic dependencies for relation extraction BIBREF17 , BIBREF18 , we propose a Multi-Channel Convolutional Neural Network (MCCNN) which could exploit both syntactic and sentential information for relation extraction.",
"FLOAT SELECTED: Table 1: Results on the test set."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Table 1 summarizes the results on the test data along with the results from the literature. We can see that joint EL and RE performs better than the default pipelined approach, and outperforms most semantic parsing based models, except BIBREF24 which searches partial logical forms in strategic order by combining imitation learning and agenda-based parsing. In addition, inference on unstructured data helps the default model. The joint EL and RE combined with inference on unstructured data further improves the default pipelined model by 9.2% (from 44.1% to 53.3%), and achieves a new state-of-the-art result beating the previous reported best result of yih-EtAl:2015:ACL-IJCNLP (with one-tailed t-test significance of $p < 0.05$ )."
],
"extractive_spans": [
"yih-EtAl:2015:ACL-IJCNLP"
],
"free_form_answer": "",
"highlighted_evidence": [
"The joint EL and RE combined with inference on unstructured data further improves the default pipelined model by 9.2% (from 44.1% to 53.3%), and achieves a new state-of-the-art result beating the previous reported best result of yih-EtAl:2015:ACL-IJCNLP (with one-tailed t-test significance of $p < 0.05$ )."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
}
],
"nlp_background": [
"five",
"five",
"two",
"two",
"two"
],
"paper_read": [
"somewhat",
"somewhat",
"no",
"no",
"no"
],
"question": [
"Are experiments conducted on multiple datasets?",
"What baselines is the neural relation extractor compared to?",
"What additional evidence they use?",
"How much improvement they get from the previous state-of-the-art?",
"What is the previous state-of-the-art?"
],
"question_id": [
"55bafa0f7394163f4afd1d73340aac94c2d9f36c",
"cbb4eba59434d596749408be5b923efda7560890",
"1d9d7c96c5e826ac06741eb40e89fca6b4b022bd",
"d1d37dec9053d465c8b6f0470e06316bccf344b3",
"90eeb1b27f84c83ffcc8a88bc914a947c01a0c8b"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"question answering",
"question answering",
"Question Answering",
"Question Answering",
"Question Answering"
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: An illustration of our method to find answers for the given question who did shaq first play for.",
"Figure 2: Overview of the multi-channel convolutional neural network for relation extraction. We is the word embedding matrix, W1 is the convolution matrix, W2 is the activation matrix and W3 is the classification matrix.",
"Table 1: Results on the test set.",
"Table 2: Impact of the joint inference on the development set",
"Table 3: Impact of different MCCNN channels on the development set.",
"Table 4: Example questions and corresponding predicted answers before and after using unstructured inference. Before uses (Structured + Joint) model, and After uses Structured + Joint + Unstructured model for prediction. The colors blue and red indicate correct and wrong answers respectively.",
"Figure 3: Syntax-based patterns for question decomposition."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"6-Table1-1.png",
"7-Table2-1.png",
"7-Table3-1.png",
"7-Table4-1.png",
"9-Figure3-1.png"
]
} | [
"What baselines is the neural relation extractor compared to?",
"What additional evidence they use?",
"How much improvement they get from the previous state-of-the-art?",
"What is the previous state-of-the-art?"
] | [
[
"1603.00957-6-Table1-1.png",
"1603.00957-Results and Discussion-5",
"1603.00957-Relation Extraction-0"
],
[
"1603.00957-Introduction-4",
"1603.00957-Introduction-3",
"1603.00957-Our Method-0"
],
[
"1603.00957-6-Table1-1.png"
],
[
"1603.00957-6-Table1-1.png",
"1603.00957-Introduction-1",
"1603.00957-Relation Extraction-0",
"1603.00957-Results and Discussion-5",
"1603.00957-Introduction-2"
]
] | [
"Berant et al. (2013), Yao and Van Durme (2014), Xu et al. (2014), Berant and Liang (2014), Bao et al. (2014), Border et al. (2014), Dong et al. (2015), Yao (2015), Bast and Haussmann (2015), Berant and Liang (2015), Reddy et al. (2016), Yih et al. (2015)",
"Wikipedia sentences that validate or support KB facts",
"0.8 point on average (question-wise) F1 measure ",
"F1 score of 39.9 for semantic-based parsing methods. For information extraction methods, 49.4 using relation extraction, 40.8 using distributed representations, and 52.5 using neural networks models"
] | 368 |
1804.08000 | Fine-grained Entity Typing through Increased Discourse Context and Adaptive Classification Thresholds | Fine-grained entity typing is the task of assigning fine-grained semantic types to entity mentions. We propose a neural architecture which learns a distributional semantic representation that leverages a greater amount of semantic context -- both document and sentence level information -- than prior work. We find that additional context improves performance, with further improvements gained by utilizing adaptive classification thresholds. Experiments show that our approach without reliance on hand-crafted features achieves the state-of-the-art results on three benchmark datasets. | {
"paragraphs": [
[
"Named entity typing is the task of detecting the type (e.g., person, location, or organization) of a named entity in natural language text. Entity type information has shown to be useful in natural language tasks such as question answering BIBREF0 , knowledge-base population BIBREF1 , BIBREF2 , and co-reference resolution BIBREF3 . Motivated by its application to downstream tasks, recent work on entity typing has moved beyond standard coarse types towards finer-grained semantic types with richer ontologies BIBREF0 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 . Rather than assuming an entity can be uniquely categorized into a single type, the task has been approached as a multi-label classification problem: e.g., in “... became a top seller ... Monopoly is played in 114 countries. ...” (fig:arch), “Monopoly” is considered both a game as well as a product.",
"The state-of-the-art approach BIBREF8 for fine-grained entity typing employs an attentive neural architecture to learn representations of the entity mention as well as its context. These representations are then combined with hand-crafted features (e.g., lexical and syntactic features), and fed into a linear classifier with a fixed threshold. While this approach outperforms previous approaches which only use sparse binary features BIBREF4 , BIBREF6 or distributed representations BIBREF9 , it has a few drawbacks: (1) the representations of left and right contexts are learnt independently, ignoring their mutual connection; (2) the attention on context is computed solely upon the context, considering no alignment to the entity; (3) document-level contexts which could be useful in classification are not exploited; and (4) hand-crafted features heavily rely on system or human annotations.",
"To overcome these drawbacks, we propose a neural architecture (fig:arch) which learns more context-aware representations by using a better attention mechanism and taking advantage of semantic discourse information available in both the document as well as sentence-level contexts. Further, we find that adaptive classification thresholds leads to further improvements. Experiments demonstrate that our approach, without any reliance on hand-crafted features, outperforms prior work on three benchmark datasets."
],
[
"Fine-grained entity typing is considered a multi-label classification problem: Each entity INLINEFORM0 in the text INLINEFORM1 is assigned a set of types INLINEFORM2 drawn from the fine-grained type set INLINEFORM3 . The goal of this task is to predict, given entity INLINEFORM4 and its context INLINEFORM5 , the assignment of types to the entity. This assignment can be represented by a binary vector INLINEFORM6 where INLINEFORM7 is the size of INLINEFORM8 . INLINEFORM9 iff the entity is assigned type INLINEFORM10 ."
],
[
"Given a type embedding vector INLINEFORM0 and a featurizer INLINEFORM1 that takes entity INLINEFORM2 and its context INLINEFORM3 , we employ the logistic regression (as shown in fig:arch) to model the probability of INLINEFORM4 assigned INLINEFORM5 (i.e., INLINEFORM6 ) DISPLAYFORM0 ",
"and we seek to learn a type embedding matrix INLINEFORM0 and a featurizer INLINEFORM1 such that DISPLAYFORM0 ",
"At inference, the predicted type set INLINEFORM0 assigned to entity INLINEFORM1 is carried out by DISPLAYFORM0 ",
"with INLINEFORM0 the threshold for predicting INLINEFORM1 has type INLINEFORM2 ."
],
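The probability model and the thresholded inference described above can be summarized in a short sketch. This is an illustrative reconstruction rather than the authors' released code; the class and variable names (TypeScorer, feat_dim, thresholds) are our own, and the training loss mentioned in the comment is one standard choice for multi-label logistic regression, assumed here.

```python
import torch
import torch.nn as nn

class TypeScorer(nn.Module):
    """Per-type logistic regression over the featurizer output phi(e, c)."""
    def __init__(self, feat_dim, num_types):
        super().__init__()
        # Each row of this weight matrix plays the role of a type embedding w_t.
        self.type_emb = nn.Linear(feat_dim, num_types, bias=False)

    def probs(self, features):                          # features: (batch, feat_dim)
        return torch.sigmoid(self.type_emb(features))   # P(y_t = 1 | e, c)

    def predict(self, features, thresholds):            # thresholds: per-type r_t values
        p = self.probs(features)
        return [{t for t, pt in enumerate(row) if pt >= thresholds[t]}
                for row in p.tolist()]

# Training would minimize binary cross-entropy over all types, e.g.
#   loss = nn.BCEWithLogitsLoss()(scorer.type_emb(features), gold_binary_vectors)
```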
[
"As shown in fig:arch, featurizer INLINEFORM0 in our model contains three encoders which encode entity INLINEFORM1 and its context INLINEFORM2 into feature vectors, and we consider both sentence-level context INLINEFORM3 and document-level context INLINEFORM4 in contrast to prior work which only takes sentence-level context BIBREF6 , BIBREF8 . ",
"The output of featurizer INLINEFORM0 is the concatenation of these feature vectors: DISPLAYFORM0 ",
" We define the computation of these feature vectors in the followings.",
"Entity Encoder: The entity encoder INLINEFORM0 computes the average of all the embeddings of tokens in entity INLINEFORM1 .",
"Sentence-level Context Encoder: The encoder INLINEFORM0 for sentence-level context INLINEFORM1 employs a single bi-directional RNN to encode INLINEFORM2 . Formally, let the tokens in INLINEFORM3 be INLINEFORM4 . The hidden state INLINEFORM5 for token INLINEFORM6 is a concatenation of a left-to-right hidden state INLINEFORM7 and a right-to-left hidden state INLINEFORM8 , DISPLAYFORM0 ",
" where INLINEFORM0 and INLINEFORM1 are INLINEFORM2 -layer stacked LSTMs units BIBREF10 . This is different from shimaoka-EtAl:2017:EACLlong who use two separate bi-directional RNNs for context on each side of the entity mention.",
"Attention: The feature representation for INLINEFORM0 is a weighted sum of the hidden states: INLINEFORM1 , where INLINEFORM2 is the attention to hidden state INLINEFORM3 . We employ the dot-product attention BIBREF11 . It computes attention based on the alignment between the entity and its context: DISPLAYFORM0 ",
" where INLINEFORM0 is the weight matrix. The dot-product attention differs from the self attention BIBREF8 which only considers the context.",
"Document-level Context Encoder: The encoder INLINEFORM0 for document-level context INLINEFORM1 is a multi-layer perceptron: DISPLAYFORM0 ",
" where DM is a pretrained distributed memory model BIBREF12 which converts the document-level context into a distributed representation. INLINEFORM0 and INLINEFORM1 are weight matrices."
],
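Below is a minimal PyTorch sketch of the three encoders and the entity-aligned dot-product attention described above, written for a single instance. Dimensions, batching, and names such as `entity_ids`, `ctx_ids`, and `doc_vec` are assumptions made for illustration; details such as the activation in the document MLP may differ from the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Featurizer(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden=200, doc_in=50, doc_out=50):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)              # shared word embeddings
        self.ctx_rnn = nn.LSTM(emb_dim, hidden, num_layers=2,
                               bidirectional=True, batch_first=True)
        self.attn_w = nn.Linear(emb_dim, 2 * hidden, bias=False)  # W_a in the attention above
        self.doc_mlp = nn.Sequential(nn.Linear(doc_in, 100), nn.ReLU(),
                                     nn.Linear(100, doc_out))     # activation is an assumption

    def forward(self, entity_ids, ctx_ids, doc_vec):
        # Entity encoder: average of the entity-token embeddings.
        ent = self.emb(entity_ids).mean(dim=1)                    # (1, emb_dim)
        # Sentence-level context encoder: one biLSTM over the whole sentence.
        h, _ = self.ctx_rnn(self.emb(ctx_ids))                    # (1, T, 2*hidden)
        # Dot-product attention aligned with the entity representation.
        scores = torch.bmm(h, self.attn_w(ent).unsqueeze(2))      # (1, T, 1)
        alpha = F.softmax(scores, dim=1)
        ctx = (alpha * h).sum(dim=1)                              # (1, 2*hidden)
        # Document-level context encoder: MLP over a pretrained doc2vec-style vector.
        doc = self.doc_mlp(doc_vec)                               # (1, doc_out)
        return torch.cat([ent, ctx, doc], dim=-1)                 # phi(e, c)
```

A full model would feed this concatenated vector into a typing head such as the TypeScorer sketched earlier.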
[
"In prior work, a fixed threshold ( INLINEFORM0 ) is used for classification of all types BIBREF4 , BIBREF8 . We instead assign a different threshold to each type that is optimized to maximize the overall strict INLINEFORM1 on the dev set. We show the definition of strict INLINEFORM2 in Sectionsubsec:metrics."
],
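One way to realize the adaptive-threshold calibration is a per-type sweep on the dev set; the greedy search order and the candidate grid below are our assumptions rather than a procedure stated in the paper, and `dev_probs` / `dev_gold` are assumed to come from an already-trained model.

```python
import numpy as np

def strict_f1(pred_sets, gold_sets):
    # Strict F1 reduces to exact-match accuracy, since P = R here.
    return sum(p == g for p, g in zip(pred_sets, gold_sets)) / max(len(gold_sets), 1)

def tune_thresholds(dev_probs, dev_gold, grid=np.arange(0.05, 1.0, 0.05)):
    n_types = dev_probs.shape[1]
    thresholds = np.full(n_types, 0.5)            # start from the usual fixed default
    def decode(th):
        return [set(np.where(row >= th)[0]) for row in dev_probs]
    best = strict_f1(decode(thresholds), dev_gold)
    for t in range(n_types):                      # greedy, one type at a time
        for cand in grid:
            trial = thresholds.copy()
            trial[t] = cand
            score = strict_f1(decode(trial), dev_gold)
            if score > best:
                best, thresholds = score, trial
    return thresholds, best
```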
[
"We conduct experiments on three publicly available datasets. tab:stat shows the statistics of these datasets.",
"OntoNotes: gillick2014context sampled sentences from OntoNotes BIBREF13 and annotated entities in these sentences using 89 types. We use the same train/dev/test splits in shimaoka-EtAl:2017:EACLlong. Document-level contexts are retrieved from the original OntoNotes corpus.",
"BBN: weischedel2005bbn annotated entities in Wall Street Journal using 93 types. We use the train/test splits in Ren:2016:LNR:2939672.2939822 and randomly hold out 2,000 pairs for dev. Document contexts are retrieved from the original corpus.",
"FIGER: Ling2012 sampled sentences from 780k Wikipedia articles and 434 news reports to form the train and test data respectively, and annotated entities using 113 types. The splits we use are the same in shimaoka-EtAl:2017:EACLlong."
],
[
"We adopt the metrics used in Ling2012 where results are evaluated via strict, loose macro, loose micro INLINEFORM0 scores. For the INLINEFORM1 -th instance, let the predicted type set be INLINEFORM2 , and the reference type set INLINEFORM3 . The precision ( INLINEFORM4 ) and recall ( INLINEFORM5 ) for each metric are computed as follow.",
"Strict: INLINEFORM0 ",
"Loose Macro: INLINEFORM0 ",
"Loose Micro: INLINEFORM0 "
],
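The three metrics above can be implemented directly from their definitions. The snippet below is an illustrative version (function and variable names are ours), with predictions and references given as Python sets of type ids per instance.

```python
def _f1(p, r):
    return 2 * p * r / (p + r) if p + r > 0 else 0.0

def strict(preds, golds):
    acc = sum(p == g for p, g in zip(preds, golds)) / len(golds)
    return _f1(acc, acc)                                   # P = R = exact-match accuracy

def loose_macro(preds, golds):
    p = sum(len(s & g) / len(s) for s, g in zip(preds, golds) if s) / len(golds)
    r = sum(len(s & g) / len(g) for s, g in zip(preds, golds) if g) / len(golds)
    return _f1(p, r)                                       # empty sets contribute 0

def loose_micro(preds, golds):
    inter = sum(len(s & g) for s, g in zip(preds, golds))
    p = inter / max(sum(len(s) for s in preds), 1)
    r = inter / max(sum(len(g) for g in golds), 1)
    return _f1(p, r)

# e.g. preds = [{1, 2}], golds = [{1}]: strict = 0.0, loose macro/micro F1 ~ 0.67
```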
[
"We use open-source GloVe vectors BIBREF14 trained on Common Crawl 840B with 300 dimensions to initialize word embeddings used in all encoders. All weight parameters are sampled from INLINEFORM0 . The encoder for sentence-level context is a 2-layer bi-directional RNN with 200 hidden units. The DM output size is 50. Sizes of INLINEFORM1 , INLINEFORM2 and INLINEFORM3 are INLINEFORM4 , INLINEFORM5 , and INLINEFORM6 respectively. Adam optimizer BIBREF15 and mini-batch gradient is used for optimization. Batch size is 200. Dropout (rate=0.5) is applied to three feature functions. To avoid overfitting, we choose models which yield the best strict INLINEFORM7 on dev sets."
],
[
"We compare experimental results of our approach with previous approaches, and study contribution of our base model architecture, document-level contexts and adaptive thresholds via ablation. To ensure our findings are reliable, we run each experiment twice and report the average performance.",
"Overall, our approach significantly increases the state-of-the-art macro INLINEFORM0 on both OntoNotes and BBN datasets.",
"On OntoNotes (tab:ontonotes), our approach improves the state of the art across all three metrics. Note that (1) without adaptive thresholds or document-level contexts, our approach still outperforms other approaches on macro INLINEFORM0 and micro INLINEFORM1 ; (2) adding hand-crafted features BIBREF8 does not improve the performance. This indicates the benefits of our proposed model architecture for learning fine-grained entity typing, which is discussed in detail in Sectionsec:ana; and (3) Binary and Kwasibie were trained on a different dataset, so their results are not directly comparable.",
"On BBN (tab:bbn), while C16-1017's label embedding algorithm holds the best strict INLINEFORM0 , our approach notably improves both macro INLINEFORM1 and micro INLINEFORM2 . The performance drops to a competitive level with other approaches if adaptive thresholds or document-level contexts are removed.",
"On FIGER (tab:figer) where no document-level context is currently available, our proposed approach still achieves the state-of-the-art strict and micro INLINEFORM0 . If compared with the ablation variant of the Neural approach, i.e., w/o hand-crafted features, our approach gains significant improvement. We notice that removing adaptive thresholds only causes a small performance drop; this is likely because the train and test splits of FIGER are from different sources, and adaptive thresholds are not generalized well enough to the test data. Kwasibie, Attentive and Fnet were trained on a different dataset, so their results are not directly comparable."
],
[
"tab:cases shows examples illustrating the benefits brought by our proposed approach. Example A illustrates that sentence-level context sometimes is not informative enough, and attention, though already placed on the head verbs, can be misleading. Including document-level context (i.e., “Canada's declining crude output” in this case) helps preclude wrong predictions (i.e., /other/health and /other/health/treatment). Example B shows that the semantic patterns learnt by our attention mechanism help make the correct prediction. As we observe in tab:ontonotes and tab:figer, adding hand-crafted features to our approach does not improve the results. One possible explanation is that hand-crafted features are mostly about syntactic-head or topic information, and such information are already covered by our attention mechanism and document-level contexts as shown in tab:cases. Compared to hand-crafted features that heavily rely on system or human annotations, attention mechanism requires significantly less supervision, and document-level or paragraph-level contexts are much easier to get.",
"Through experiments, we observe no improvement by encoding type hierarchical information BIBREF8 . To explain this, we compute cosine similarity between each pair of fine-grained types based on the type embeddings learned by our model, i.e., INLINEFORM3 in eq:prob. tab:type-sim shows several types and their closest types: these types do not always share coarse-grained types with their closest types, but they often co-occur in the same context."
],
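The type-similarity analysis mentioned above amounts to a nearest-neighbour search over the learned type embedding rows. A small sketch follows; the inputs `type_emb` (a |T| x d matrix) and `type_names` are assumed to come from a trained model.

```python
import numpy as np

def closest_types(type_emb, type_names, k=3):
    norm = type_emb / np.linalg.norm(type_emb, axis=1, keepdims=True)
    sim = norm @ norm.T                       # pairwise cosine similarities
    np.fill_diagonal(sim, -np.inf)            # ignore self-similarity
    return {name: [(type_names[j], float(sim[i, j]))
                   for j in np.argsort(-sim[i])[:k]]
            for i, name in enumerate(type_names)}
```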
[
"We propose a new approach for fine-grained entity typing. The contributions are: (1) we propose a neural architecture which learns a distributional semantic representation that leverage both document and sentence level information, (2) we find that context increased with document-level information improves performance, and (3) we utilize adaptive classification thresholds to further boost the performance. Experiments show our approach achieves new state-of-the-art results on three benchmarks."
],
[
"This work was supported in part by the JHU Human Language Technology Center of Excellence (HLTCOE), and DARPA LORELEI. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes. The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of DARPA or the U.S. Government."
]
],
"section_name": [
"Introduction",
"Model",
"General Model",
"Featurizer",
"Adaptive Thresholds",
"Experiments",
"Metrics",
"Hyperparameters",
"Results",
"Analysis",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"1e11c26fe1f3b8160a264541e4fea91b4a2faaea",
"e897d47189bb3ea21c0ca8b212481885c1540df4"
],
"answer": [
{
"evidence": [
"General Model",
"Given a type embedding vector INLINEFORM0 and a featurizer INLINEFORM1 that takes entity INLINEFORM2 and its context INLINEFORM3 , we employ the logistic regression (as shown in fig:arch) to model the probability of INLINEFORM4 assigned INLINEFORM5 (i.e., INLINEFORM6 ) DISPLAYFORM0",
"and we seek to learn a type embedding matrix INLINEFORM0 and a featurizer INLINEFORM1 such that DISPLAYFORM0"
],
"extractive_spans": [
"logistic regression"
],
"free_form_answer": "",
"highlighted_evidence": [
"General Model\nGiven a type embedding vector INLINEFORM0 and a featurizer INLINEFORM1 that takes entity INLINEFORM2 and its context INLINEFORM3 , we employ the logistic regression (as shown in fig:arch) to model the probability of INLINEFORM4 assigned INLINEFORM5 (i.e., INLINEFORM6 ) DISPLAYFORM0\n\nand we seek to learn a type embedding matrix INLINEFORM0 and a featurizer INLINEFORM1 such that DISPLAYFORM0"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Figure 1: Neural architecture for predicting the types of entity mention “Monopoly” in the text “... became a top seller ... Monopoly is played in 114 countries. ...”. Part of document-level context is omitted."
],
"extractive_spans": [],
"free_form_answer": "Document-level context encoder, entity and sentence-level context encoders with common attention, then logistic regression, followed by adaptive thresholds.",
"highlighted_evidence": [
"FLOAT SELECTED: Figure 1: Neural architecture for predicting the types of entity mention “Monopoly” in the text “... became a top seller ... Monopoly is played in 114 countries. ...”. Part of document-level context is omitted."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"1ddd4df7a5e941368e40c34e9590524c890554d2",
"e3c7d565b7d5a73c223340909fd177504455a45c"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 6: Type similarity."
],
"extractive_spans": [],
"free_form_answer": "/other/event/accident, /person/artist/music, /other/product/mobile phone, /other/event/sports event, /other/product/car",
"highlighted_evidence": [
"FLOAT SELECTED: Table 6: Type similarity."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"708cdef6bf18e78fd46b5e4fba603ccee9696598",
"fb33ae38ac5db0fb504a61429a04620ae5f7239b"
],
"answer": [
{
"evidence": [
"The state-of-the-art approach BIBREF8 for fine-grained entity typing employs an attentive neural architecture to learn representations of the entity mention as well as its context. These representations are then combined with hand-crafted features (e.g., lexical and syntactic features), and fed into a linear classifier with a fixed threshold. While this approach outperforms previous approaches which only use sparse binary features BIBREF4 , BIBREF6 or distributed representations BIBREF9 , it has a few drawbacks: (1) the representations of left and right contexts are learnt independently, ignoring their mutual connection; (2) the attention on context is computed solely upon the context, considering no alignment to the entity; (3) document-level contexts which could be useful in classification are not exploited; and (4) hand-crafted features heavily rely on system or human annotations."
],
"extractive_spans": [
"lexical and syntactic features"
],
"free_form_answer": "",
"highlighted_evidence": [
"These representations are then combined with hand-crafted features (e.g., lexical and syntactic features), and fed into a linear classifier with a fixed threshold."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The state-of-the-art approach BIBREF8 for fine-grained entity typing employs an attentive neural architecture to learn representations of the entity mention as well as its context. These representations are then combined with hand-crafted features (e.g., lexical and syntactic features), and fed into a linear classifier with a fixed threshold. While this approach outperforms previous approaches which only use sparse binary features BIBREF4 , BIBREF6 or distributed representations BIBREF9 , it has a few drawbacks: (1) the representations of left and right contexts are learnt independently, ignoring their mutual connection; (2) the attention on context is computed solely upon the context, considering no alignment to the entity; (3) document-level contexts which could be useful in classification are not exploited; and (4) hand-crafted features heavily rely on system or human annotations."
],
"extractive_spans": [
"e.g., lexical and syntactic features"
],
"free_form_answer": "",
"highlighted_evidence": [
"These representations are then combined with hand-crafted features (e.g., lexical and syntactic features), and fed into a linear classifier with a fixed threshold."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"What is the architecture of the model?",
"What fine-grained semantic types are considered?",
"What hand-crafted features do other approaches use?"
],
"question_id": [
"e057fa254ea7a4335de22fd97a0f08814b88aea4",
"134a66580c363287ec079f353ead8f770ac6d17b",
"610fc593638c5e9809ea9839912d0b282541d42d"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Neural architecture for predicting the types of entity mention “Monopoly” in the text “... became a top seller ... Monopoly is played in 114 countries. ...”. Part of document-level context is omitted.",
"Table 1: Statistics of the datasets.",
"Table 2: Examples showing the improvement brought by document-level contexts and dot-product attention. Entities are shown in the green box. The gray boxes visualize attention weights (darkness) on context tokens.",
"Table 3: Results on the OntoNotes dataset.",
"Table 5: Results on the FIGER dataset.",
"Table 4: Results on the BBN dataset.",
"Table 6: Type similarity."
],
"file": [
"1-Figure1-1.png",
"3-Table1-1.png",
"4-Table2-1.png",
"4-Table3-1.png",
"4-Table5-1.png",
"4-Table4-1.png",
"5-Table6-1.png"
]
} | [
"What is the architecture of the model?",
"What fine-grained semantic types are considered?"
] | [
[
"1804.08000-1-Figure1-1.png"
],
[
"1804.08000-5-Table6-1.png"
]
] | [
"Document-level context encoder, entity and sentence-level context encoders with common attention, then logistic regression, followed by adaptive thresholds.",
"/other/event/accident, /person/artist/music, /other/product/mobile phone, /other/event/sports event, /other/product/car"
] | 369 |
1908.05803 | Quoref: A Reading Comprehension Dataset with Questions Requiring Coreferential Reasoning | Machine comprehension of texts longer than a single sentence often requires coreference resolution. However, most current reading comprehension benchmarks do not contain complex coreferential phenomena and hence fail to evaluate the ability of models to resolve coreference. We present a new crowdsourced dataset containing more than 24K span-selection questions that require resolving coreference among entities in over 4.7K English paragraphs from Wikipedia. Obtaining questions focused on such phenomena is challenging, because it is hard to avoid lexical cues that shortcut complex reasoning. We deal with this issue by using a strong baseline model as an adversary in the crowdsourcing loop, which helps crowdworkers avoid writing questions with exploitable surface cues. We show that state-of-the-art reading comprehension models perform significantly worse than humans on this benchmark---the best model performance is 70.5 F1, while the estimated human performance is 93.4 F1. | {
"paragraphs": [
[
"Paragraphs and other longer texts typically make multiple references to the same entities. Tracking these references and resolving coreference is essential for full machine comprehension of these texts. Significant progress has recently been made in reading comprehension research, due to large crowdsourced datasets BIBREF0, BIBREF1, BIBREF2, BIBREF3. However, these datasets focus largely on understanding local predicate-argument structure, with very few questions requiring long-distance entity tracking. Obtaining such questions is hard for two reasons: (1) teaching crowdworkers about coreference is challenging, with even experts disagreeing on its nuances BIBREF4, BIBREF5, BIBREF6, BIBREF7, and (2) even if we can get crowdworkers to target coreference phenomena in their questions, these questions may contain giveaways that let models arrive at the correct answer without performing the desired reasoning (see §SECREF3 for examples).",
"We introduce a new dataset, Quoref , that contains questions requiring coreferential reasoning (see examples in Figure FIGREF1). The questions are derived from paragraphs taken from a diverse set of English Wikipedia articles and are collected using an annotation process (§SECREF2) that deals with the aforementioned issues in the following ways: First, we devise a set of instructions that gets workers to find anaphoric expressions and their referents, asking questions that connect two mentions in a paragraph. These questions mostly revolve around traditional notions of coreference (Figure FIGREF1 Q1), but they can also involve referential phenomena that are more nebulous (Figure FIGREF1 Q3). Second, inspired by BIBREF8, we disallow questions that can be answered by an adversary model (uncased base BERT, BIBREF9, trained on SQuAD 1.1, BIBREF0) running in the background as the workers write questions. This adversary is not particularly skilled at answering questions requiring coreference, but can follow obvious lexical cues—it thus helps workers avoid writing questions that shortcut coreferential reasoning.",
"Quoref contains more than 15K questions whose answers are spans or sets of spans in 3.5K paragraphs from English Wikipedia that can be arrived at by resolving coreference in those paragraphs. We manually analyze a sample of the dataset (§SECREF3) and find that 78% of the questions cannot be answered without resolving coreference. We also show (§SECREF4) that the best system performance is 49.1% $F_1$, while the estimated human performance is 87.2%. These findings indicate that this dataset is an appropriate benchmark for coreference-aware reading comprehension."
],
[
"We scraped paragraphs from Wikipedia pages about English movies, art and architecture, geography, history, and music. For movies, we followed the list of English language films, and extracted plot summaries that are at least 40 tokens, and for the remaining categories, we followed the lists of featured articles. Since movie plot summaries usually mention many characters, it was easier to find hard Quoref questions for them, and we sampled about 60% of the paragraphs from this category."
],
[
"We crowdsourced questions about these paragraphs on Mechanical Turk. We asked workers to find two or more co-referring spans in the paragraph, and to write questions such that answering them would require the knowledge that those spans are coreferential. We did not ask them to explicitly mark the co-referring spans. Workers were asked to write questions for a random sample of paragraphs from our pool, and we showed them examples of good and bad questions in the instructions (see Appendix ). For each question, the workers were also required to select one or more spans in the corresponding paragraph as the answer, and these spans are not required to be same as the coreferential spans that triggered the questions. We used an uncased base BERT QA model BIBREF9 trained on SQuAD 1.1 BIBREF0 as an adversary running in the background that attempted to answer the questions written by workers in real time, and the workers were able to submit their questions only if their answer did not match the adversary's prediction. Appendix further details the logistics of the crowdsourcing tasks. Some basic statistics of the resulting dataset can be seen in Table ."
],
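The adversary-in-the-loop check described above can be approximated with an off-the-shelf extractive QA model. The sketch below uses the Hugging Face question-answering pipeline with a SQuAD-trained checkpoint as a stand-in for the uncased base BERT adversary mentioned in the paper; the exact checkpoint and the string-matching rule between the adversary's prediction and the worker's spans are our assumptions.

```python
from transformers import pipeline

# A SQuAD-trained stand-in for the adversary running in the background.
adversary = pipeline("question-answering",
                     model="distilbert-base-cased-distilled-squad")

def accept_question(paragraph, question, worker_answer_spans):
    """Accept the question only if the adversary fails to recover any answer span."""
    pred = adversary(question=question, context=paragraph)["answer"]
    return all(pred.strip().lower() != span.strip().lower()
               for span in worker_answer_spans)
```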
[
"To better understand the phenomena present in Quoref , we manually analyzed a random sample of 100 paragraph-question pairs. The following are some empirical observations."
],
[
"We found that 78% of the manually analyzed questions cannot be answered without coreference resolution. The remaining 22% involve some form of coreference, but do not require it to be resolved for answering them. Examples include a paragraph that mentions only one city, “Bristol”, and a sentence that says “the city was bombed”. The associated question, Which city was bombed?, does not really require coreference resolution from a model that can identify city names, making the content in the question after Which city unnecessary."
],
[
"Questions in Quoref require resolving pronominal and nominal mentions of entities. Table shows percentages and examples of analyzed questions that fall into these two categories. These are not disjoint sets, since we found that 32% of the questions require both (row 3). We also found that 10% require some form of commonsense reasoning (row 4)."
],
[
"Unlike traditional coreference annotations in datasets like those of BIBREF4, BIBREF10, BIBREF11 and BIBREF7, which aim to obtain complete coreference clusters, our questions require understanding coreference between only a few spans. While this means that the notion of coreference captured by our dataset is less comprehensive, it is also less conservative and allows questions about coreference relations that are not marked in OntoNotes annotations. Since the notion is not as strict, it does not require linguistic expertise from annotators, making it more amenable to crowdsourcing."
],
[
"There are many reading comprehension datasets BIBREF12, BIBREF0, BIBREF3, BIBREF8. Most of these datasets principally require understanding local predicate-argument structure in a paragraph of text. Quoref also requires understanding local predicate-argument structure, but makes the reading task harder by explicitly querying anaphoric references, requiring a system to track entities throughout the discourse."
],
[
"We present Quoref , a focused reading comprehension benchmark that evaluates the ability of models to resolve coreference. We crowdsourced questions over paragraphs from Wikipedia, and manual analysis confirmed that most cannot be answered without coreference resolution. We show that current state-of-the-art reading comprehension models perform poorly on this benchmark, significantly lower than human performance. Both these findings provide evidence that Quoref is an appropriate benchmark for coreference-aware reading comprehension."
]
],
"section_name": [
"Introduction",
"Dataset Construction ::: Collecting paragraphs",
"Dataset Construction ::: Crowdsourcing setup",
"Semantic Phenomena in Quoref",
"Semantic Phenomena in Quoref ::: Requirement of coreference resolution",
"Semantic Phenomena in Quoref ::: Types of coreferential reasoning",
"Related Work ::: Traditional coreference datasets",
"Related Work ::: Reading comprehension datasets",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"6189f8f5ea579ed90b2a27b72aef3d6ac6a0d2e5",
"d8755dd39ac935d43dbcae8fad14d365822acdb6"
],
"answer": [
{
"evidence": [
"We crowdsourced questions about these paragraphs on Mechanical Turk. We asked workers to find two or more co-referring spans in the paragraph, and to write questions such that answering them would require the knowledge that those spans are coreferential. We did not ask them to explicitly mark the co-referring spans. Workers were asked to write questions for a random sample of paragraphs from our pool, and we showed them examples of good and bad questions in the instructions (see Appendix ). For each question, the workers were also required to select one or more spans in the corresponding paragraph as the answer, and these spans are not required to be same as the coreferential spans that triggered the questions. We used an uncased base BERT QA model BIBREF9 trained on SQuAD 1.1 BIBREF0 as an adversary running in the background that attempted to answer the questions written by workers in real time, and the workers were able to submit their questions only if their answer did not match the adversary's prediction. Appendix further details the logistics of the crowdsourcing tasks. Some basic statistics of the resulting dataset can be seen in Table ."
],
"extractive_spans": [
"an uncased base BERT QA model BIBREF9 trained on SQuAD 1.1 BIBREF0"
],
"free_form_answer": "",
"highlighted_evidence": [
"We used an uncased base BERT QA model BIBREF9 trained on SQuAD 1.1 BIBREF0 as an adversary running in the background that attempted to answer the questions written by workers in real time, and the workers were able to submit their questions only if their answer did not match the adversary's prediction."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 3: Performance of various baselines on QUOREF, measured by Exact Match (EM) and F1. Boldface marks the best systems for each metric and split."
],
"extractive_spans": [],
"free_form_answer": "Passage-only heuristic baseline, QANet, QANet+BERT, BERT QA",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: Performance of various baselines on QUOREF, measured by Exact Match (EM) and F1. Boldface marks the best systems for each metric and split."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"1de00e6629e3147b381d3fe105007fb3770f1b30",
"e98c2fd2039ae029088c2c49fd181cf4af973c28"
],
"answer": [
{
"evidence": [
"We crowdsourced questions about these paragraphs on Mechanical Turk. We asked workers to find two or more co-referring spans in the paragraph, and to write questions such that answering them would require the knowledge that those spans are coreferential. We did not ask them to explicitly mark the co-referring spans. Workers were asked to write questions for a random sample of paragraphs from our pool, and we showed them examples of good and bad questions in the instructions (see Appendix ). For each question, the workers were also required to select one or more spans in the corresponding paragraph as the answer, and these spans are not required to be same as the coreferential spans that triggered the questions. We used an uncased base BERT QA model BIBREF9 trained on SQuAD 1.1 BIBREF0 as an adversary running in the background that attempted to answer the questions written by workers in real time, and the workers were able to submit their questions only if their answer did not match the adversary's prediction. Appendix further details the logistics of the crowdsourcing tasks. Some basic statistics of the resulting dataset can be seen in Table ."
],
"extractive_spans": [
"Mechanical Turk"
],
"free_form_answer": "",
"highlighted_evidence": [
"We crowdsourced questions about these paragraphs on Mechanical Turk. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We crowdsourced questions about these paragraphs on Mechanical Turk. We asked workers to find two or more co-referring spans in the paragraph, and to write questions such that answering them would require the knowledge that those spans are coreferential. We did not ask them to explicitly mark the co-referring spans. Workers were asked to write questions for a random sample of paragraphs from our pool, and we showed them examples of good and bad questions in the instructions (see Appendix ). For each question, the workers were also required to select one or more spans in the corresponding paragraph as the answer, and these spans are not required to be same as the coreferential spans that triggered the questions. We used an uncased base BERT QA model BIBREF9 trained on SQuAD 1.1 BIBREF0 as an adversary running in the background that attempted to answer the questions written by workers in real time, and the workers were able to submit their questions only if their answer did not match the adversary's prediction. Appendix further details the logistics of the crowdsourcing tasks. Some basic statistics of the resulting dataset can be seen in Table ."
],
"extractive_spans": [
"Mechanical Turk"
],
"free_form_answer": "",
"highlighted_evidence": [
"We crowdsourced questions about these paragraphs on Mechanical Turk."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"",
""
],
"paper_read": [
"",
""
],
"question": [
"What is the strong baseline model used?",
"What crowdsourcing platform did they obtain the data from?"
],
"question_id": [
"ab895ed198374f598e13d6d61df88142019d13b8",
"8795bb1f874e5f3337710d8c3d5be49e672ab43a"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"dataset",
"dataset"
],
"topic_background": [
"",
""
]
} | {
"caption": [
"Figure 1: Example paragraph and questions from the dataset. Highlighted text in paragraphs is where the questions with matching highlights are anchored. Next to the questions are the relevant coreferent mentions from the paragraph. They are bolded for the first question, italicized for the second, and underlined for the third in the paragraph.",
"Table 1: Key statistics of QUOREF splits.",
"Table 2: Phenomena in QUOREF. Note that the first two classes are not disjoint. In the final example, the paragraph does not explicitly say that Fania is Arieh’s wife.",
"Table 3: Performance of various baselines on QUOREF, measured by Exact Match (EM) and F1. Boldface marks the best systems for each metric and split."
],
"file": [
"1-Figure1-1.png",
"2-Table1-1.png",
"3-Table2-1.png",
"3-Table3-1.png"
]
} | [
"What is the strong baseline model used?"
] | [
[
"1908.05803-Dataset Construction ::: Crowdsourcing setup-0",
"1908.05803-3-Table3-1.png"
]
] | [
"Passage-only heuristic baseline, QANet, QANet+BERT, BERT QA"
] | 370 |
1910.02677 | Controllable Sentence Simplification | Text simplification aims at making a text easier to read and understand by simplifying grammar and structure while keeping the underlying information identical. It is often considered an all-purpose generic task where the same simplification is suitable for all; however multiple audiences can benefit from simplified text in different ways. We adapt a discrete parametrization mechanism that provides explicit control on simplification systems based on Sequence-to-Sequence models. As a result, users can condition the simplifications returned by a model on parameters such as length, amount of paraphrasing, lexical complexity and syntactic complexity. We also show that carefully chosen values of these parameters allow out-of-the-box Sequence-to-Sequence models to outperform their standard counterparts on simplification benchmarks. Our model, which we call ACCESS (as shorthand for AudienCe-CEntric Sentence Simplification), increases the state of the art to 41.87 SARI on the WikiLarge test set, a +1.42 gain over previously reported scores. | {
"paragraphs": [
[
"In Natural Language Processing, the Text Simplification task aims at making a text easier to read and understand. Text simplification can be beneficial for people with cognitive disabilities such as aphasia BIBREF0, dyslexia BIBREF1 and autism BIBREF2 but also for second language learners BIBREF3 and people with low literacy BIBREF4. The type of simplification needed for each of these audiences is different. Some aphasic patients struggle to read sentences with a high cognitive load such as long sentences with intricate syntactic structures, whereas second language learners might not understand texts with rare or specific vocabulary. Yet, research in text simplification has been mostly focused on developing models that generate a single generic simplification for a given source text with no possibility to adapt outputs for the needs of various target populations.",
"In this paper, we propose a controllable simplification model that provides explicit ways for users to manipulate and update simplified outputs as they see fit. This work only considers the task of Sentence Simplification (SS) where the input of the model is a single source sentence and the output can be composed of one sentence or splitted into multiple. Our work builds upon previous work on controllable text generation BIBREF5, BIBREF6, BIBREF7, BIBREF8 where a Sequence-to-Sequence (Seq2Seq) model is modified to control attributes of the output text. We tailor this mechanism to the task of SS by considering relevant attributes of the output sentence such as the output length, the amount of paraphrasing, lexical complexity, and syntactic complexity. To this end, we condition the model at train time, by feeding those parameters along with the source sentence as additional inputs.",
"Our contributions are the following: (1) We adapt a parametrization mechanism to the specific task of Sentence Simplification by choosing relevant parameters; (2) We show through a detailed analysis that our model can indeed control the considered attributes, making the simplifications potentially able to fit the needs of various end audiences; (3) With careful calibration, our controllable parametrization improves the performance of out-of-the-box Seq2Seq models leading to a new state-of-the-art score of 41.87 SARI BIBREF9 on the WikiLarge benchmark BIBREF10, a +1.42 gain over previous scores, without requiring any external resource or modified training objective."
],
[
"Text simplification has gained more and more interest through the years and has benefited from advances in Natural Language Processing and notably Machine Translation.",
"In recent years, SS was largely treated as a monolingual variant of machine translation (MT), where simplification operations are learned from complex-simple sentence pairs automatically extracted from English Wikipedia and Simple English Wikipedia BIBREF11, BIBREF12.",
"Phrase-based and Syntax-based MT was successfully used for SS BIBREF11 and further tailored to the task using deletion models BIBREF13 and candidate reranking BIBREF12. The candidate reranking method by BIBREF12 favors simplifications that are most dissimilar to the source using Levenshtein distance. The authors argue that dissimilarity is a key factor of simplification.",
"Lately, SS has mostly been tackled using Seq2Seq MT models BIBREF14. Seq2Seq models were either used as-is BIBREF15 or combined with reinforcement learning thanks to a specific simplification reward BIBREF10, augmented with an external simplification database as a dynamic memory BIBREF16 or trained with multi-tasking on entailment and paraphrase generation BIBREF17.",
"This work builds upon Seq2Seq as well. We prepend additional inputs to the source sentences at train time, in the form of plain text special tokens. Our approach does not require any external data or modified training objective."
],
[
"Conditional training with Seq2Seq models was applied to multiple natural language processing tasks such as summarization BIBREF5, BIBREF6, dialog BIBREF18, sentence compression BIBREF19, BIBREF20 or poetry generation BIBREF21.",
"Most approaches for controllable text generation are either decoding-based or learning-based.",
"Decoding-based methods use a standard Seq2Seq training setup but modify the system during decoding to control a given attribute. For instance, the length of summaries was controlled by preventing the decoder from generating the End-Of-Sentence token before reaching the desired length or by only selecting hypotheses of a given length during the beam search BIBREF5. Weighted decoding (i.e. assigning weights to specific words during decoding) was also used with dialog models BIBREF18 or poetry generation models BIBREF21 to control the number of repetitions, alliterations, sentiment or style.",
"On the other hand, learning-based methods condition the Seq2Seq model on the considered attribute at train time, and can then be used to control the output at inference time. BIBREF5 explored learning-based methods to control the length of summaries, e.g. by feeding a target length vector to the neural network. They concluded that learning-based methods worked better than decoding-based methods and allowed finer control on the length without degrading performances. Length control was likewise used in sentence compression by feeding the network a length countdown scalar BIBREF19 or a length vector BIBREF20.",
"Our work uses a simpler approach: we concatenate plain text special tokens to the source text. This method only modifies the source data and not the training procedure. Such mechanism was used to control politeness in MT BIBREF22, to control summaries in terms of length, of news source style, or to make the summary more focused on a given named entity BIBREF6. BIBREF7 and BIBREF8 similarly showed that adding special tokens at the beginning of sentences can improve the performance of Seq2Seq models for SS. Plain text special tokens were used to encode attributes such as the target school grade-level (i.e. understanding level) and the type of simplification operation applied between the source and the ground truth simplification (identical, elaboration, one-to-many, many-to-one). Our work goes further by using a more diverse set of parameters that represent specific grammatical attributes of the text simplification process. Moreover, we investigate the influence of those parameter on the generated simplification in a detailed analysis."
],
[
"In this section we present ACCESS, our approach for AudienCe-CEntric Sentence Simplification. We parametrize a Seq2Seq model on a given attribute of the target simplification, e.g. its length, by prepending a special token at the beginning of the source sentence. The special token value is the ratio of this parameter calculated on the target sentence with respect to its value on the source sentence. For example when trying to control the number of characters of a generated simplification, we compute the compression ratio between the number of characters in the source and the number of characters in the target sentence (see Table TABREF4 for an illustration). Ratios are discretized into bins of fixed width of 0.05 in our experiments and capped to a maximum ratio of 2. Special tokens are then included in the vocabulary (40 unique values per parameter).",
"At inference time, we just set the ratio to a fixed value for all samples. For instance, to get simplifications that are 80% of the source length, we prepend the token $<$NbChars_0.8$>$ to each source sentence. This fixed ratio can be user-defined or automatically set. In our setting, we choose fixed ratios that maximize the SARI on the validation set.",
"We conditioned our model on four selected parameters, so that they each cover an important aspect of the simplification process: length, paraphrasing, lexical complexity and syntactic complexity.",
"NbChars: character length ratio between source sentence and target sentence (compression level). This parameter accounts for sentence compression, and content deletion. Previous work showed that simplicity is best correlated with length-based metrics, and especially in terms of number of characters BIBREF23. The number of characters indeed accounts for the lengths of words which is itself correlated to lexical complexity.",
"LevSim: normalized character-level Levenshtein similarity BIBREF24 between source and target. LevSim quantifies the amount of modification operated on the source sentence (through paraphrasing, adding and deleting content). We use this parameter following previous claims that dissimilarity is a key factor of simplification BIBREF12.",
"WordRank: as a proxy to lexical complexity, we compute a sentence-level measure, that we call WordRank, by taking the third-quartile of log-ranks (inverse frequency order) of all words in a sentence. We subsequently divide the WordRank of the target by that of the source to get a ratio. Word frequencies have shown to be the best indicators of word complexity in the Semeval 2016 task 11 BIBREF25.",
"DepTreeDepth: maximum depth of the dependency tree of the source divided by that of the target (we do not feed any syntactic information other than this ratio to the model). This parameter is designed to approximate syntactic complexity. Deeper dependency trees indicate dependencies that span longer and possibly more intricate sentences. DepTreeDepth proved better in early experiments over other candidates for measuring syntactic complexity such as the maximum length of a dependency relation, or the maximum inter-word dependency flux."
],
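A preprocessing sketch of the parametrization described above: it computes the four ratios on a (source, target) training pair, discretizes them into 0.05-wide bins capped at 2, and prepends the resulting special tokens to the source. The log-rank table and the dependency-depth function are placeholders (a real setup would use corpus word frequencies and a dependency parser), and the token spelling only mirrors the <NbChars_0.8> example above rather than the authors' exact format.

```python
import numpy as np
import Levenshtein  # pip install python-Levenshtein

def bin_ratio(x, width=0.05, cap=2.0):
    return round(min(x, cap) / width) * width

def lev_sim(src, tgt):
    # Normalized character-level Levenshtein similarity.
    return 1 - Levenshtein.distance(src, tgt) / max(len(src), len(tgt))

def word_rank(sentence, log_rank, oov_value):
    ranks = [log_rank.get(w.lower(), oov_value) for w in sentence.split()]
    return float(np.percentile(ranks, 75))  # third quartile of log-ranks

def control_tokens(src, tgt, log_rank, tree_depth, oov_value=20.0):
    ratios = {
        "NbChars": len(tgt) / len(src),
        "LevSim": lev_sim(src, tgt),
        "WordRank": word_rank(tgt, log_rank, oov_value)
                    / word_rank(src, log_rank, oov_value),
        "DepTreeDepth": tree_depth(src) / tree_depth(tgt),  # as described above
    }
    return " ".join(f"<{name}_{bin_ratio(r):g}>" for name, r in ratios.items())

# Training:  model_input = control_tokens(src, tgt, ...) + " " + src
# Inference: prepend fixed, user-chosen tokens, e.g. "<NbChars_0.8> <LevSim_0.75> " + src
```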
[
"We train a Transformer model BIBREF26 using the FairSeq toolkit BIBREF27. ,",
"Our models are trained and evaluated on the WikiLarge dataset BIBREF10 which contains 296,402/2,000/359 samples (train/validation/test). WikiLarge is a set of automatically aligned complex-simple sentence pairs from English Wikipedia (EW) and Simple English Wikipedia (SEW). It is compiled from previous extractions of EW-SEW BIBREF11, BIBREF28, BIBREF29. Its validation and test sets are taken from Turkcorpus BIBREF9, where each complex sentence has 8 human simplifications created by Amazon Mechanical Turk workers. Human annotators were instructed to only paraphrase the source sentences while keeping as much meaning as possible. Hence, no sentence splitting, minimal structural simplification and little content reduction occurs in this test set BIBREF9.",
"We evaluate our methods with FKGL (Flesch-Kincaid Grade Level) BIBREF30 to account for simplicity and SARI BIBREF9 as an overall score. FKGL is a commonly used metric for measuring readability however it should not be used alone for evaluating systems because it does not account for grammaticality and meaning preservation BIBREF12. It is computed as a linear combination of the number of words per simple sentence and the number of syllables per word:",
"On the other hand SARI compares the predicted simplification with both the source and the target references. It is an average of F1 scores for three $n$-gram operations: additions, keeps and deletions. For each operation, these scores are then averaged for all $n$-gram orders (from 1 to 4) to get the overall F1 score.",
"We compute FKGL and SARI using the EASSE python package for SS BIBREF31. We do not use BLEU because it is not suitable for evaluating SS systems BIBREF32, and favors models that do not modify the source sentence BIBREF9."
],
[
"Table TABREF24 compares our best model to state-of-the-art methods:",
"BIBREF12",
"Phrase-Based MT system with candidate reranking. Dissimilar candidates are favored based on their Levenshtein distance to the source.",
"BIBREF33",
"Deep semantics sentence representation fed to a monolingual MT system.",
"BIBREF9",
"Syntax-based MT model augmented using the PPDB paraphrase database BIBREF34 and fine-tuned towards SARI.",
"BIBREF10",
"Seq2Seq trained with reinforcement learning, combined with a lexical simplification model.",
"BIBREF17",
"Seq2Seq model based on the pointer-copy mechanism and trained via multi-task learning on the Entailment and Paraphrase Generation tasks.",
"BIBREF15",
"Standard Seq2Seq model. The second beam search hypothesis is selected during decoding; the hypothesis number is an hyper-parameter fine-tuned with SARI.",
"BIBREF35",
"Seq2Seq with a memory-augmented Neural Semantic Encoder, tuned with SARI.",
"BIBREF16",
"Seq2Seq integrating the simple PPDB simplification database BIBREF36 as a dynamic memory. The database is also used to modify the loss and re-weight word probabilities to favor simpler words.",
"We select the model with the best SARI on the validation set and report its scores on the test set. This model only uses three parameters out of four: NbChars$_{0.95}$, LevSim$_{0.75}$ and WordRank$_{0.75}$ (optimal target ratios are in subscript).",
"ACCESS scores best on SARI (41.87), a significant improvement over previous state of the art (40.45), and third to best FKGL (7.22). The second and third models in terms of SARI, DMASS+DCSS (40.45) and SBMT+PPDB+SARI (39.96), both use the external resource Simple PPDB BIBREF36 that was extracted from 1000 times more data than what we used for training. Our FKGL is also better (lower) than these methods. The Hybrid model scores best on FKGL (4.56) i.e. they generated the simplest (and shortest) sentences, but it was done at the expense of SARI (31.40).",
"Parametrization encourages the model to rely on explicit aspects of the simplification process, and to associate them with the parameters. The model can then be adapted more precisely to the type of simplification needed. In WikiLarge, for instance, the compression ratio distribution is different than that of human simplifications (see Figure FIGREF25). The NbChars parameter helps the model decorrelate the compression aspect from other attributes of the simplification process. This parameter is then adapted to the amount of compression required in a given evaluation dataset, such as a true, human simplified SS dataset. Our best model indeed worked best with a NbChars target ratio set to 0.95 which is the closest bucketed value to the compression ratio of human annotators on the WikiLarge validation set (0.93)."
],
[
"In this section we investigate the contribution of each parameter to the final SARI score of ACCESS. Table TABREF26 reports scores of models trained with different combinations of parameters on the WikiLarge validation set (2000 source sentences, with 8 human simplifications each). We combined parameters using greedy forward selection; at each step, we add the parameter leading to the best performance when combined with previously added parameters. With only one parameter, WordRank proves to be best (+2.28 SARI over models without parametrization). As the WikiLarge validation set mostly contains small paraphrases, it only seems natural that the parameter related to lexical simplification gets the largest increase in performance.",
"LevSim (+1.23) is the second best parameter. This confirms the intuition that hypotheses that are more dissimilar to the source are better simplifications, as claimed in BIBREF12, BIBREF15.",
"There is little content reduction in the WikiLarge validation set (see Figure FIGREF25), thus parameters that are closely related to sentence length will be less effective. This is the case for the NbChars and DepTreeDepth parameters (shorter sentences, will have lower tree depths): they bring more modest improvements, +0.88 and +0.66.",
"The performance boost is nearly additive at first when adding more parameters (WordRank+LevSim: +4.04) but saturates quickly with 3+ parameters. In fact, no combination of 3 or more parameters gets a statistically significant improvement over the WordRank+LevSim setup (p-value $< 0.01$ for a Student's T-test). This indicates that parameters are not all useful to improve the scores on this benchmark, and that they might be not independent from one another. The addition of the DepTreeDepth as a final parameter even decreases the SARI score slightly, most probably because the considered validation set does not include sentence splitting and structural modifications."
],
[
"Our goal is to give the user control over how the model will simplify sentences on four important attributes of SS: length, paraphrasing, lexical complexity and syntactic complexity. To this end, we introduced four parameters: NbChars, LevSim, WordRank and DepTreeDepth. Even though the parameters improve the performance in terms of SARI, it is not sure whether they have the desired effect on their associated attribute. In this section we investigate to what extent each parameter controls the generated simplification. We first used separate models, each trained with a single parameter to isolate their respective influence on the output simplifications. However, we witnessed that with only one parameter, the effect of LevSim, WordRank and DepTreeDepth was mainly to reduce the length of the sentence (Appendix Figure FIGREF30). Indeed, shortening the sentence will decrease the Levenshtein similarity, decrease the WordRank (when complex words are deleted) and decrease the dependency tree depth (shorter sentences have shallower dependency trees). Therefore, to clearly study the influence of those parameters, we also add the NbChars parameter during training, and set its ratio to 1.00 at inference time, as a constraint toward not modifying the length.",
"Figure FIGREF27 highlights the cross influence of each of the four parameters on their four associated attributes. Parameters are successively set to ratios of 0.25 (yellow), 0.50 (blue), 0.75 (violet) and 1.00 (red); the ground truth is displayed in green. Plots located on the diagonal show that most parameters have an effect their respective attributes (NbChars affects compression ratio, LevSim controls Levenshtein similarity...), although not with the same level of effectiveness.",
"The histogram located at (row 1, col 1) shows the effect of the NbChars parameter on the compression ratio of the predicted simplifications. The resulting distributions are centered on the 0.25, 0.5, 0.75 and 1 target ratios as expected, and with little overlap. This indicates that the lengths of predictions closely follow what is asked of the model. Table TABREF28 illustrates this with an example. The NbChars parameter affects Levenshtein similarity: reducing the length decreases the Levenshtein similarity. Finally, NbChars has a marginal impact on the WordRank ratio distribution, but clearly influences the dependency tree depth. This is natural considered that the depth of a dependency tree is very correlated with the length of the sentence.",
"The LevSim parameter also has a clear cut impact on the Levenshtein similarity (row 2, col 2). The example in Table TABREF28 highlights that LevSim increases the amount of paraphrasing in the simplifications. However, with an extreme target ratio of 0.25, the model outputs ungrammatical and meaningless predictions, thus demonstrating that the choice of a target ratio is important for generating proper simplifications.",
"WordRank and DepTreeDepth do not seem to control their respective attribute as well as NbChars and LevSim according to Figure FIGREF27. However we witness more lexical simplifications when using the WordRank ratio than with other parameters. In Table TABREF28's example, \"designated as\" is simplified by \"called\" or \"known as\" with the WordRank parameter. Equivalently, DepTreeDepth splits the source sentence in multiple shorter sentences in Table FIGREF30's example. More examples exhibit the same behaviour in Appendix's Table TABREF31. This demonstrates that the WordRank and DepTreeDepth parameters have the desired effect."
],
[
"This paper showed that explicitly conditioning Seq2Seq models on parameters such as length, paraphrasing, lexical complexity or syntactic complexity increases their performance significantly for sentence simplification. We confirmed through an analysis that each parameter has the desired effect on the generated simplifications. In addition to being easy to extend to other attributes of text simplification, our method paves the way toward adapting the simplification to audiences with different needs."
],
[
""
],
[
"Our architecture is the base architecture from BIBREF26. We used an embedding dimension of 512, fully connected layers of dimension 2048, 8 attention heads, 6 layers in the encoder and 6 layers in the decoder. Dropout is set to 0.2. We use the Adam optimizer BIBREF37 with $\\beta _1 = 0.9$, $\\beta _2 = 0.999$, $\\epsilon = 10^{ -8}$ and a learning rate of $lr = 0.00011$. We add label smoothing with a uniform prior distribution of $\\epsilon = 0.54$. We use early stopping when SARI does not increase for more than 5 epochs. We tokenize sentences using the NLTK NIST tokenizer and preprocess using SentencePiece BIBREF38 with 10k vocabulary size to handle rare and unknown words. For generation we use beam search with a beam size of 8.",
""
]
],
"section_name": [
"Introduction",
"Related Work ::: Sentence Simplification",
"Related Work ::: Controllable Text Generation",
"Adding Explicit Parameters to Seq2Seq",
"Experiments ::: Experimental Setting",
"Experiments ::: Overall Performance",
"Ablation Studies",
"Analysis of each Parameter's Influence",
"Conclusion",
"Appendix",
"Appendix ::: Architecture details"
]
} | {
"answers": [
{
"annotation_id": [
"981d92561bcace231b04d92adf28b52e9cadab5c",
"dcffc4b7eec54d1c17e99ceb22f55bc13e8a06ec"
],
"answer": [
{
"evidence": [
"Our models are trained and evaluated on the WikiLarge dataset BIBREF10 which contains 296,402/2,000/359 samples (train/validation/test). WikiLarge is a set of automatically aligned complex-simple sentence pairs from English Wikipedia (EW) and Simple English Wikipedia (SEW). It is compiled from previous extractions of EW-SEW BIBREF11, BIBREF28, BIBREF29. Its validation and test sets are taken from Turkcorpus BIBREF9, where each complex sentence has 8 human simplifications created by Amazon Mechanical Turk workers. Human annotators were instructed to only paraphrase the source sentences while keeping as much meaning as possible. Hence, no sentence splitting, minimal structural simplification and little content reduction occurs in this test set BIBREF9."
],
"extractive_spans": [
"359 samples"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our models are trained and evaluated on the WikiLarge dataset BIBREF10 which contains 296,402/2,000/359 samples (train/validation/test)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Our models are trained and evaluated on the WikiLarge dataset BIBREF10 which contains 296,402/2,000/359 samples (train/validation/test). WikiLarge is a set of automatically aligned complex-simple sentence pairs from English Wikipedia (EW) and Simple English Wikipedia (SEW). It is compiled from previous extractions of EW-SEW BIBREF11, BIBREF28, BIBREF29. Its validation and test sets are taken from Turkcorpus BIBREF9, where each complex sentence has 8 human simplifications created by Amazon Mechanical Turk workers. Human annotators were instructed to only paraphrase the source sentences while keeping as much meaning as possible. Hence, no sentence splitting, minimal structural simplification and little content reduction occurs in this test set BIBREF9."
],
"extractive_spans": [
"359 samples"
],
"free_form_answer": "",
"highlighted_evidence": [
"Our models are trained and evaluated on the WikiLarge dataset BIBREF10 which contains 296,402/2,000/359 samples (train/validation/test)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"39fcf0e2fe5d0009c4a19d2f6900f35260a4087a",
"860b96f240ded411141e757acca6c63051c066ca"
],
"answer": [
{
"evidence": [
"On the other hand SARI compares the predicted simplification with both the source and the target references. It is an average of F1 scores for three $n$-gram operations: additions, keeps and deletions. For each operation, these scores are then averaged for all $n$-gram orders (from 1 to 4) to get the overall F1 score."
],
"extractive_spans": [
"SARI compares the predicted simplification with both the source and the target references"
],
"free_form_answer": "",
"highlighted_evidence": [
"On the other hand SARI compares the predicted simplification with both the source and the target references. It is an average of F1 scores for three $n$-gram operations: additions, keeps and deletions. For each operation, these scores are then averaged for all $n$-gram orders (from 1 to 4) to get the overall F1 score."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"On the other hand SARI compares the predicted simplification with both the source and the target references. It is an average of F1 scores for three $n$-gram operations: additions, keeps and deletions. For each operation, these scores are then averaged for all $n$-gram orders (from 1 to 4) to get the overall F1 score."
],
"extractive_spans": [
"the predicted simplification with both the source and the target references"
],
"free_form_answer": "",
"highlighted_evidence": [
"On the other hand SARI compares the predicted simplification with both the source and the target references. It is an average of F1 scores for three $n$-gram operations: additions, keeps and deletions. For each operation, these scores are then averaged for all $n$-gram orders (from 1 to 4) to get the overall F1 score."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"1de195816c21902abbbcf63e52222a81bf1bf927",
"2b839b51fe25259e56570a22c6e0dd28a3d0d71c"
],
"answer": [
{
"evidence": [
"Table TABREF24 compares our best model to state-of-the-art methods:",
"In recent years, SS was largely treated as a monolingual variant of machine translation (MT), where simplification operations are learned from complex-simple sentence pairs automatically extracted from English Wikipedia and Simple English Wikipedia BIBREF11, BIBREF12.",
"Phrase-Based MT system with candidate reranking. Dissimilar candidates are favored based on their Levenshtein distance to the source.",
"BIBREF33",
"Deep semantics sentence representation fed to a monolingual MT system.",
"Our contributions are the following: (1) We adapt a parametrization mechanism to the specific task of Sentence Simplification by choosing relevant parameters; (2) We show through a detailed analysis that our model can indeed control the considered attributes, making the simplifications potentially able to fit the needs of various end audiences; (3) With careful calibration, our controllable parametrization improves the performance of out-of-the-box Seq2Seq models leading to a new state-of-the-art score of 41.87 SARI BIBREF9 on the WikiLarge benchmark BIBREF10, a +1.42 gain over previous scores, without requiring any external resource or modified training objective.",
"Syntax-based MT model augmented using the PPDB paraphrase database BIBREF34 and fine-tuned towards SARI.",
"Seq2Seq trained with reinforcement learning, combined with a lexical simplification model.",
"Lately, SS has mostly been tackled using Seq2Seq MT models BIBREF14. Seq2Seq models were either used as-is BIBREF15 or combined with reinforcement learning thanks to a specific simplification reward BIBREF10, augmented with an external simplification database as a dynamic memory BIBREF16 or trained with multi-tasking on entailment and paraphrase generation BIBREF17.",
"Seq2Seq model based on the pointer-copy mechanism and trained via multi-task learning on the Entailment and Paraphrase Generation tasks.",
"Standard Seq2Seq model. The second beam search hypothesis is selected during decoding; the hypothesis number is an hyper-parameter fine-tuned with SARI.",
"BIBREF35",
"Seq2Seq with a memory-augmented Neural Semantic Encoder, tuned with SARI.",
"Seq2Seq integrating the simple PPDB simplification database BIBREF36 as a dynamic memory. The database is also used to modify the loss and re-weight word probabilities to favor simpler words."
],
"extractive_spans": [],
"free_form_answer": "PBMT-R, Hybrid, SBMT+PPDB+SARI, DRESS-LS, Pointer+Ent+Par, NTS+SARI, NSELSTM-S and DMASS+DCSS",
"highlighted_evidence": [
"Table TABREF24 compares our best model to state-of-the-art methods:\n\nBIBREF12\n\nPhrase-Based MT system with candidate reranking. Dissimilar candidates are favored based on their Levenshtein distance to the source.\n\nBIBREF33\n\nDeep semantics sentence representation fed to a monolingual MT system.\n\nBIBREF9\n\nSyntax-based MT model augmented using the PPDB paraphrase database BIBREF34 and fine-tuned towards SARI.\n\nBIBREF10\n\nSeq2Seq trained with reinforcement learning, combined with a lexical simplification model.\n\nBIBREF17\n\nSeq2Seq model based on the pointer-copy mechanism and trained via multi-task learning on the Entailment and Paraphrase Generation tasks.\n\nBIBREF15\n\nStandard Seq2Seq model. The second beam search hypothesis is selected during decoding; the hypothesis number is an hyper-parameter fine-tuned with SARI.\n\nBIBREF35\n\nSeq2Seq with a memory-augmented Neural Semantic Encoder, tuned with SARI.\n\nBIBREF16\n\nSeq2Seq integrating the simple PPDB simplification database BIBREF36 as a dynamic memory. The database is also used to modify the loss and re-weight word probabilities to favor simpler words."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In recent years, SS was largely treated as a monolingual variant of machine translation (MT), where simplification operations are learned from complex-simple sentence pairs automatically extracted from English Wikipedia and Simple English Wikipedia BIBREF11, BIBREF12.",
"Phrase-Based MT system with candidate reranking. Dissimilar candidates are favored based on their Levenshtein distance to the source.",
"BIBREF33",
"Deep semantics sentence representation fed to a monolingual MT system.",
"Our contributions are the following: (1) We adapt a parametrization mechanism to the specific task of Sentence Simplification by choosing relevant parameters; (2) We show through a detailed analysis that our model can indeed control the considered attributes, making the simplifications potentially able to fit the needs of various end audiences; (3) With careful calibration, our controllable parametrization improves the performance of out-of-the-box Seq2Seq models leading to a new state-of-the-art score of 41.87 SARI BIBREF9 on the WikiLarge benchmark BIBREF10, a +1.42 gain over previous scores, without requiring any external resource or modified training objective.",
"Syntax-based MT model augmented using the PPDB paraphrase database BIBREF34 and fine-tuned towards SARI.",
"Seq2Seq trained with reinforcement learning, combined with a lexical simplification model.",
"Lately, SS has mostly been tackled using Seq2Seq MT models BIBREF14. Seq2Seq models were either used as-is BIBREF15 or combined with reinforcement learning thanks to a specific simplification reward BIBREF10, augmented with an external simplification database as a dynamic memory BIBREF16 or trained with multi-tasking on entailment and paraphrase generation BIBREF17.",
"Seq2Seq model based on the pointer-copy mechanism and trained via multi-task learning on the Entailment and Paraphrase Generation tasks.",
"Standard Seq2Seq model. The second beam search hypothesis is selected during decoding; the hypothesis number is an hyper-parameter fine-tuned with SARI.",
"BIBREF35",
"Seq2Seq with a memory-augmented Neural Semantic Encoder, tuned with SARI.",
"Seq2Seq integrating the simple PPDB simplification database BIBREF36 as a dynamic memory. The database is also used to modify the loss and re-weight word probabilities to favor simpler words."
],
"extractive_spans": [
"BIBREF12",
"BIBREF33",
"BIBREF9",
"BIBREF10",
"BIBREF17",
"BIBREF15",
"BIBREF35",
"BIBREF16"
],
"free_form_answer": "",
"highlighted_evidence": [
"BIBREF12\n\nPhrase-Based MT system with candidate reranking. Dissimilar candidates are favored based on their Levenshtein distance to the source.\n\nBIBREF33\n\nDeep semantics sentence representation fed to a monolingual MT system.\n\nBIBREF9\n\nSyntax-based MT model augmented using the PPDB paraphrase database BIBREF34 and fine-tuned towards SARI.\n\nBIBREF10\n\nSeq2Seq trained with reinforcement learning, combined with a lexical simplification model.\n\nBIBREF17\n\nSeq2Seq model based on the pointer-copy mechanism and trained via multi-task learning on the Entailment and Paraphrase Generation tasks.\n\nBIBREF15\n\nStandard Seq2Seq model. The second beam search hypothesis is selected during decoding; the hypothesis number is an hyper-parameter fine-tuned with SARI.\n\nBIBREF35\n\nSeq2Seq with a memory-augmented Neural Semantic Encoder, tuned with SARI.\n\nBIBREF16\n\nSeq2Seq integrating the simple PPDB simplification database BIBREF36 as a dynamic memory. The database is also used to modify the loss and re-weight word probabilities to favor simpler words."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"",
"",
""
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"How large is the test set?",
"What does SARI measure?",
"What are the baseline models?"
],
"question_id": [
"c30b0d6b23f0f01573eea315176c5ffe4e0c6b5c",
"311f9971d61b91c7d76bba1ad6f038390977a8be",
"23cbf6ab365c1eb760b565d8ba51fb3f06257d62"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: Example of parametrization on the number of characters. Here the source and target simplifications respectively contain 71 and 22 characters which gives a compression ratio of 0.3. We prepend the <NbChars 0.3> token to the source sentence. Similarly, the Levenshtein similarity between the source and the sentence is 0.37 which gives the <LevSim 0.4> special token after bucketing.",
"Table 2: Comparison to the literature. We report the results of the model that performed the best on the validation set among all runs and parametrizations. The ratios used for parametrizations are written as subscripts.",
"Figure 1: Density distribution of the compression ratios between the source sentence and the target sentence. The automatically aligned pairs from WikiLarge train set are spread (red) while human simplifications from the validation and test set (green) are gathered together with a mean ratio of 0.93 (i.e. nearly no compression).",
"Table 3: Ablation study on the parameters using greedy forward selection. We report SARI and FKGL on WikiLarge validation set. Each score is a mean over 10 runs with a 95% confidence interval. Scores with ∗ are statistically significantly better than the Transformer baseline (p-value < 0.01 for a Student’s T-test).",
"Figure 2: Influence of each parameter on the corresponding attributes of the output simplifications. Rows represent parameters (each model is trained either only with one parameter or with one parameter and the NbChars1.00 constraint), columns represent output attributes of the predictions and colors represent the fixed target ratio of the parameter (yellow=0.25, blue=0.50, violet=0.75, red=1.00, green=Ground truth). We plot the results on the 2000 validation sentences. See Appendix Figure 3 for models without the NbChars1.00 constraint.",
"Table 4: Influence of parameters on example sentences. Each source sentence is simplified with models trained with each of the four parameters with varying target ratios; modified words are in bold. The NbChars1.00 constraint is added for LevSim, WordRank and DepTreeDepth. More examples can be found in Table 5.",
"Figure 3: Same as Figure 3 but without the NbChars1.00 constraint.",
"Table 5: Additional examples to Table 4"
],
"file": [
"2-Table1-1.png",
"4-Table2-1.png",
"4-Figure1-1.png",
"5-Table3-1.png",
"6-Figure2-1.png",
"6-Table4-1.png",
"9-Figure3-1.png",
"10-Table5-1.png"
]
} | [
"What are the baseline models?"
] | [
[
"1910.02677-Introduction-2",
"1910.02677-Experiments ::: Overall Performance-12",
"1910.02677-Experiments ::: Overall Performance-4",
"1910.02677-Experiments ::: Overall Performance-10",
"1910.02677-Related Work ::: Sentence Simplification-3",
"1910.02677-Experiments ::: Overall Performance-14",
"1910.02677-Related Work ::: Sentence Simplification-1",
"1910.02677-Experiments ::: Overall Performance-3",
"1910.02677-Experiments ::: Overall Performance-8",
"1910.02677-Experiments ::: Overall Performance-6",
"1910.02677-Experiments ::: Overall Performance-0",
"1910.02677-Experiments ::: Overall Performance-2",
"1910.02677-Experiments ::: Overall Performance-16",
"1910.02677-Experiments ::: Overall Performance-13"
]
] | [
"PBMT-R, Hybrid, SBMT+PPDB+SARI, DRESS-LS, Pointer+Ent+Par, NTS+SARI, NSELSTM-S and DMASS+DCSS"
] | 371 |
1909.08211 | Modeling Conversation Structure and Temporal Dynamics for Jointly Predicting Rumor Stance and Veracity | Automatically verifying rumorous information has become an important and challenging task in natural language processing and social media analytics. Previous studies reveal that people's stances towards rumorous messages can provide indicative clues for identifying the veracity of rumors, and thus determining the stances of public reactions is a crucial preceding step for rumor veracity prediction. In this paper, we propose a hierarchical multi-task learning framework for jointly predicting rumor stance and veracity on Twitter, which consists of two components. The bottom component of our framework classifies the stances of tweets in a conversation discussing a rumor via modeling the structural property based on a novel graph convolutional network. The top component predicts the rumor veracity by exploiting the temporal dynamics of stance evolution. Experimental results on two benchmark datasets show that our method outperforms previous methods in both rumor stance classification and veracity prediction. | {
"paragraphs": [
[
"Social media websites have become the main platform for users to browse information and share opinions, facilitating news dissemination greatly. However, the characteristics of social media also accelerate the rapid spread and dissemination of unverified information, i.e., rumors BIBREF0. The definition of rumor is “items of information that are unverified at the time of posting” BIBREF1. Ubiquitous false rumors bring about harmful effects, which has seriously affected public and individual lives, and caused panic in society BIBREF2, BIBREF3. Because online content is massive and debunking rumors manually is time-consuming, there is a great need for automatic methods to identify false rumors BIBREF4.",
"Previous studies have observed that public stances towards rumorous messages are crucial signals to detect trending rumors BIBREF5, BIBREF6 and indicate the veracity of them BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11. Therefore, stance classification towards rumors is viewed as an important preceding step of rumor veracity prediction, especially in the context of Twitter conversations BIBREF12.",
"The state-of-the-art methods for rumor stance classification are proposed to model the sequential property BIBREF13 or the temporal property BIBREF14 of a Twitter conversation thread. In this paper, we propose a new perspective based on structural property: learning tweet representations through aggregating information from their neighboring tweets. Intuitively, a tweet's nearer neighbors in its conversation thread are more informative than farther neighbors because the replying relationships of them are closer, and their stance expressions can help classify the stance of the center tweet (e.g., in Figure FIGREF1, tweets “1”, “4” and “5” are the one-hop neighbors of the tweet “2”, and their influences on predicting the stance of “2” are larger than that of the two-hop neighbor “3”). To achieve this, we represent both tweet contents and conversation structures into a latent space using a graph convolutional network (GCN) BIBREF15, aiming to learn stance feature for each tweet by aggregating its neighbors' features. Compared with the sequential and temporal based methods, our aggregation based method leverages the intrinsic structural property in conversations to learn tweet representations.",
"After determining the stances of people's reactions, another challenge is how we can utilize public stances to predict rumor veracity accurately. We observe that the temporal dynamics of public stances can indicate rumor veracity. Figure FIGREF2 illustrates the stance distributions of tweets discussing $true$ rumors, $false$ rumors, and $unverified$ rumors, respectively. As we can see, $supporting$ stance dominates the inception phase of spreading. However, as time goes by, the proportion of $denying$ tweets towards $false$ rumors increases quite significantly. Meanwhile, the proportion of $querying$ tweets towards $unverified$ rumors also shows an upward trend. Based on this observation, we propose to model the temporal dynamics of stance evolution with a recurrent neural network (RNN), capturing the crucial signals containing in stance features for effective veracity prediction.",
"Further, most existing methods tackle stance classification and veracity prediction separately, which is suboptimal and limits the generalization of models. As shown previously, they are two closely related tasks in which stance classification can provide indicative clues to facilitate veracity prediction. Thus, these two tasks can be jointly learned to make better use of their interrelation.",
"Based on the above considerations, in this paper, we propose a hierarchical multi-task learning framework for jointly predicting rumor stance and veracity, which achieves deep integration between the preceding task (stance classification) and the subsequent task (veracity prediction). The bottom component of our framework classifies the stances of tweets in a conversation discussing a rumor via aggregation-based structure modeling, and we design a novel graph convolution operation customized for conversation structures. The top component predicts rumor veracity by exploiting the temporal dynamics of stance evolution, taking both content features and stance features learned by the bottom component into account. Two components are jointly trained to utilize the interrelation between the two tasks for learning more powerful feature representations.",
"The contributions of this work are as follows.",
"$\\bullet $ We propose a hierarchical framework to tackle rumor stance classification and veracity prediction jointly, exploiting both structural characteristic and temporal dynamics in rumor spreading process.",
"$\\bullet $ We design a novel graph convolution operation customized to encode conversation structures for learning stance features. To our knowledge, we are the first to employ graph convolution for modeling the structural property of Twitter conversations.",
"$\\bullet $ Experimental results on two benchmark datasets verify that our hierarchical framework performs better than existing methods in both rumor stance classification and veracity prediction."
],
[
"Rumor Stance Classification Stance analysis has been widely studied in online debate forums BIBREF17, BIBREF18, and recently has attracted increasing attention in different contexts BIBREF19, BIBREF20, BIBREF21, BIBREF22. After the pioneering studies on stance classification towards rumors in social media BIBREF7, BIBREF5, BIBREF8, linguistic feature BIBREF23, BIBREF24 and point process based methods BIBREF25, BIBREF26 have been developed.",
"Recent work has focused on Twitter conversations discussing rumors. BIBREF12 proposed to capture the sequential property of conversations with linear-chain CRF, and also used a tree-structured CRF to consider the conversation structure as a whole. BIBREF27 developed a novel feature set that scores the level of users' confidence. BIBREF28 designed affective and dialogue-act features to cover various facets of affect. BIBREF29 proposed a semi-supervised method that propagates the stance labels on similarity graph. Beyond feature-based methods, BIBREF13 utilized an LSTM to model the sequential branches in a conversation, and their system ranked the first in SemEval-2017 task 8. BIBREF14 adopted attention to model the temporal property of a conversation and achieved the state-of-the-art performance.",
"Rumor Veracity Prediction Previous studies have proposed methods based on various features such as linguistics, time series and propagation structures BIBREF30, BIBREF31, BIBREF32, BIBREF33. Neural networks show the effectiveness of modeling time series BIBREF34, BIBREF35 and propagation paths BIBREF36. BIBREF37's model adopted recursive neural networks to incorporate structure information into tweet representations and outperformed previous methods.",
"Some studies utilized stance labels as the input feature of veracity classifiers to improve the performance BIBREF9, BIBREF38. BIBREF39 proposed to recognize the temporal patterns of true and false rumors' stances by two hidden Markov models (HMMs). Unlike their solution, our method learns discriminative features of stance evolution with an RNN. Moreover, our method jointly predicts stance and veracity by exploiting both structural and temporal characteristics, whereas HMMs need stance labels as the input sequence of observations.",
"Joint Predictions of Rumor Stance and Veracity Several work has addressed the problem of jointly predicting rumor stance and veracity. These studies adopted multi-task learning to jointly train two tasks BIBREF40, BIBREF41, BIBREF42 and learned shared representations with parameter-sharing. Compared with such solutions based on “parallel” architectures, our method is deployed in a hierarchical fashion that encodes conversation structures to learn more powerful stance features by the bottom component, and models stance evolution by the top component, achieving deep integration between the two tasks' feature learning."
],
[
"Consider a Twitter conversation thread $\\mathcal {C}$ which consists of a source tweet $t_1$ (originating a rumor) and a number of reply tweets $\\lbrace t_2,t_3,\\ldots ,t_{|\\mathcal {C}|}\\rbrace $ that respond $t_1$ directly or indirectly, and each tweet $t_i$ ($i\\in [1, |\\mathcal {C}|]$) expresses its stance towards the rumor. The thread $\\mathcal {C}$ is a tree structure, in which the source tweet $t_1$ is the root node, and the replying relationships among tweets form the edges.",
"This paper focuses on two tasks. The first task is rumor stance classification, aiming to determine the stance of each tweet in $\\mathcal {C}$, which belongs to $\\lbrace supporting,denying,querying,commenting\\rbrace $. The second task is rumor veracity prediction, with the aim of identifying the veracity of the rumor, belonging to $\\lbrace true,false,unverified\\rbrace $."
],
[
"We propose a Hierarchical multi-task learning framework for jointly Predicting rumor Stance and Veracity (named Hierarchical-PSV). Figure FIGREF4 illustrates its overall architecture that is composed of two components. The bottom component is to classify the stances of tweets in a conversation thread, which learns stance features via encoding conversation structure using a customized graph convolutional network (named Conversational-GCN). The top component is to predict the rumor's veracity, which takes the learned features from the bottom component into account and models the temporal dynamics of stance evolution with a recurrent neural network (named Stance-Aware RNN)."
],
[
"Now we detail Conversational-GCN, the bottom component of our framework. We first adopt a bidirectional GRU (BGRU) BIBREF43 layer to learn the content feature for each tweet in the thread $\\mathcal {C}$. For a tweet $t_i$ ($i\\in [1,|\\mathcal {C}|]$), we run the BGRU over its word embedding sequence, and use the final step's hidden vector to represent the tweet. The content feature representation of $t_i$ is denoted as $\\mathbf {c}_i\\in \\mathbb {R}^{d}$, where $d$ is the output size of the BGRU.",
"As we mentioned in Section SECREF1, the stance expressions of a tweet $t_i$'s nearer neighbors can provide more informative signals than farther neighbors for learning $t_i$'s stance feature. Based on the above intuition, we model the structural property of the conversation thread $\\mathcal {C}$ to learn stance feature representation for each tweet in $\\mathcal {C}$. To this end, we encode structural contexts to improve tweet representations by aggregating information from neighboring tweets with a graph convolutional network (GCN) BIBREF15.",
"Formally, the conversation $\\mathcal {C}$'s structure can be represented by a graph $\\mathcal {C}_{G}=\\langle \\mathcal {T}, \\mathcal {E} \\rangle $, where $\\mathcal {T}=\\lbrace t_i\\rbrace _{i=1}^{|\\mathcal {C}|}$ denotes the node set (i.e., tweets in the conversation), and $\\mathcal {E}$ denotes the edge set composed of all replying relationships among the tweets. We transform the edge set $\\mathcal {E}$ to an adjacency matrix $\\mathbf {A}\\in \\mathbb {R}^{|\\mathcal {C}|\\times |\\mathcal {C}|}$, where $\\mathbf {A}_{ij}=\\mathbf {A}_{ji}=1$ if the tweet $t_i$ directly replies the tweet $t_j$ or $i=j$. In one GCN layer, the graph convolution operation for one tweet $t_i$ on $\\mathcal {C}_G$ is defined as:",
"where $\\mathbf {h}_i^{\\text{in}}\\in \\mathbb {R}^{d_{\\text{in}}}$ and $\\mathbf {h}_i^{\\text{out}}\\in \\mathbb {R}^{d_{\\text{out}}}$ denote the input and output feature representations of the tweet $t_i$ respectively. The convolution filter $\\mathbf {W}\\in \\mathbb {R}^{d_{\\text{in}}\\times d_{\\text{out}}}$ and the bias $\\mathbf {b}\\in \\mathbb {R}^{d_{\\text{out}}}$ are shared over all tweets in a conversation. We apply symmetric normalized transformation $\\hat{\\mathbf {A}}={\\mathbf {D}}^{-\\frac{1}{2}}\\mathbf {A}{\\mathbf {D}}^{-\\frac{1}{2}}$ to avoid the scale changing of feature representations, where ${\\mathbf {D}}$ is the degree matrix of $\\mathbf {A}$, and $\\lbrace j\\mid \\hat{\\mathbf {A}}_{ij}\\ne 0\\rbrace $ contains $t_i$'s one-hop neighbors and $t_i$ itself.",
"In this original graph convolution operation, given a tweet $t_i$, the receptive field for $t_i$ contains its one-hop neighbors and $t_i$ itself, and the aggregation level of two tweets $t_i$ and $t_j$ is dependent on $\\hat{\\mathbf {A}}_{ij}$. In the context of encoding conversation structures, we observe that such operation can be further improved for two issues. First, a tree-structured conversation may be very deep, which means that the receptive field of a GCN layer is restricted in our case. Although we can stack multiple GCN layers to expand the receptive field, it is still difficult to handle conversations with deep structures and increases the number of parameters. Second, the normalized matrix $\\hat{\\mathbf {A}}$ partly weakens the importance of the tweet $t_i$ itself. To address these issues, we design a novel graph convolution operation which is customized to encode conversation structures. Formally, it is implemented by modifying the matrix $\\hat{\\mathbf {A}}$ in Eq. (DISPLAY_FORM6):",
"where the multiplication operation expands the receptive field of a GCN layer, and adding an identity matrix elevates the importance of $t_i$ itself.",
"After defining the above graph convolution operation, we adopt an $L$-layer GCN to model conversation structures. The $l^{\\text{th}}$ GCN layer ($l\\in [1, L]$) computed over the entire conversation structure can be written as an efficient matrix operation:",
"where $\\mathbf {H}^{(l-1)}\\in \\mathbb {R}^{|\\mathcal {C}|\\times d_{l-1}}$ and $\\mathbf {H}^{(l)}\\in \\mathbb {R}^{|\\mathcal {C}|\\times d_l}$ denote the input and output features of all tweets in the conversation $\\mathcal {C}$ respectively.",
"Specifically, the first GCN layer takes the content features of all tweets as input, i.e., $\\mathbf {H}^{(0)}=(\\mathbf {c}_1,\\mathbf {c}_2,\\ldots ,\\mathbf {c}_{|\\mathcal {C}|})^{\\top }\\in \\mathbb {R}^{|\\mathcal {C}|\\times d}$. The output of the last GCN layer represents the stance features of all tweets in the conversation, i.e., $\\mathbf {H}^{(L)}=(\\mathbf {s}_1,\\mathbf {s}_2,\\ldots ,\\mathbf {s}_{|\\mathcal {C}|})^{\\top }\\in \\mathbb {R}^{|\\mathcal {C}|\\times 4}$, where $\\mathbf {s}_i$ is the unnormalized stance distribution of the tweet $t_i$.",
"For each tweet $t_i$ in the conversation $\\mathcal {C}$, we apply softmax to obtain its predicted stance distribution:",
"The ground-truth labels of stance classification supervise the learning process of Conversational-GCN. The loss function of $\\mathcal {C}$ for stance classification is computed by cross-entropy criterion:",
"where $s_i$ is a one-hot vector that denotes the stance label of the tweet $t_i$. For batch-wise training, the objective function for a batch is the averaged cross-entropy loss of all tweets in these conversations.",
"In previous studies, GCNs are used to encode dependency trees BIBREF44, BIBREF45 and cross-document relations BIBREF46, BIBREF47 for downstream tasks. Our work is the first to leverage GCNs for encoding conversation structures."
],
[
"The top component, Stance-Aware RNN, aims to capture the temporal dynamics of stance evolution in a conversation discussing a rumor. It integrates both content features and stance features learned from the bottom Conversational-GCN to facilitate the veracity prediction of the rumor.",
"Specifically, given a conversation thread $\\mathcal {C}=\\lbrace t_1,t_2,\\ldots ,t_{|\\mathcal {C}|}\\rbrace $ (where the tweets $t_*$ are ordered chronologically), we combine the content feature and the stance feature for each tweet, and adopt a GRU layer to model the temporal evolution:",
"where $[\\cdot ;\\cdot ]$ denotes vector concatenation, and $(\\mathbf {v}_1,\\mathbf {v}_2,\\ldots ,\\mathbf {v}_{|\\mathcal {C}|})$ is the output sequence that represents the temporal feature. We then transform the sequence to a vector $\\mathbf {v}$ by a max-pooling function that captures the global information of stance evolution, and feed it into a one-layer feed-forward neural network (FNN) with softmax normalization to produce the predicted veracity distribution $\\hat{\\mathbf {v}}$:",
"The loss function of $\\mathcal {C}$ for veracity prediction is also computed by cross-entropy criterion:",
"where $v$ denotes the veracity label of $\\mathcal {C}$."
],
[
"To leverage the interrelation between the preceding task (stance classification) and the subsequent task (veracity prediction), we jointly train two components in our framework. Specifically, we add two tasks' loss functions to obtain a joint loss function $\\mathcal {L}$ (with a trade-off parameter $\\lambda $), and optimize $\\mathcal {L}$ to train our framework:",
"In our Hierarchical-PSV, the bottom component Conversational-GCN learns content and stance features, and the top component Stance-Aware RNN takes the learned features as input to further exploit temporal evolution for predicting rumor veracity. Our multi-task framework achieves deep integration of the feature representation learning process for the two closely related tasks."
],
[
"In this section, we first evaluate the performance of Conversational-GCN on rumor stance classification and evaluate Hierarchical-PSV on veracity prediction (Section SECREF21). We then give a detailed analysis of our proposed method (Section SECREF26)."
],
[
"To evaluate our proposed method, we conduct experiments on two benchmark datasets.",
"The first is SemEval-2017 task 8 BIBREF16 dataset. It includes 325 rumorous conversation threads, and has been split into training, development and test sets. These threads cover ten events, and two events of that only appear in the test set. This dataset is used to evaluate both stance classification and veracity prediction tasks.",
"The second is PHEME dataset BIBREF48. It provides 2,402 conversations covering nine events. Following previous work, we conduct leave-one-event-out cross-validation: in each fold, one event's conversations are used for testing, and all the rest events are used for training. The evaluation metric on this dataset is computed after integrating the outputs of all nine folds. Note that only a subset of this dataset has stance labels, and all conversations in this subset are already contained in SemEval-2017 task 8 dataset. Thus, PHEME dataset is used to evaluate veracity prediction task.",
"Table TABREF19 shows the statistics of two datasets. Because of the class-imbalanced problem, we use macro-averaged $F_1$ as the evaluation metric for two tasks. We also report accuracy for reference."
],
[
"In all experiments, the number of GCN layers is set to $L=2$. We list the implementation details in Appendix A."
],
[
"Baselines We compare our Conversational-GCN with the following methods in the literature:",
"$\\bullet $ Affective Feature + SVM BIBREF28 extracts affective and dialogue-act features for individual tweets, and then trains an SVM for classifying stances.",
"$\\bullet $ BranchLSTM BIBREF13 is the winner of SemEval-2017 shared task 8 subtask A. It adopts an LSTM to model the sequential branches in a conversation thread. Before feeding branches into the LSTM, some additional hand-crafted features are used to enrich the tweet representations.",
"$\\bullet $ TemporalAttention BIBREF14 is the state-of-the-art method. It uses a tweet's “neighbors in the conversation timeline” as the context, and utilizes attention to model such temporal sequence for learning the weight of each neighbor. Extra hand-crafted features are also used.",
"Performance Comparison Table TABREF20 shows the results of different methods for rumor stance classification. Clearly, the macro-averaged $F_1$ of Conversational-GCN is better than all baselines.",
"Especially, our method shows the effectiveness of determining $denying$ stance, while other methods can not give any correct prediction for $denying$ class (the $F_{\\text{D}}$ scores of them are equal to zero). Further, Conversational-GCN also achieves higher $F_1$ score for $querying$ stance ($F_{\\text{Q}}$). Identifying $denying$ and $querying$ stances correctly is crucial for veracity prediction because they play the role of indicators for $false$ and $unverified$ rumors respectively (see Figure FIGREF2). Meanwhile, the class-imbalanced problem of data makes this a challenge. Conversational-GCN effectively encodes structural context for each tweet via aggregating information from its neighbors, learning powerful stance features without feature engineering. It is also more computationally efficient than sequential and temporal based methods. The information aggregations for all tweets in a conversation are worked in parallel and thus the running time is not sensitive to conversation's depth."
],
[
"To evaluate our framework Hierarchical-PSV, we consider two groups of baselines: single-task and multi-task baselines.",
"Single-task Baselines In single-task setting, stance labels are not available. Only veracity labels can be used to supervise the training process.",
"$\\bullet $ TD-RvNN BIBREF37 models the top-down tree structure using a recursive neural network for veracity classification.",
"$\\bullet $ Hierarchical GCN-RNN is the single-task variant of our framework: we optimize $\\mathcal {L}_{\\rm {veracity}}$ (i.e., $\\lambda =0$ in Eq. (DISPLAY_FORM16)) during training. Thus, the bottom Conversational-GCN only has indirect supervision (veracity labels) to learn stance features.",
"Multi-task Baselines In multi-task setting, both stance labels and veracity labels are available for training.",
"$\\bullet $ BranchLSTM+NileTMRG BIBREF41 is a pipeline method, combining the winner systems of two subtasks in SemEval-2017 shared task 8. It first trains a BranchLSTM for stance classification, and then uses the predicted stance labels as extra features to train an SVM for veracity prediction BIBREF38.",
"$\\bullet $ MTL2 (Veracity+Stance) BIBREF41 is a multi-task learning method that adopts BranchLSTM as the shared block across tasks. Then, each task has a task-specific output layer, and two tasks are jointly learned.",
"Performance Comparison Table TABREF23 shows the comparisons of different methods. By comparing single-task methods, Hierarchical GCN-RNN performs better than TD-RvNN, which indicates that our hierarchical framework can effectively model conversation structures to learn high-quality tweet representations. The recursive operation in TD-RvNN is performed in a fixed direction and runs over all tweets, thus may not obtain enough useful information. Moreover, the training speed of Hierarchical GCN-RNN is significantly faster than TD-RvNN: in the condition of batch-wise optimization for training one step over a batch containing 32 conversations, our method takes only 0.18 seconds, while TD-RvNN takes 5.02 seconds.",
"Comparisons among multi-task methods show that two joint methods outperform the pipeline method (BranchLSTM+NileTMRG), indicating that jointly learning two tasks can improve the generalization through leveraging the interrelation between them. Further, compared with MTL2 which uses a “parallel” architecture to make predictions for two tasks, our Hierarchical-PSV performs better than MTL2. The hierarchical architecture is more effective to tackle the joint predictions of rumor stance and veracity, because it not only possesses the advantage of parameter-sharing but also offers deep integration of the feature representation learning process for the two tasks. Compared with Hierarchical GCN-RNN that does not use the supervision from stance classification task, Hierarchical-PSV provides a performance boost, which demonstrates that our framework benefits from the joint learning scheme."
],
[
"We conduct additional experiments to further demonstrate the effectiveness of our model."
],
[
"To show the effect of our customized graph convolution operation (Eq. (DISPLAY_FORM7)) for modeling conversation structures, we further compare it with the original graph convolution (Eq. (DISPLAY_FORM6), named Original-GCN) on stance classification task.",
"Specifically, we cluster tweets in the test set according to their depths in the conversation threads (e.g., the cluster “depth = 0” consists of all source tweets in the test set). For BranchLSTM, Original-GCN and Conversational-GCN, we report their macro-averaged $F_1$ on each cluster in Figure FIGREF28.",
"We observe that our Conversational-GCN outperforms Original-GCN and BranchLSTM significantly in most levels of depth. BranchLSTM may prefer to “shallow” tweets in a conversation because they often occur in multiple branches (e.g., in Figure FIGREF1, the tweet “2” occurs in two branches and thus it will be modeled twice). The results indicate that Conversational-GCN has advantage to identify stances of “deep” tweets in conversations."
],
[
"Effect of Stance Features To understand the importance of stance features for veracity prediction, we conduct an ablation study: we only input the content features of all tweets in a conversation to the top component RNN. It means that the RNN only models the temporal variation of tweet contents during spreading, but does not consider their stances and is not “stance-aware”. Table TABREF30 shows that “– stance features” performs poorly, and thus the temporal modeling process benefits from the indicative signals provided by stance features. Hence, combining the low-level content features and the high-level stance features is crucial to improve rumor veracity prediction.",
"Effect of Temporal Evolution Modeling We modify the Stance-Aware RNN by two ways: (i) we replace the GRU layer by a CNN that only captures local temporal information; (ii) we remove the GRU layer. Results in Table TABREF30 verify that replacing or removing the GRU block hurts the performance, and thus modeling the stance evolution of public reactions towards a rumorous message is indeed necessary for effective veracity prediction."
],
[
"We vary the value of $\\lambda $ in the joint loss $\\mathcal {L}$ and train models with various $\\lambda $ to show the interrelation between stance and veracity in Figure FIGREF31. As $\\lambda $ increases from 0.0 to 1.0, the performance of identifying $false$ and $unverified$ rumors generally gains. Therefore, when the supervision signal of stance classification becomes strong, the learned stance features can produce more accurate clues for predicting rumor veracity."
],
[
"Figure FIGREF33 illustrates a $false$ rumor identified by our model. We can observe that the stances of reply tweets present a typical temporal pattern “$supporting\\rightarrow querying\\rightarrow denying$”. Our model captures such stance evolution with RNN and predicts its veracity correctly. Further, the visualization of tweets shows that the max-pooling operation catches informative tweets in the conversation. Hence, our framework can notice salience indicators of rumor veracity in the spreading process and combine them to give correct prediction."
],
[
"We propose a hierarchical multi-task learning framework for jointly predicting rumor stance and veracity on Twitter. We design a new graph convolution operation, Conversational-GCN, to encode conversation structures for classifying stance, and then the top Stance-Aware RNN combines the learned features to model the temporal dynamics of stance evolution for veracity prediction. Experimental results verify that Conversational-GCN can handle deep conversation structures effectively, and our hierarchical framework performs much better than existing methods. In future work, we shall explore to incorporate external context BIBREF16, BIBREF50, and extend our model to multi-lingual scenarios BIBREF51. Moreover, we shall investigate the diffusion process of rumors from social science perspective BIBREF52, draw deeper insights from there and try to incorporate them into the model design."
],
[
"This work was supported in part by the National Key R&D Program of China under Grant #2016QY02D0305, NSFC Grants #71621002, #71472175, #71974187 and #71602184, and Ministry of Health of China under Grant #2017ZX10303401-002. We thank all the anonymous reviewers for their valuable comments. We also thank Qianqian Dong for her kind assistance."
]
],
"section_name": [
"Introduction",
"Related Work",
"Problem Definition",
"Proposed Method",
"Proposed Method ::: Conversational-GCN: Aggregation-based Structure Modeling for Stance Prediction",
"Proposed Method ::: Stance-Aware RNN: Temporal Dynamics Modeling for Veracity Prediction",
"Proposed Method ::: Jointly Learning Two Tasks",
"Experiments",
"Experiments ::: Data & Evaluation Metric",
"Experiments ::: Implementation Details",
"Experiments ::: Experimental Results ::: Results: Rumor Stance Classification",
"Experiments ::: Experimental Results ::: Results: Rumor Veracity Prediction",
"Experiments ::: Further Analysis and Discussions",
"Experiments ::: Further Analysis and Discussions ::: Effect of Customized Graph Convolution",
"Experiments ::: Further Analysis and Discussions ::: Ablation Tests",
"Experiments ::: Further Analysis and Discussions ::: Interrelation of Stance and Veracity",
"Experiments ::: Case Study",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"964f58b459fd91591fa78ec84cc006a3e07def5a",
"f8c695fa9c87103a74fb32a9ef8f02848ed0a555"
],
"answer": [
{
"evidence": [
"The first is SemEval-2017 task 8 BIBREF16 dataset. It includes 325 rumorous conversation threads, and has been split into training, development and test sets. These threads cover ten events, and two events of that only appear in the test set. This dataset is used to evaluate both stance classification and veracity prediction tasks.",
"The second is PHEME dataset BIBREF48. It provides 2,402 conversations covering nine events. Following previous work, we conduct leave-one-event-out cross-validation: in each fold, one event's conversations are used for testing, and all the rest events are used for training. The evaluation metric on this dataset is computed after integrating the outputs of all nine folds. Note that only a subset of this dataset has stance labels, and all conversations in this subset are already contained in SemEval-2017 task 8 dataset. Thus, PHEME dataset is used to evaluate veracity prediction task."
],
"extractive_spans": [],
"free_form_answer": "SemEval-2017 task 8 dataset includes 325 rumorous conversation threads, and has been split into training, development and test sets. \nThe PHEME dataset provides 2,402 conversations covering nine events - in each fold, one event's conversations are used for testing, and all the rest events are used for training. ",
"highlighted_evidence": [
"The first is SemEval-2017 task 8 BIBREF16 dataset. It includes 325 rumorous conversation threads, and has been split into training, development and test sets. These threads cover ten events, and two events of that only appear in the test set. This dataset is used to evaluate both stance classification and veracity prediction tasks.\n\nThe second is PHEME dataset BIBREF48. It provides 2,402 conversations covering nine events. Following previous work, we conduct leave-one-event-out cross-validation: in each fold, one event's conversations are used for testing, and all the rest events are used for training. The evaluation metric on this dataset is computed after integrating the outputs of all nine folds. Note that only a subset of this dataset has stance labels, and all conversations in this subset are already contained in SemEval-2017 task 8 dataset. Thus, PHEME dataset is used to evaluate veracity prediction task."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The first is SemEval-2017 task 8 BIBREF16 dataset. It includes 325 rumorous conversation threads, and has been split into training, development and test sets. These threads cover ten events, and two events of that only appear in the test set. This dataset is used to evaluate both stance classification and veracity prediction tasks.",
"The second is PHEME dataset BIBREF48. It provides 2,402 conversations covering nine events. Following previous work, we conduct leave-one-event-out cross-validation: in each fold, one event's conversations are used for testing, and all the rest events are used for training. The evaluation metric on this dataset is computed after integrating the outputs of all nine folds. Note that only a subset of this dataset has stance labels, and all conversations in this subset are already contained in SemEval-2017 task 8 dataset. Thus, PHEME dataset is used to evaluate veracity prediction task."
],
"extractive_spans": [],
"free_form_answer": "SemEval-2017 task 8 dataset is split into train, development and test sets. Two events go into test set and eight events go to train and development sets for every thread in the dataset. PHEME dataset is split as leave-one-event-out cross-validation. One event goes to test and the rest of events go to training set for each conversation. Nine folds are created",
"highlighted_evidence": [
"The first is SemEval-2017 task 8 BIBREF16 dataset. It includes 325 rumorous conversation threads, and has been split into training, development and test sets. These threads cover ten events, and two events of that only appear in the test set.",
"The second is PHEME dataset BIBREF48. It provides 2,402 conversations covering nine events. Following previous work, we conduct leave-one-event-out cross-validation: in each fold, one event's conversations are used for testing, and all the rest events are used for training. The evaluation metric on this dataset is computed after integrating the outputs of all nine folds."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"1e76d595e00a523e8e669ca906b772ec55fbd78d",
"ce76894ba7149663a2496d02f77f58c3e9c4273b"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"After determining the stances of people's reactions, another challenge is how we can utilize public stances to predict rumor veracity accurately. We observe that the temporal dynamics of public stances can indicate rumor veracity. Figure FIGREF2 illustrates the stance distributions of tweets discussing $true$ rumors, $false$ rumors, and $unverified$ rumors, respectively. As we can see, $supporting$ stance dominates the inception phase of spreading. However, as time goes by, the proportion of $denying$ tweets towards $false$ rumors increases quite significantly. Meanwhile, the proportion of $querying$ tweets towards $unverified$ rumors also shows an upward trend. Based on this observation, we propose to model the temporal dynamics of stance evolution with a recurrent neural network (RNN), capturing the crucial signals containing in stance features for effective veracity prediction."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We observe that the temporal dynamics of public stances can indicate rumor veracity."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"77c0aa38112a7aab849c0d55eb8c678c01865ab9",
"bdfb52561f7837c4478846ee98e94357144f9cae"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Results of rumor stance classification. FS, FD, FQ and FC denote the F1 scores of supporting, denying, querying and commenting classes respectively. “–” indicates that the original paper does not report the metric.",
"FLOAT SELECTED: Table 3: Results of veracity prediction. Single-task setting means that stance labels cannot be used to train models."
],
"extractive_spans": [],
"free_form_answer": "Their model improves macro-averaged F1 by 0.017 over previous best model in Rumor Stance Classification and improves macro-averaged F1 by 0.03 and 0.015 on Multi-task Rumor Veracity Prediction on SemEval and PHEME datasets respectively",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Results of rumor stance classification. FS, FD, FQ and FC denote the F1 scores of supporting, denying, querying and commenting classes respectively. “–” indicates that the original paper does not report the metric.",
"FLOAT SELECTED: Table 3: Results of veracity prediction. Single-task setting means that stance labels cannot be used to train models."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 3: Results of veracity prediction. Single-task setting means that stance labels cannot be used to train models."
],
"extractive_spans": [],
"free_form_answer": "For single-task, proposed method show\noutperform by 0.031 and 0.053 Macro-F1 for SemEval and PHEME dataset respectively.\nFor multi-task, proposed method show\noutperform by 0.049 and 0.036 Macro-F1 for SemEval and PHEME dataset respectively.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 3: Results of veracity prediction. Single-task setting means that stance labels cannot be used to train models."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"How do they split the dataset when training and evaluating their models?",
"Do they demonstrate the relationship between veracity and stance over time in the Twitter dataset?",
"How much improvement does their model yield over previous methods?"
],
"question_id": [
"a9d5f83f4b32c52105f2ae1c570f1c590ac52487",
"288f0c003cad82b3db5e7231c189c0108ae7423e",
"562a995dfc8d95777aa2a3c6353ee5cd4a9aeb08"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"stance",
"stance",
"stance"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: A conversation thread discussing the rumorous tweet “1”. Three different perspectives for learning the stance feature of the reply tweet “2” are illustrated.",
"Figure 2: Stance distributions of tweets discussing true rumors, false rumors, and unverified rumors, respectively (Better viewed in color). The horizontal axis is the spreading time of the first rumor. It is visualized based on SemEval-2017 task 8 dataset (Derczynski et al., 2017). All tweets are relevant to the event “Ottawa Shooting”.",
"Figure 3: Overall architecture of our proposed framework for joint predictions of rumor stance and veracity. In this illustration, the number of GCN layers is one. The information aggregation process for the tweet t2 based on original graph convolution operation (Eq. (1)) is detailed.",
"Table 1: Statistics of two datasets. The column “Depth” denotes the average depth of all conversation threads.",
"Table 2: Results of rumor stance classification. FS, FD, FQ and FC denote the F1 scores of supporting, denying, querying and commenting classes respectively. “–” indicates that the original paper does not report the metric.",
"Table 3: Results of veracity prediction. Single-task setting means that stance labels cannot be used to train models.",
"Figure 4: Stance classification results w.r.t. different depths (see Appendix B for exact numerical numbers).",
"Figure 5: Veracity prediction results v.s. various values of λ on PHEME dataset. FF and FU denote the F1 scores of false and unverified classes respectively.",
"Table 4: Ablation tests of stance features and temporal modeling for veracity prediction on PHEME dataset.",
"Figure 6: Case study: a false rumor. Each tweet is colored by the number of dimensions it contributes to v in the max-pooling operation (Eq. (7)). We show important tweets in the conversation and truncate others.",
"Table 5: Stance classification results w.r.t. different depths."
],
"file": [
"1-Figure1-1.png",
"2-Figure2-1.png",
"4-Figure3-1.png",
"6-Table1-1.png",
"6-Table2-1.png",
"7-Table3-1.png",
"8-Figure4-1.png",
"8-Figure5-1.png",
"8-Table4-1.png",
"8-Figure6-1.png",
"12-Table5-1.png"
]
} | [
"How do they split the dataset when training and evaluating their models?",
"How much improvement does their model yield over previous methods?"
] | [
[
"1909.08211-Experiments ::: Data & Evaluation Metric-1",
"1909.08211-Experiments ::: Data & Evaluation Metric-2"
],
[
"1909.08211-6-Table2-1.png",
"1909.08211-7-Table3-1.png"
]
] | [
"SemEval-2017 task 8 dataset is split into train, development and test sets. Two events go into test set and eight events go to train and development sets for every thread in the dataset. PHEME dataset is split as leave-one-event-out cross-validation. One event goes to test and the rest of events go to training set for each conversation. Nine folds are created",
"For single-task, proposed method show\noutperform by 0.031 and 0.053 Macro-F1 for SemEval and PHEME dataset respectively.\nFor multi-task, proposed method show\noutperform by 0.049 and 0.036 Macro-F1 for SemEval and PHEME dataset respectively."
] | 373 |
2001.05540 | Insertion-Deletion Transformer | We propose the Insertion-Deletion Transformer, a novel transformer-based neural architecture and training method for sequence generation. The model consists of two phases that are executed iteratively, 1) an insertion phase and 2) a deletion phase. The insertion phase parameterizes a distribution of insertions on the current output hypothesis, while the deletion phase parameterizes a distribution of deletions over the current output hypothesis. The training method is a principled and simple algorithm, where the deletion model obtains its signal directly on-policy from the insertion model output. We demonstrate the effectiveness of our Insertion-Deletion Transformer on synthetic translation tasks, obtaining significant BLEU score improvement over an insertion-only model. | {
"paragraphs": [
[
"Neural sequence models BIBREF0, BIBREF1 typically generate outputs in an autoregressive left-to-right manner. These models have been successfully applied to a range of task, for example machine translation BIBREF2. They often rely on an encoder that processes the source sequence, and a decoder that generates the output sequence conditioned on the output of the encoder. The decoder will typically generate the target sequence one token at a time, in an autoregressive left-to-right fashion.",
"Recently, research in insertion-based non- or partially- autoregressive models has spiked BIBREF3, BIBREF4, BIBREF5, BIBREF6. These model are more flexible than their autoregressive counterparts. They can generate sequences in any order, and can benefit from parallel token generation. They can learn complex orderings (e.g., tree orderings) and may be more applicable to task like cloze question answering BIBREF6 and text simplification, where the order of generation is not naturally left to right, and the source sequence might not be fully observed. One recently proposed approach is the Insertion Transformer BIBREF3, where the target sequence is modelled with insertion-edits. As opposed to traditional sequence-to-sequence models, the Insertion Transformer can generate sequences in any arbitrary order, where left-to-right is a special case. Additionally, during inference, the model is endowed with parallel token generation capabilities. The Insertion Transformer can be trained to follow a soft balanced binary tree order, thus allowing the model to generate $n$ tokens in $O(\\log _2 n)$ iterations.",
"In this work we propose to generalize this insertion-based framework, we present a framework which emits both insertions and deletions. Our Insertion-Deletion Transformer consists of an insertion phase and a deletion phase that are executed iteratively. The insertion phase follows the typical insertion-based framework BIBREF3. However, in the deletion phase, we teach the model to do deletions with on-policy training. We sample an input sequence on-policy from the insertion model (with on-policy insertion errors), and teach the deletion model its appropriate deletions.",
"This insertion-deletion framework allows for flexible sequence generation, parallel token generation and text editing. In a conventional insertion-based model, if the model makes a mistake during generation, this cannot be undone. Introducing the deletion phase makes it possible to undo the mistakes made by the insertion model, since it is trained on the on-policy errors of the insertion phase. The deletion model extension also enables the framework to efficiently handle tasks like text simplification and style transfer by starting the decoding process from the original source sequence.",
"A concurrent work was recently proposed, called the Levenshtein Transformer (LevT) BIBREF7. The LevT framework also generates sequences with insertion and deletion operations. Our approach has some important distinctions and can be seen as a simplified version, for both the architecture and the training algorithm. The training algorithm used in the LevT framework uses an expert policy. This expert policy requires dynamic programming to minimize Levenshtein distance between the current input and the target. This approach was also explored by BIBREF8, BIBREF9. Their learning algorithm arguably adds more complexity than needed over the simple on-policy method we propose. The LevT framework consists of three stages, first the number of tokens to be inserted is predicted, then the actual tokens are predicted, and finally the deletion actions are emitted. The extra classifier to predict the number of tokens needed to be inserted adds an additional Transformer pass to each generation step. In practice, it is also unclear whether the LevT exhibits speedups over an insertion-based model following a balanced binary tree order. In contrast, our Insertion-Deletion framework only has one insertion phase and one deletion phase, without the need to predict the number of tokens needed to be inserted. This greatly simplifies the model architecture, training procedure and inference runtime.",
"An alternative approach for text editing is proposed by BIBREF10, which they dub Deliberation Networks. This work also acknowledges the potential benefits from post-editing output sequences and proposes a two-phase decoding framework to facilitate this.",
"In this paper, we present the insertion-deletion framework as a proof of concept by applying it to two synthetic character-based translation tasks and showing it can significantly increase the BLEU score over the insertion-only framework."
],
[
"In this section, we describe our Insertion-Deletion model. We extend the Insertion Transformer BIBREF3, an insertion-only framework to handle both insertions and deletions.",
"First, we describe the insertion phase. Given an incomplete (or empty) target sequence $\\vec{y}_{t}$ and a permutation of indices representing the generation order $\\vec{z}$, the Insertion Transformer generates a sequence of insertion operations that produces a complete output sequence $\\vec{y}$ of length $n$. It does this by iteratively extending the current sequence $\\vec{y}_{t}$. In parallel inference, the model predicts a token to be inserted at each location $[1, t]$. We denote tokens by $c \\in C$, where $C$ represents the vocabulary and locations by $l \\in \\lbrace 1, \\dots , |\\vec{y}_t|\\rbrace $. If the insertion model predicts the special symbol denoting an end-of-sequence, the insertions at that location stop. The insertion model will induce a distribution of insertion edits of content $c$ at location $l$ via $p(c, l | \\hat{y}_t)$.",
"The insertion phase is followed by the deletion phase. The deletion model defines a probability distribution over the entire current hypothesis $\\vec{y}_t$, where for each token we capture whether we want to delete it. We define $d \\in [0, 1]$, where $d = 0$ denotes the probability of not deleting and $d = 1$ of deleting a token. The model induces a deletion distribution $p(d, l | \\vec{y}_t)$ representing whether to delete at each location $l \\in [0, |\\vec{y}_t|]$.",
"One full training iteration consisting of an insertion phase followed by a deletion phase can be represented by the following steps:",
"Sample a generation step $i \\sim \\text{Uniform}([1, n])$",
"Sample a partial permutation $z_{1:i-1} \\sim p(z_{1:i-1})$ for the first $i - 1$ insertions",
"Pass this sequence through the insertion model to get the probability distribution over $p(c_i^z \\mid x_{1:i-1}^{z, i-1})$ (denote $\\hat{x}_t$ short for $x_{1:i-1}^{z, i-1}$).",
"Insert the predicted tokens into the current sequence $\\hat{x}_t$ to get sequence $x_{1:i-1+n^i}^{z, i-1+n^i}$ (where $n^i$ denotes the number of insertions, shorten $x_{1:i-1+n^i}^{z, i-1+n^i}$ by $\\hat{x}^*_t$) and pass it through the deletion model.",
"The output of the deletion model represents the probability distribution $p(d_l \\mid l, \\hat{x}^*_t) \\quad \\forall \\quad l \\in \\lbrace 1, \\dots , t\\rbrace $"
],
[
"We parametrize both the insertion and deletion probability distributions with two stacked transformer decoders, where $\\theta _i$ denotes the parameters of the insertion model and $\\theta _d$ of the deletion model. The models are trained at the same time, where the deletion model's signal is dependent on the state of the current insertion model. For sampling from the insertion model we take the argument that maximizes the probability of the current sequence via parallel decoding: $\\hat{c}_l = \\arg \\max _{c}p(c, \\mid l, \\hat{x}_t)$. We do not backpropagate through the sampling process, i.e., the gradient during training can not flow from the output of the deletion model through the insertion model. Both models are trained to maximize the log-probability of their respective distributions. A graphical depiction of the model is shown in Figure FIGREF7.",
"Since the signal for the deletion model is dependent on the insertion model's state, it is possible that the deletion model does not receive a learning signal during training. This happens when either the insertion model is too good and never inserts a wrong token, or when the insertion model does not insert anything at all. To mitigate this problem we propose an adversarial sampling method. To ensure that the deletion model always has a signal, with some probability $p_{\\text{adv}}$ we mask the ground-truth tokens in the target for the insertion model during training. This has the effect that when selecting the token to insert in the input sequence, before passing it to the deletion model, the insertion model selects the incorrect token it is most confident about. Therefore, the deletion model always has a signal and trains for a situation that it will most likely also encounter during inference."
],
[
"We demonstrate the capabilities of our Insertion-Deletion model through experiments on synthetic translation datasets. We show how the addition of deletion improves BLEU score, and how the insertion and deletion model interact as shown in Table TABREF9. We found that adversarial deletion training did not improve BLEU scores on these synthetic tasks. However, the adversarial training scheme can still be helpful when the deletion model does not receive a signal during training by sampling from the insertion model alone (i.e., when the insertion-model does not make any errors)."
],
[
"The first task we train the insertion-deletion model on is shifting alphabetic sequences. For generation of data we sample a sequence length $\\text{min}_n <= n < \\text{max}_n$ from a uniform distribution where $\\text{min}_n = 3$ and $\\text{max}_n = 10$. We then uniformly sample the starting token and finish the alphabetic sequence until it has length $n$. For a sampled $n = 5$ and starting letter $\\text{c}$, shifting each letter by $\\text{max}_n$ to ensure the source and target have no overlapping sequence, here is one example sequence:",
"Source $ c\\ d\\ e\\ f\\ g $",
"Target $ m\\ n\\ o\\ p\\ q $",
"We generate 1000 of examples for training, and evaluate on 100 held-out examples. Table TABREF10 reports our BLEU. We train our models for 200k steps, batch size of 32 and perform no model selection. We see our Insertion-Deletion Transformer model outperforms the Insertion Transformer significantly on this task. One randomly chosen example of the interaction between the insertion and the deletion model during a decoding step is shown in Table TABREF9."
],
[
"The shifted alphabetic sequence task should be trivial to solve for a powerful sequence to sequence model implemented with Transformers. The next translation task we teach the model is Caesar's cipher. This is an old encryption method, in which each letter in the source sequence is replaced by a letter some fixed number of positions down the alphabet. The sequences do not need to be in alphabetic order, meaning the diversity of input sequences will be much larger than with the previous task. We again sample a $\\text{min}_n <= n < \\text{max}_n$, where $\\text{min}_n = 3$ and $\\text{max}_n = 25$ this time. We shift each letter in the source sequence by $\\text{max}_n = 25$. If the sampled $n$ is 5, we randomly sample 5 letters from the alphabet and shift each letter in the target to the left by one character we get the following example:",
"Source $ h\\ k\\ b\\ e\\ t $",
"Target $ g\\ j\\ a\\ d\\ s $",
"We generate 100k examples to train on, and evaluate on 1000 held-out examples. We train our models for 200k steps, batch size of 32 and perform no model selection. The table below shows that the deletion model again increases the BLEU score over just the insertion model, by around 2 BLEU points."
],
[
"In this work we proposed the Insertion-Deletion transformer, that can be implemented with a simple stack of two Transformer decoders, where the top deletion transformer layer gets its signal from the bottom insertion transformer. We demonstrated the capabilities of the model on two synthetic data sets and showed that the deletion model can significantly increase the BLEU score on simple tasks by iteratively refining the output sequence via sequences of insertion-deletions. The approach can be applied to tasks with variable length input and output sequences, like machine translation, without any adjustments by allowing the model to perform as many insertion and deletion phases as necessary until a maximum amount of iterations is reached or the model predicted an end-of-sequence token for all locations. In future work, we want to verify the capabilities of the model on non-synthetic data for tasks like machine translation, paraphrasing and style transfer, where in the latter two tasks we can efficiently utilize the model's capability of starting the decoding process from the source sentence and iteratively edit the text."
]
],
"section_name": [
"Introduction and Related Work",
"Method",
"Method ::: Learning",
"Experiments",
"Experiments ::: Learning shifted alphabetic sequences",
"Experiments ::: Learning Caesar's Cipher",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"3abf3b8fd8647ab9bb506667148bd49ab35bab78",
"66ed26eafa0c9053eb8edd5eb01b20591e96d8ad"
],
"answer": [
{
"evidence": [
"The shifted alphabetic sequence task should be trivial to solve for a powerful sequence to sequence model implemented with Transformers. The next translation task we teach the model is Caesar's cipher. This is an old encryption method, in which each letter in the source sequence is replaced by a letter some fixed number of positions down the alphabet. The sequences do not need to be in alphabetic order, meaning the diversity of input sequences will be much larger than with the previous task. We again sample a $\\text{min}_n <= n < \\text{max}_n$, where $\\text{min}_n = 3$ and $\\text{max}_n = 25$ this time. We shift each letter in the source sequence by $\\text{max}_n = 25$. If the sampled $n$ is 5, we randomly sample 5 letters from the alphabet and shift each letter in the target to the left by one character we get the following example:",
"Source $ h\\ k\\ b\\ e\\ t $",
"Target $ g\\ j\\ a\\ d\\ s $",
"The first task we train the insertion-deletion model on is shifting alphabetic sequences. For generation of data we sample a sequence length $\\text{min}_n <= n < \\text{max}_n$ from a uniform distribution where $\\text{min}_n = 3$ and $\\text{max}_n = 10$. We then uniformly sample the starting token and finish the alphabetic sequence until it has length $n$. For a sampled $n = 5$ and starting letter $\\text{c}$, shifting each letter by $\\text{max}_n$ to ensure the source and target have no overlapping sequence, here is one example sequence:",
"Source $ c\\ d\\ e\\ f\\ g $",
"Target $ m\\ n\\ o\\ p\\ q $"
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We shift each letter in the source sequence by $\\text{max}_n = 25$. If the sampled $n$ is 5, we randomly sample 5 letters from the alphabet and shift each letter in the target to the left by one character we get the following example:\n\nSource $ h\\ k\\ b\\ e\\ t $\n\nTarget $ g\\ j\\ a\\ d\\ s $",
"For a sampled $n = 5$ and starting letter $\\text{c}$, shifting each letter by $\\text{max}_n$ to ensure the source and target have no overlapping sequence, here is one example sequence:\n\nSource $ c\\ d\\ e\\ f\\ g $\n\nTarget $ m\\ n\\ o\\ p\\ q $"
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"First, we describe the insertion phase. Given an incomplete (or empty) target sequence $\\vec{y}_{t}$ and a permutation of indices representing the generation order $\\vec{z}$, the Insertion Transformer generates a sequence of insertion operations that produces a complete output sequence $\\vec{y}$ of length $n$. It does this by iteratively extending the current sequence $\\vec{y}_{t}$. In parallel inference, the model predicts a token to be inserted at each location $[1, t]$. We denote tokens by $c \\in C$, where $C$ represents the vocabulary and locations by $l \\in \\lbrace 1, \\dots , |\\vec{y}_t|\\rbrace $. If the insertion model predicts the special symbol denoting an end-of-sequence, the insertions at that location stop. The insertion model will induce a distribution of insertion edits of content $c$ at location $l$ via $p(c, l | \\hat{y}_t)$."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Given an incomplete (or empty) target sequence $\\vec{y}_{t}$ and a permutation of indices representing the generation order $\\vec{z}$, the Insertion Transformer generates a sequence of insertion operations that produces a complete output sequence $\\vec{y}$ of length $n$."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"1ee72bb5989357c06f802f997a74b29773b438ff",
"69d36e8d50d55259f44b8452e17246786e117194"
],
"answer": [
{
"evidence": [
"We generate 100k examples to train on, and evaluate on 1000 held-out examples. We train our models for 200k steps, batch size of 32 and perform no model selection. The table below shows that the deletion model again increases the BLEU score over just the insertion model, by around 2 BLEU points."
],
"extractive_spans": [
" deletion model again increases the BLEU score over just the insertion model, by around 2 BLEU points"
],
"free_form_answer": "",
"highlighted_evidence": [
"The table below shows that the deletion model again increases the BLEU score over just the insertion model, by around 2 BLEU points."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We generate 1000 of examples for training, and evaluate on 100 held-out examples. Table TABREF10 reports our BLEU. We train our models for 200k steps, batch size of 32 and perform no model selection. We see our Insertion-Deletion Transformer model outperforms the Insertion Transformer significantly on this task. One randomly chosen example of the interaction between the insertion and the deletion model during a decoding step is shown in Table TABREF9.",
"FLOAT SELECTED: Table 3: BLEU scores for the Caesar’s cipher task.",
"FLOAT SELECTED: Table 2: BLEU scores for the sequence shifting task."
],
"extractive_spans": [],
"free_form_answer": "Learning shifted alphabetic sequences: 21.34\nCaesar's Cipher: 2.02",
"highlighted_evidence": [
"Table TABREF10 reports our BLEU. We train our models for 200k steps, batch size of 32 and perform no model selection. We see our Insertion-Deletion Transformer model outperforms the Insertion Transformer significantly on this task. One randomly chosen example of the interaction between the insertion and the deletion model during a decoding step is shown in Table TABREF9.",
"FLOAT SELECTED: Table 3: BLEU scores for the Caesar’s cipher task.",
"FLOAT SELECTED: Table 2: BLEU scores for the sequence shifting task."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
}
],
"nlp_background": [
"zero",
"zero"
],
"paper_read": [
"no",
"no"
],
"question": [
"Is this model trained in unsuperized manner?",
"How much is BELU score difference between proposed approach and insertion-only method?"
],
"question_id": [
"ed15a593d64a5ba58f63c021ae9fd8f50051a667",
"e86fb784011de5fda6ff8ccbe4ee4deadd7ee7d6"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 3: BLEU scores for the Caesar’s cipher task.",
"Figure 1: Insertion-Deletion Transformer; reads from bottom to top. The bottom row are the source and target sequence, as sampled according to step 1 and 2 in Section 2.1. These are passed through the models to create an output sequence. [CLS] and [SEP] are separator tokens, described in more detail in the BERT paper (Devlin et al., 2018). Note that allowing insertions on the input side is not necessary but trains a model that can be conditioned on the input sequence to generate the target as well as vice versa. For details refer to (Chan et al., 2019)",
"Table 2: BLEU scores for the sequence shifting task.",
"Table 1: Example decoding iteration during inference. Here [ ] denotes a space and insertions are inserted to the left of each token in the target sequence (occurring after [SEP])."
],
"file": [
"3-Table3-1.png",
"4-Figure1-1.png",
"4-Table2-1.png",
"4-Table1-1.png"
]
} | [
"How much is BELU score difference between proposed approach and insertion-only method?"
] | [
[
"2001.05540-Experiments ::: Learning shifted alphabetic sequences-3",
"2001.05540-Experiments ::: Learning Caesar's Cipher-3",
"2001.05540-4-Table2-1.png",
"2001.05540-3-Table3-1.png"
]
] | [
"Learning shifted alphabetic sequences: 21.34\nCaesar's Cipher: 2.02"
] | 375 |
1805.00195 | An Annotated Corpus for Machine Reading of Instructions in Wet Lab Protocols | We describe an effort to annotate a corpus of natural language instructions consisting of 622 wet lab protocols to facilitate automatic or semi-automatic conversion of protocols into a machine-readable format and benefit biological research. Experimental results demonstrate the utility of our corpus for developing machine learning approaches to shallow semantic parsing of instructional texts. We make our annotated Wet Lab Protocol Corpus available to the research community. | {
"paragraphs": [
[
"As the complexity of biological experiments increases, there is a growing need to automate wet laboratory procedures to avoid mistakes due to human error and also to enhance the reproducibility of experimental biological research BIBREF0 . Several efforts are currently underway to define machine-readable formats for writing wet lab protocols BIBREF1 , BIBREF2 , BIBREF3 . The vast majority of today's protocols, however, are written in natural language with jargon and colloquial language constructs that emerge as a byproduct of ad-hoc protocol documentation. This motivates the need for machine reading systems that can interpret the meaning of these natural language instructions, to enhance reproducibility via semantic protocols (e.g. the Aquarium project) and enable robotic automation BIBREF4 by mapping natural language instructions to executable actions.",
"In this study we take a first step towards this goal by annotating a database of wet lab protocols with semantic actions and their arguments; and conducting initial experiments to demonstrate its utility for machine learning approaches to shallow semantic parsing of natural language instructions. To the best of our knowledge, this is the first annotated corpus of natural language instructions in the biomedical domain that is large enough to enable machine learning approaches.",
"There have been many recent data collection and annotation efforts that have initiated natural language processing research in new directions, for example political framing BIBREF5 , question answering BIBREF6 and cooking recipes BIBREF7 . Although mapping natural language instructions to machine readable representations is an important direction with many practical applications, we believe current research in this area is hampered by the lack of available annotated corpora. Our annotated corpus of wet lab protocols could enable further research on interpreting natural language instructions, with practical applications in biology and life sciences.",
"Prior work has explored the problem of learning to map natural language instructions to actions, often learning through indirect supervision to address the lack of labeled data in instructional domains. This is done, for example, by interacting with the environment BIBREF8 , BIBREF9 or observing weakly aligned sequences of instructions and corresponding actions BIBREF10 , BIBREF11 . In contrast, we present the first steps towards a pragmatic approach based on linguistic annotation (Figure FIGREF4 ). We describe our effort to exhaustively annotate wet lab protocols with actions corresponding to lab procedures and their attributes including materials, instruments and devices used to perform specific actions. As we demonstrate in § SECREF6 , our corpus can be used to train machine learning models which are capable of automatically annotating lab-protocols with action predicates and their arguments BIBREF12 , BIBREF13 ; this could provide a useful linguistic representation for robotic automation BIBREF14 and other downstream applications."
],
[
"Wet laboratories are laboratories for conducting biology and chemistry experiments which involve chemicals, drugs, or other materials in liquid solutions or volatile phases. Figure FIGREF2 shows one representative wet lab protocol. Research groups around the world curate their own repositories of protocols, each adapted from a canonical source and typically published in the Materials and Method section at the end of a scientific article in biology and chemistry fields. Only recently has there been an effort to gather collections of these protocols and make them easily available. Leveraging an openly accessible repository of protocols curated on the https://www.protocols.io platform, we annotated hundreds of academic and commercial protocols maintained by many of the leading bio-science laboratory groups, including Verve Net, Innovative Genomics Institute and New England Biolabs. The protocols cover a large spectrum of experimental biology, including neurology, epigenetics, metabolomics, cancer and stem cell biology, etc (Table TABREF5 ). Wet lab protocols consist of a sequence of steps, mostly composed of imperative statements meant to describe an action. They also can contain declarative sentences describing the results of a previous action, in addition to general guidelines or warnings about the materials being used."
],
[
"In developing our annotation guidelines we had three primary goals: (1) We aim to produce a semantic representation that is well motivated from a biomedical and linguistic perspective; (2) The guidelines should be easily understood by annotators with or without biology background, as evaluated in Table TABREF7 ; (3) The resulting corpus should be useful for training machine learning models to automatically extract experimental actions for downstream applications, as evaluated in § SECREF6 .",
"We utilized the EXACT2 framework BIBREF2 as a basis for our annotation scheme. We borrowed and renamed 9 object-based entities from EXACT2, in addition, we created 5 measure-based (Numerical, Generic-Measure, Size, pH, Measure-Type) and 3 other (Mention, Modifier, Seal) entity types. EXACT2 connects the entities directly to the action without describing the type of relations, whereas we defined and annotated 12 types of relations between actions and entities, or pairs of entities (see Appendix for a full description).",
"For each protocol, the annotators were requested to identify and mark every span of text that corresponds to one of 17 types of entities or an action (see examples in Figure FIGREF3 ). Intersection or overlap of text spans, and the subdivision of words between two spans were not allowed. The annotation guideline was designed to keep the span short for entities, with the average length being 1.6 words. For example, Concentration tags are often very short: 60% 10x, 10M, 1 g/ml. The Method tag has the longest average span of 2.232 words with examples such as rolling back and forth between two hands. The methods in wet lab protocols tend to be descriptive, which pose distinct challenges from existing named entity extraction research in the medical BIBREF15 and other domains. After all entities were labelled, the annotators connected pairs of spans within each sentence by using one of 12 directed links to capture various relationships between spans tagged in the protocol text. While most protocols are written in scientific language, we also observe some non-standard usage, for example using RT to refer to room temperature, which is tagged as Temperature."
],
[
"Our final corpus consists of 622 protocols annotated by a team of 10 annotators. Corpus statistics are provided in Table TABREF5 and TABREF6 . In the first phase of annotation, we worked with a subset of 4 annotators including one linguist and one biologist to develop the annotation guideline for 6 iterations. For each iteration, we asked all 4 annotators to annotate the same 10 protocols and measured their inter-annotator agreement, which in turn helped in determining the validity of the refined guidelines. The average time to annotate a single protocol of 40 sentences was approximately 33 minutes, across all annotators."
],
[
"We used Krippendorff's INLINEFORM0 for nominal data BIBREF16 to measure the inter-rater agreement for entities, actions and relations. For entities, we measured agreement at the word-level by tagging each word in a span with the span's label. To evaluate inter-rater agreement for relations between annotated spans, we consider every pair of spans within a step and then test for matches between annotators (partial entity matches are allowed). We then compute Krippendorff's INLINEFORM1 over relations between matching pairs of spans. Inter-rater agreement for entities, actions and relations is presented in Figure TABREF7 ."
],
[
"To demonstrate the utility of our annotated corpus, we explore two machine learning approaches for extracting actions and entities: a maximum entropy model and a neural network tagging model. We also present experiments for relation classification. We use the standard precision, recall and F INLINEFORM0 metrics to evaluate and compare the performance."
],
[
"In the maximum entropy model for action and entity extraction BIBREF17 , we used three types of features based on the current word and context words within a window of size 2:",
"",
"Parts of speech features which were generated by the GENIA POS Tagger BIBREF18 , which is specifically tuned for biomedical texts;",
"",
"Lexical features which include unigrams, bigrams as well as their lemmas and synonyms from WordNet BIBREF19 are used;",
"",
"Dependency parse features which include dependent and governor words as well as the dependency type to capture syntactic information related to actions, entities and their contexts. We used the Stanford dependency parser BIBREF20 ."
],
[
"We utilized the state-of-the-art Bidirectional LSTM with a Conditional Random Fields (CRF) layer BIBREF21 , BIBREF22 , BIBREF23 , initialized with 200-dimentional word vectors pretrained on 5.5 billion words from PubMed and PMC biomedical texts BIBREF24 . Words unseen in the pretrained vocabulary were randomly initialized using a uniform distribution in the range (-0.01, 0.01). We used Adadelta BIBREF25 optimization with a mini-batch of 16 sentences and trained each network with 5 different random seeds, in order to avoid any outlier results due to randomness in the model initialization."
],
[
"To demonstrate the utility of the relation annotations, we also experimented with a maximum entropy model for relation classification using features shown to be effective in prior work BIBREF26 , BIBREF27 , BIBREF28 . The features are divided into five groups:",
"",
"Word features which include the words contained in both arguments, all words in between, and context words surrounding the arguments;",
"",
"Entity type features which include action and entity types associated with both arguments;",
"",
"Overlapping features which are the number of words, as well as actions or entities, in between the candidate entity pair;",
"",
"Chunk features which are the chunk tags of both arguments predicted by the GENIA tagger;",
"",
"Dependency features which are context words related to the arguments in the dependency tree according to the Stanford Dependency Parser.",
"Also included are features indicating whether the two spans are in the same noun phrase, prepositional phrase, or verb phrase.",
"Finally, precision and recall at relation extraction are presented in Table 5. We used gold action and entity segments for the purposes of this particular evaluation. We obtained the best performance when using all feature sets."
],
[
"The full annotated dataset of 622 protocols are randomly split into training, dev and test sets using a 6:2:2 ratio. The training set contains 374 protocols of 8207 sentences, development set contains 123 protocols of 2736 sentences, and test set contains 125 protocols of 2736 sentences. We use the evaluation script from the CoNLL-03 shared task BIBREF29 , which requires exact matches of label spans and does not reward partial matches. During the data preprocessing, all digits were replaced by `0'."
],
[
"Table TABREF20 shows the performance of various methods for entity tagging. We found that the BiLSTM-CRF model consistently outperforms other methods, achieving an overall F1 score of 86.89 at identifying action triggers and 72.61 at identifying and classifying entities.",
"Table TABREF22 shows the system performance of the MaxEnt tagger using various features. Dependency based features have the highest impact on the detection of entities, as illustrated by the absolute drop of 7.84% in F-score when removed. Parts of speech features alone are the most effective in capturing action words. This is largely due to action words appearing as verbs or nouns in the majority of the sentences as shown in Table TABREF23 . We also notice that the GENIA POS tagger, which is is trained on Wall Street Journal and biomedical abstracts in the GENIA and PennBioIE corpora, under-identifies verbs in wet lab protocols. We suspect this is due to fewer imperative sentences in the training data. We leave further investigation for future work, and hope the release of our dataset can help draw more attention to NLP research on instructional languages."
],
[
"In this paper, we described our effort to annotate wet lab protocols with actions and their semantic arguments. We presented an annotation scheme that is both biologically and linguistically motivated and demonstrated that non-experts can effectively annotate lab protocols. Additionally, we empirically demonstrated the utility of our corpus for developing machine learning approaches to shallow semantic parsing of instructions. Our annotated corpus of protocols is available for use by the research community."
],
[
"We would like to thank the annotators: Bethany Toma, Esko Kautto, Sanaya Shroff, Alex Jacobs, Berkay Kaplan, Colins Sullivan, Junfa Zhu, Neena Baliga and Vardaan Gangal. We would like to thank Marie-Catherine de Marneffe and anonymous reviewers for their feedback."
],
[
"The wet lab protocol dataset annotation guidelines were designed primarily to provide a simple description of the various actions and their arguments in protocols so that it could be more accessible and be effectively used by non-biologists who may want to use this dataset for various natural language processing tasks such as action trigger detection or relation extraction. In the following sub-sections we summarize the guidelines that were used in annotating the 622 protocols as we explore the actions, entities and relations that were chosen to be labelled in this dataset."
],
[
"Under a broad categorization, Action is a process of doing something, typically to achieve an aim. In the context of wet lab protocols, action mentions in a sentence or a step are deliberate but short descriptions of a task tying together various entities in a meaningful way. Some examples of action words, (categorized using GENIA POS tagger), are present in Table TABREF23 along with their frequencies."
],
[
"We broadly classify entities commonly seen in protocols under 17 tags. Each of the entity tags were designed to encourage short span length, with the average number of words per entity tag being INLINEFORM0 . For example, Concentration tags are often very short: 60% 10x, 10M, 1 g/ml, while the Method tag has the longest average span of INLINEFORM1 words with examples such as rolling back and forth between two hands (as seen in Figure FIGREF28 ). The methods in wet lab protocols tend to be descriptive, which pose distinct challenges from existing named entity extraction research in the medical and other domains.",
"Reagent: A substance or mixture for use in any kind of reaction in preparing a product because of its chemical or biological activity.",
"Location: Containers for reagents or other physical entities. They lack any operation capabilities other than acting as a container. These could be laboratory glassware or plastic tubing meant to hold chemicals or biological substances.",
"Device: A machine capable of acting as a container as well as performing a specific task on the objects that it holds. A device and a location are similar in all aspects except that a device performs a specific set of operations on its contents, usually illustrated in the sentence itself, or sometimes implied.",
"Seal: Any kind of lid or enclosure for the location or device. It could be a cap, or a membrane that actively participates in the protocol action, and hence is essential to capture this type of entity.",
"Amount: The amount of any reagent being used in a given step, in terms of weight or volume.",
"Concentration: Measure of the relative proportions of two or more quantities in a mixture. Usually in terms of their percentages by weight or volume.",
"Time: Duration of a specific action described in a single step or steps, typically in secs, min, days, or weeks.",
"Temperature: Any temperature mentioned in degree Celsius, Fahrenheit, or Kelvin.",
"Method: A word or phrase used to concisely define the procedure to be performed in association with the chosen action verb. It’s usually a noun, but could also be a passive verb.",
"Speed: Typically a measure that represents rotation per min for centrifuges.",
"Numerical: A generic tag for a number that doesn't fit time, temp, etc and which isn't accompanied by its unit of measure.",
"Generic-Measure: Any measures that don't fit the list of defined measures in this list.",
"Size A measure of the dimension of an object. For example: length, area or thickness.",
"Measure-Type: A generic tag to mark the type of measurement associated with a number.",
"pH: measure of acidity or alkalinity of a solution.",
"Modifier: A word or a phrase that acts as an additional description of the entity it is modifying. For example, quickly mix vs slowly mix are clearly two different actions, informed by their modifiers \"quickly\" or \"slowly\" respectively.",
"Mention: Words that can refer to an object mentioned earlier in the sentence."
],
[
"Acts-On: Links the reagent, or location that the action acts on, typically linking the direct objects in the sentence to the action.",
"Creates: This relation marks the physical entity that the action creates.",
"Site: A link that associates a Location or Device to an action. It indicates that the Device or Location is the site where the action is performed. It is also used as a way to indicate which entity will finally hold/contain the result of the action.",
"Using: Any entity that the action verb makes ‘use’ of is linked with this relation.",
"Setting: Any measure type entity that is being used to set a device is linked to the action that is attempting to use that numerical.",
"Count: A Numerical entity that represents the number of times the action should take place.",
"Measure Type Link: Associates an action to a Measure Type entity that the Action is instructing to measure.",
"Coreference: A link that associates two phrases when those two phrases refer to the same entity.",
"Mod Link: A Modifier entity is linked to any entity that it is attempting to modify using this relation.",
"Settings: Links devices to their settings directly, only if there is no Action associated with those settings.",
"Measure: A link that associates the various numerical measures to the entity its trying to measure directly.",
"Meronym: Links reagents, locations or devices with materials contained in the reagent, location or device.",
"Or: Allows chaining multiple entities where either of them can be used for a given link.",
"Of-Type: used to specify the Measure-Type of a Generic-Measure or a Numerical, if the sentence contains this information."
]
],
"section_name": [
"Introduction",
"Wet Lab Protocols",
"Annotation Scheme",
"Annotation Process",
"Inter-Annotator Agreement",
"Methods",
"Maximum Entropy (MaxEnt) Tagger",
"Neural Sequence Tagger",
"Relation Classification",
"Results",
"Entity Identification and Classification",
"Conclusions",
"Acknowledgement",
"Annotation Guidelines",
"Actions",
"Entities",
"Relations"
]
} | {
"answers": [
{
"annotation_id": [
"90f2a35726b939353f74bc813fe072d93703d3fe",
"c999efe2a60871c4c96dd94cb732e30c1d9a2ecd"
],
"answer": [
{
"evidence": [
"Our final corpus consists of 622 protocols annotated by a team of 10 annotators. Corpus statistics are provided in Table TABREF5 and TABREF6 . In the first phase of annotation, we worked with a subset of 4 annotators including one linguist and one biologist to develop the annotation guideline for 6 iterations. For each iteration, we asked all 4 annotators to annotate the same 10 protocols and measured their inter-annotator agreement, which in turn helped in determining the validity of the refined guidelines. The average time to annotate a single protocol of 40 sentences was approximately 33 minutes, across all annotators."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Our final corpus consists of 622 protocols annotated by a team of 10 annotators. "
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"In developing our annotation guidelines we had three primary goals: (1) We aim to produce a semantic representation that is well motivated from a biomedical and linguistic perspective; (2) The guidelines should be easily understood by annotators with or without biology background, as evaluated in Table TABREF7 ; (3) The resulting corpus should be useful for training machine learning models to automatically extract experimental actions for downstream applications, as evaluated in § SECREF6 ."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"In developing our annotation guidelines we had three primary goals: (1) We aim to produce a semantic representation that is well motivated from a biomedical and linguistic perspective; (2) The guidelines should be easily understood by annotators with or without biology background, as evaluated in Table TABREF7 ; (3) The resulting corpus should be useful for training machine learning models to automatically extract experimental actions for downstream applications, as evaluated in § SECREF6 ."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"1ee8a3627f53c85b3130abbd311e363f409a0c9b",
"a1e7020ebb9dd76a25a078d378c87d13cf816890"
],
"answer": [
{
"evidence": [
"To demonstrate the utility of our annotated corpus, we explore two machine learning approaches for extracting actions and entities: a maximum entropy model and a neural network tagging model. We also present experiments for relation classification. We use the standard precision, recall and F INLINEFORM0 metrics to evaluate and compare the performance."
],
"extractive_spans": [
"maximum entropy",
"neural network tagging model"
],
"free_form_answer": "",
"highlighted_evidence": [
"To demonstrate the utility of our annotated corpus, we explore two machine learning approaches for extracting actions and entities: a maximum entropy model and a neural network tagging model. We also present experiments for relation classification."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 4: F1 scores for segmenting and classifying entities and action triggers compared across the various models.",
"Table TABREF20 shows the performance of various methods for entity tagging. We found that the BiLSTM-CRF model consistently outperforms other methods, achieving an overall F1 score of 86.89 at identifying action triggers and 72.61 at identifying and classifying entities."
],
"extractive_spans": [],
"free_form_answer": "MaxEnt, BiLSTM, BiLSTM+CRF",
"highlighted_evidence": [
"FLOAT SELECTED: Table 4: F1 scores for segmenting and classifying entities and action triggers compared across the various models.",
"Table TABREF20 shows the performance of various methods for entity tagging. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"",
""
],
"paper_read": [
"",
""
],
"question": [
"are the protocols manually annotated?",
"what ML approaches did they experiment with?"
],
"question_id": [
"d206f2cbcc3d2a6bd0ccaa3b57fece396159f609",
"633e2210c740b4558b1eea3f041b3ae8e0813293"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
""
],
"topic_background": [
"",
""
]
} | {
"caption": [
"Figure 2: Example sentences (#5 and #6) from the lab protocol in Figure 1 as shown in the BRAT annotation interface.",
"Figure 3: An action graph can be directly derived from annotations as seen in Figure 2 (example sentence #6) .",
"Table 1: Statistics of our Wet Lab Protocol Corpus by protocol category.",
"Table 3: Inter-annotator agreement (Krippendorff’s α) between annotators with biology, linguistics and other backgrounds.",
"Table 2: Statistics of the Wet Lab Protocol Corpus.",
"Table 4: F1 scores for segmenting and classifying entities and action triggers compared across the various models.",
"Table 5: Precision, Recall and F1 (micro-average) of the maximum entropy model for relation classification, as each feature is added.",
"Table 6: Performance of maximum entropy model with various features.*The POS features are especially useful for recognizing actions; dependency based features are more helpful for entities than actions.",
"Table 7: Frequency of different part-of-speech (POS) tags for action words. Majority of the action words either fall under the verb POS tags (VBs 60.48%) or nouns (NNs 30.84%). The GENIA POS tagger is under-identifying verbs in the wet lab protocols, tagging some as adjectives (JJ).",
"Figure 4: Examples, Frequency and Avg-Word for actions and entities.",
"Table 8: Relations along with their rules and examples"
],
"file": [
"2-Figure2-1.png",
"2-Figure3-1.png",
"3-Table1-1.png",
"3-Table3-1.png",
"3-Table2-1.png",
"4-Table4-1.png",
"4-Table5-1.png",
"5-Table6-1.png",
"5-Table7-1.png",
"8-Figure4-1.png",
"9-Table8-1.png"
]
} | [
"what ML approaches did they experiment with?"
] | [
[
"1805.00195-4-Table4-1.png",
"1805.00195-Entity Identification and Classification-0",
"1805.00195-Methods-0"
]
] | [
"MaxEnt, BiLSTM, BiLSTM+CRF"
] | 376 |
1612.02695 | Towards better decoding and language model integration in sequence to sequence models | The recently proposed Sequence-to-Sequence (seq2seq) framework advocates replacing complex data processing pipelines, such as an entire automatic speech recognition system, with a single neural network trained in an end-to-end fashion. In this contribution, we analyse an attention-based seq2seq speech recognition system that directly transcribes recordings into characters. We observe two shortcomings: overconfidence in its predictions and a tendency to produce incomplete transcriptions when language models are used. We propose practical solutions to both problems achieving competitive speaker independent word error rates on the Wall Street Journal dataset: without separate language models we reach 10.6% WER, while together with a trigram language model, we reach 6.7% WER. | {
"paragraphs": [
[
"Deep learning BIBREF0 has led to many breakthroughs including speech and image recognition BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 . A subfamily of deep models, the Sequence-to-Sequence (seq2seq) neural networks have proved to be very successful on complex transduction tasks, such as machine translation BIBREF7 , BIBREF8 , BIBREF9 , speech recognition BIBREF10 , BIBREF11 , BIBREF12 , and lip-reading BIBREF13 . Seq2seq networks can typically be decomposed into modules that implement stages of a data processing pipeline: an encoding module that transforms its inputs into a hidden representation, a decoding (spelling) module which emits target sequences and an attention module that computes a soft alignment between the hidden representation and the targets. Training directly maximizes the probability of observing desired outputs conditioned on the inputs. This discriminative training mode is fundamentally different from the generative \"noisy channel\" formulation used to build classical state-of-the art speech recognition systems. As such, it has benefits and limitations that are different from classical ASR systems.",
"Understanding and preventing limitations specific to seq2seq models is crucial for their successful development. Discriminative training allows seq2seq models to focus on the most informative features. However, it also increases the risk of overfitting to those few distinguishing characteristics. We have observed that seq2seq models often yield very sharp predictions, and only a few hypotheses need to be considered to find the most likely transcription of a given utterance. However, high confidence reduces the diversity of transcripts obtained using beam search.",
"During typical training the models are conditioned on ground truth transcripts and are scored on one-step ahead predictions. By itself, this training criterion does not ensure that all relevant fragments of the input utterance are transcribed. Subsequently, mistakes that are introduced during decoding may cause the model to skip some words and jump to another place in the recording. The problem of incomplete transcripts is especially apparent when external language models are used."
],
[
"Our speech recognition system, builds on the recently proposed Listen, Attend and Spell network BIBREF12 . It is an attention-based seq2seq model that is able to directly transcribe an audio recording INLINEFORM0 into a space-delimited sequence of characters INLINEFORM1 . Similarly to other seq2seq neural networks, it uses an encoder-decoder architecture composed of three parts: a listener module tasked with acoustic modeling, a speller module tasked with emitting characters and an attention module serving as the intermediary between the speller and the listener: DISPLAYFORM0 "
],
[
"The listener is a multilayer Bi-LSTM network that transforms a sequence of INLINEFORM0 frames of acoustic features INLINEFORM1 into a possibly shorter sequence of hidden activations INLINEFORM2 , where INLINEFORM3 is a time reduction constant BIBREF11 , BIBREF12 ."
],
[
"The speller computes the probability of a sequence of characters conditioned on the activations of the listener. The probability is computed one character at a time, using the chain rule: DISPLAYFORM0 ",
"To emit a character the speller uses the attention mechanism to find a set of relevant activations of the listener INLINEFORM0 and summarize them into a context INLINEFORM1 . The history of previously emitted characters is encapsulated in a recurrent state INLINEFORM2 : DISPLAYFORM0 ",
" We implement the recurrent step using a single LSTM layer. The attention mechanism is sensitive to the location of frames selected during the previous step and employs the convolutional filters over the previous attention weights BIBREF10 . The output character distribution is computed using a SoftMax function."
],
[
"Our speech recognizer computes the probability of a character conditioned on the partially emitted transcript and the whole utterance. It can thus be trained to minimize the cross-entropy between the ground-truth characters and model predictions. The training loss over a single utterance is DISPLAYFORM0 ",
"where INLINEFORM0 denotes the target label function. In the baseline model INLINEFORM1 is the indicator INLINEFORM2 , i.e. its value is 1 for the correct character, and 0 otherwise. When label smoothing is used, INLINEFORM3 encodes a distribution over characters."
],
[
"Decoding new utterances amounts to finding the character sequence INLINEFORM0 that is most probable under the distribution computed by the network: DISPLAYFORM0 ",
"Due to the recurrent formulation of the speller function, the most probable transcript cannot be found exactly using the Viterbi algorithm. Instead, approximate search methods are used. Typically, best results are obtained using beam search. The search begins with the set (beam) of hypotheses containing only the empty transcript. At every step, candidate transcripts are formed by extending hypothesis in the beam by one character. The candidates are then scored using the model, and a certain number of top-scoring candidates forms the new beam. The model indicates that a transcript is considered to be finished by emitting a special EOS (end-of-sequence) token."
],
[
"The simplest solution to include a separate language model is to extend the beam search cost with a language modeling term BIBREF11 , BIBREF3 , BIBREF14 : DISPLAYFORM0 ",
"where coverage refers to a term that promotes longer transcripts described it in detail in Section SECREF16 .",
"We have identified two challenges in adding the language model. First, due to model overconfidence deviations from the best guess of the network drastically changed the term INLINEFORM0 , which made balancing the terms in eq. ( EQREF11 ) difficult. Second, incomplete transcripts were produced unless a recording coverage term was added.",
"Equation ( EQREF11 ) is a heuristic involving the multiplication of a conditional and unconditional probabilities of the transcript INLINEFORM0 . We have tried to justify it by adding an intrinsic language model suppression term INLINEFORM1 that would transform INLINEFORM2 into INLINEFORM3 . We have estimated the language modeling capability of the speller INLINEFORM4 by replacing the encoded speech with a constant, separately trained, biasing vector. The per character perplexity obtained was about 6.5 and we didn't observe consistent gains from this extension of the beam search criterion."
],
[
"We have analysed the impact of model confidence by separating its effects on model accuracy and beam search effectiveness. We also propose a practical solution to the partial transcriptions problem, relating to the coverage of the input utterance."
],
[
"Model confidence is promoted by the the cross-entropy training criterion. For the baseline network the training loss ( EQREF7 ) is minimized when the model concentrates all of its output distribution on the correct ground-truth character. This leads to very peaked probability distributions, effectively preventing the model from indicating sensible alternatives to a given character, such as its homophones. Moreover, overconfidence can harm learning the deeper layers of the network. The derivative of the loss backpropagated through the SoftMax function to the logit corresponding to character INLINEFORM0 equals INLINEFORM1 , which approaches 0 as the network's output becomes concentrated on the correct character. Therefore whenever the spelling RNN makes a good prediction, very little training signal is propagated through the attention mechanism to the listener.",
"Model overconfidence can have two consequences. First, next-step character predictions may have low accuracy due to overfitting. Second, overconfidence may impact the ability of beam search to find good solutions and to recover from errors.",
"We first investigate the impact of confidence on beam search by varying the temperature of the SoftMax function. Without retraining the model, we change the character probability distribution to depend on a temperature hyperparameter INLINEFORM0 : DISPLAYFORM0 ",
"At increased temperatures the distribution over characters becomes more uniform. However, the preferences of the model are retained and the ordering of tokens from the most to least probable is preserved. Tuning the temperature therefore allows to demonstrate the impact of model confidence on beam search, without affecting the accuracy of next step predictions.",
"Decoding results of a baseline model on the WSJ dev93 data set are presented in Figure FIGREF13 . We haven't used a language model. At high temperatures deletion errors dominated. We didn't want to change the beam search cost and instead constrained the search to emit the EOS token only when its probability was within a narrow range from the most probable token. We compare the default setting ( INLINEFORM0 ), with a sharper distribution ( INLINEFORM1 ) and smoother distributions ( INLINEFORM2 ). All strategies lead to the same greedy decoding accuracy, because temperature changes do not affect the selection of the most probable character. As temperature increases beam search finds better solutions, however care must be taken to prevent truncated transcripts."
],
[
"A elegant solution to model overconfidence was problem proposed for the Inception image recognition architecture BIBREF15 . For the purpose of computing the training cost the ground-truth label distribution is smoothed, with some fraction of the probability mass assigned to classes other than the correct one. This in turn prevents the model from learning to concentrate all probability mass on a single token. Additionally, the model receives more training signal because the error function cannot easily saturate.",
"Originally uniform label smoothing scheme was proposed in which the model is trained to assign INLINEFORM0 probability mass to he correct label, and spread the INLINEFORM1 probability mass uniformly over all classes BIBREF15 . Better results can be obtained with unigram smoothing which distributes the remaining probability mass proportionally to the marginal probability of classes BIBREF16 . In this contribution we propose a neighborhood smoothing scheme that uses the temporal structure of the transcripts: the remaining INLINEFORM2 probability mass is assigned to tokens neighboring in the transcript. Intuitively, this smoothing scheme helps the model to recover from beam search errors: the network is more likely to make mistakes that simply skip a character of the transcript.",
"We have repeated the analysis of SoftMax temperature on beam search accuracy on a network trained with neighborhood smoothing in Figure FIGREF13 . We can observe two effects. First, the model is regularized and greedy decoding leads to nearly 3 percentage smaller error rate. Second, the entropy of network predictions is higher, allowing beam search to discover good solutions without the need for temperature control. Moreover, the since model is trained and evaluated with INLINEFORM0 we didn't have to control the emission of EOS token."
],
[
"When a language model is used wide beam searches often yield incomplete transcripts. With narrow beams, the problem is less visible due to implicit hypothesis pruning. We illustrate a failed decoding in Table TABREF17 . The ground truth (first row) is the least probable transcript according both to the network and the language model. A width 100 beam search with a trigram language model finds the second transcript, which misses the beginning of the utterance. The last rows demonstrate severely incomplete transcriptions that may be discovered when decoding is performed with even wider beam sizes.",
"We compare three strategies designed to prevent incomplete transcripts. The first strategy doesn't change the beam search criterion, but forbids emitting the EOS token unless its probability is within a set range of that of the most probable token. This strategy prevents truncations, but is inefficient against omissions in the middle of the transcript, such as the failure shown in Table TABREF17 . Alternatively, beam search criterion can be extended to promote long transcripts. A term depending on the transcript length was proposed for both CTC BIBREF3 and seq2seq BIBREF11 networks, but its usage was reported to be difficult because beam search was looping over parts of the recording and additional constraints were needed BIBREF11 . To prevent looping we propose to use a coverage term that counts the number of frames that have received a cumulative attention greater than INLINEFORM0 : DISPLAYFORM0 ",
"The coverage criterion prevents looping over the utterance because once the cumulative attention bypasses the threshold INLINEFORM0 a frame is counted as selected and subsequent selections of this frame do not reduce the decoding cost. In our implementation, the coverage is recomputed at each beam search iteration using all attention weights produced up to this step.",
"In Figure FIGREF19 we compare the effects of the three methods when decoding a network that uses label smoothing and a trigram language model. Unlike BIBREF11 we didn't experience looping when beam search promoted transcript length. We hypothesize that label smoothing increases the cost of correct character emissions which helps balancing all terms used by beam search. We observe that at large beam widths constraining EOS emissions is not sufficient. In contrast, both promoting coverage and transcript length yield improvements with increasing beams. However, simply maximizing transcript length yields more word insertion errors and achieves an overall worse WER."
],
[
"We conducted all experiments on the Wall Street Journal dataset, training on si284, validating on dev93 and evaluating on eval92 set. The models were trained on 80-dimensional mel-scale filterbanks extracted every 10ms form 25ms windows, extended with their temporal first and second order differences and per-speaker mean and variance normalization. Our character set consisted of lowercase letters, the space, the apostrophe, a noise marker, and start- and end- of sequence tokens. For comparison with previously published results, experiments involving language models used an extended-vocabulary trigram language model built by the Kaldi WSJ s5 recipe BIBREF17 . We have use the FST framework to compose the language model with a \"spelling lexicon\" BIBREF5 , BIBREF11 , BIBREF18 . All models were implemented using the Tensorflow framework BIBREF19 .",
"Our base configuration implemented the Listener using 4 bidirectional LSTM layers of 256 units per direction (512 total), interleaved with 3 time-pooling layers which resulted in an 8-fold reduction of the input sequence length, approximately equating the length of hidden activations to the number of characters in the transcript. The Speller was a single LSTM layer with 256 units. Input characters were embedded into 30 dimensions. The attention MLP used 128 hidden units, previous attention weights were accessed using 3 convolutional filters spanning 100 frames. LSTM weights were initialized uniformly over the range INLINEFORM0 . Networks were trained using 8 asynchronous replica workers each employing the ADAM algorithm BIBREF20 with default parameters and the learning rate set initially to INLINEFORM1 , then reduced to INLINEFORM2 and INLINEFORM3 after 400k and 500k training steps, respectively. Static Gaussian weight noise with standard deviation 0.075 was applied to all weight matrices after 20000 training steps. We have also used a small weight decay of INLINEFORM4 .",
"We have compared two label smoothing methods: unigram smoothing BIBREF16 with the probability of the correct label set to INLINEFORM0 and neighborhood smoothing with the probability of correct token set to INLINEFORM1 and the remaining probability mass distributed symmetrically over neighbors at distance INLINEFORM2 and INLINEFORM3 with a INLINEFORM4 ratio. We have tuned the smoothing parameters with a small grid search and have found that good results can be obtained for a broad range of settings.",
"We have gathered results obtained without language models in Table TABREF20 . We have used a beam size of 10 and no mechanism to promote longer sequences. We report averages of two runs taken at the epoch with the lowest validation WER. Label smoothing brings a large error rate reduction, nearly matching the performance achieved with very deep and sophisticated encoders BIBREF21 .",
"Table TABREF21 gathers results that use the extended trigram language model. We report averages of two runs. For each run we have tuned beam search parameters on the validation set and applied them on the test set. A typical setup used beam width 200, language model weight INLINEFORM0 , coverage weight INLINEFORM1 and coverage threshold INLINEFORM2 . Our best result surpasses CTC-based networks BIBREF5 and matches the results of a DNN-HMM and CTC ensemble BIBREF22 ."
],
[
"Label smoothing was proposed as an efficient regularizer for the Inception architecture BIBREF15 . Several improved smoothing schemes were proposed, including sampling erroneous labels instead of using a fixed distribution BIBREF24 , using the marginal label probabilities BIBREF16 , or using early errors of the model BIBREF25 . Smoothing techniques increase the entropy of a model's predictions, a technique that was used to promote exploration in reinforcement learning BIBREF26 , BIBREF27 , BIBREF28 . Label smoothing prevents saturating the SoftMax nonlinearity and results in better gradient flow to lower layers of the network BIBREF15 . A similar concept, in which training targets were set slightly below the range of the output nonlinearity was proposed in BIBREF29 .",
"Our seq2seq networks are locally normalized, i.e. the speller produces a probability distribution at every step. Alternatively normalization can be performed globally on whole transcripts. In discriminative training of classical ASR systems normalization is performed over lattices BIBREF30 . In the case of recurrent networks lattices are replaced by beam search results. Global normalization has yielded important benefits on many NLP tasks including parsing and translation BIBREF31 , BIBREF32 . Global normalization is expensive, because each training step requires running beam search inference. It remains to be established whether globally normalized models can be approximated by cheaper to train locally normalized models with proper regularization such as label smoothing.",
"Using source coverage vectors has been investigated in neural machine translation models. Past attentions vectors were used as auxiliary inputs in the emitting RNN either directly BIBREF33 , or as cumulative coverage information BIBREF34 . Coverage embeddings vectors associated with source words end modified during training were proposed in BIBREF35 . Our solution that employs a coverage penalty at decode time only is most similar to the one used by the Google Translation system BIBREF9 ."
],
[
"We have demonstrated that with efficient regularization and careful decoding the sequence-to-sequence approach to speech recognition can be competitive with other non-HMM techniques, such as CTC."
]
],
"section_name": [
"Introduction",
"Model Description",
"The Listener",
"The Speller and the Attention Mechanism",
"Training Criterion",
"Decoding: Beam Search",
"Language Model Integration",
"Solutions to Seq2Seq Failure Modes",
"Impact of Model Overconfidence",
"Label Smoothing Prevents Overconfidence",
"Solutions to Partial Transcripts Problem",
"Experiments",
"Related Work",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"1eede96d33534ed67dde5406c2dbce17ff4a778a",
"fed5916c556a02d4a391df7c2fe1543a196de83c"
],
"answer": [
{
"evidence": [
"To emit a character the speller uses the attention mechanism to find a set of relevant activations of the listener INLINEFORM0 and summarize them into a context INLINEFORM1 . The history of previously emitted characters is encapsulated in a recurrent state INLINEFORM2 : DISPLAYFORM0"
],
"extractive_spans": [
"find a set of relevant activations of the listener INLINEFORM0 and summarize them into a context"
],
"free_form_answer": "",
"highlighted_evidence": [
"To emit a character the speller uses the attention mechanism to find a set of relevant activations of the listener INLINEFORM0 and summarize them into a context INLINEFORM1 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"b2f3d437617675cb0a3d2afc2f6e9f30f8d9816e",
"f394972529fc168d50c16f5cb21bd7ce73d7116c"
],
"answer": [
{
"evidence": [
"We compare three strategies designed to prevent incomplete transcripts. The first strategy doesn't change the beam search criterion, but forbids emitting the EOS token unless its probability is within a set range of that of the most probable token. This strategy prevents truncations, but is inefficient against omissions in the middle of the transcript, such as the failure shown in Table TABREF17 . Alternatively, beam search criterion can be extended to promote long transcripts. A term depending on the transcript length was proposed for both CTC BIBREF3 and seq2seq BIBREF11 networks, but its usage was reported to be difficult because beam search was looping over parts of the recording and additional constraints were needed BIBREF11 . To prevent looping we propose to use a coverage term that counts the number of frames that have received a cumulative attention greater than INLINEFORM0 : DISPLAYFORM0",
"The coverage criterion prevents looping over the utterance because once the cumulative attention bypasses the threshold INLINEFORM0 a frame is counted as selected and subsequent selections of this frame do not reduce the decoding cost. In our implementation, the coverage is recomputed at each beam search iteration using all attention weights produced up to this step.",
"Label Smoothing Prevents Overconfidence",
"A elegant solution to model overconfidence was problem proposed for the Inception image recognition architecture BIBREF15 . For the purpose of computing the training cost the ground-truth label distribution is smoothed, with some fraction of the probability mass assigned to classes other than the correct one. This in turn prevents the model from learning to concentrate all probability mass on a single token. Additionally, the model receives more training signal because the error function cannot easily saturate."
],
"extractive_spans": [
"forbids emitting the EOS token",
"beam search criterion can be extended to promote long transcripts",
"coverage criterion prevents looping over the utterance",
"ground-truth label distribution is smoothed"
],
"free_form_answer": "",
"highlighted_evidence": [
"We compare three strategies designed to prevent incomplete transcripts. The first strategy doesn't change the beam search criterion, but forbids emitting the EOS token unless its probability is within a set range of that of the most probable token. This strategy prevents truncations, but is inefficient against omissions in the middle of the transcript, such as the failure shown in Table TABREF17 . Alternatively, beam search criterion can be extended to promote long transcripts.",
"The coverage criterion prevents looping over the utterance because once the cumulative attention bypasses the threshold INLINEFORM0 a frame is counted as selected and subsequent selections of this frame do not reduce the decoding cost.",
"Label Smoothing Prevents Overconfidence\nA elegant solution to model overconfidence was problem proposed for the Inception image recognition architecture BIBREF15 . For the purpose of computing the training cost the ground-truth label distribution is smoothed, with some fraction of the probability mass assigned to classes other than the correct one."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"A elegant solution to model overconfidence was problem proposed for the Inception image recognition architecture BIBREF15 . For the purpose of computing the training cost the ground-truth label distribution is smoothed, with some fraction of the probability mass assigned to classes other than the correct one. This in turn prevents the model from learning to concentrate all probability mass on a single token. Additionally, the model receives more training signal because the error function cannot easily saturate.",
"We compare three strategies designed to prevent incomplete transcripts. The first strategy doesn't change the beam search criterion, but forbids emitting the EOS token unless its probability is within a set range of that of the most probable token. This strategy prevents truncations, but is inefficient against omissions in the middle of the transcript, such as the failure shown in Table TABREF17 . Alternatively, beam search criterion can be extended to promote long transcripts. A term depending on the transcript length was proposed for both CTC BIBREF3 and seq2seq BIBREF11 networks, but its usage was reported to be difficult because beam search was looping over parts of the recording and additional constraints were needed BIBREF11 . To prevent looping we propose to use a coverage term that counts the number of frames that have received a cumulative attention greater than INLINEFORM0 : DISPLAYFORM0"
],
"extractive_spans": [],
"free_form_answer": "label smoothing, use of coverage",
"highlighted_evidence": [
"A elegant solution to model overconfidence was problem proposed for the Inception image recognition architecture BIBREF15 . For the purpose of computing the training cost the ground-truth label distribution is smoothed, with some fraction of the probability mass assigned to classes other than the correct one. ",
"To prevent looping we propose to use a coverage term that counts the number of frames that have received a cumulative attention greater than INLINEFORM0 : DISPLAYFORM0"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"five",
"five"
],
"paper_read": [
"no",
"no"
],
"question": [
"What type of attention is used in the recognition system?",
"What are the solutions proposed for the seq2seq shortcomings?"
],
"question_id": [
"bb7c80ab28c2aebfdd0bd90b22a55dbdf3a8ed5b",
"6c4e1a1ccc0c5c48115864a6928385c248f4d8ad"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Influence of beam width and SoftMax temperature on decoding accuracy. In the baseline case (no label smoothing) increasing the temperature reduces the error rate. When label smoothing is used the next-character prediction improves, as witnessed by WER for beam size=1, and tuning the temperature does not bring additional benefits.",
"Figure 2: Impact of using techniques that prevent incomplete transcripts when a trigram language models is used on the dev93 WSJ subset. Results are averaged across two networks",
"Table 1: Example of model failure on validation ’4k0c030n’",
"Table 3: Results withextended trigram language model on WSJ.",
"Table 2: Results without separate language model on WSJ."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"3-Table1-1.png",
"4-Table3-1.png",
"4-Table2-1.png"
]
} | [
"What are the solutions proposed for the seq2seq shortcomings?"
] | [
[
"1612.02695-Solutions to Partial Transcripts Problem-2",
"1612.02695-Label Smoothing Prevents Overconfidence-0"
]
] | [
"label smoothing, use of coverage"
] | 377 |
2002.04745 | On Layer Normalization in the Transformer Architecture | The Transformer is widely used in natural language processing tasks. To train a Transformer however, one usually needs a carefully designed learning rate warm-up stage, which is shown to be crucial to the final performance but will slow down the optimization and bring more hyper-parameter tunings. In this paper, we first study theoretically why the learning rate warm-up stage is essential and show that the location of layer normalization matters. Specifically, we prove with mean field theory that at initialization, for the original-designed Post-LN Transformer, which places the layer normalization between the residual blocks, the expected gradients of the parameters near the output layer are large. Therefore, using a large learning rate on those gradients makes the training unstable. The warm-up stage is practically helpful for avoiding this problem. On the other hand, our theory also shows that if the layer normalization is put inside the residual blocks (recently proposed as Pre-LN Transformer), the gradients are well-behaved at initialization. This motivates us to remove the warm-up stage for the training of Pre-LN Transformers. We show in our experiments that Pre-LN Transformers without the warm-up stage can reach comparable results with baselines while requiring significantly less training time and hyper-parameter tuning on a wide range of applications. | {
"paragraphs": [
[
"The Transformer BIBREF0 is one of the most commonly used neural network architectures in natural language processing. Layer normalization BIBREF1 plays a key role in Transformer's success. The originally designed Transformer places the layer normalization between the residual blocks, which is usually referred to as the Transformer with Post-Layer Normalization (Post-LN) BIBREF2. This architecture has achieved state-of-the-art performance in many tasks including language modeling BIBREF3, BIBREF4 and machine translation BIBREF5, BIBREF6. Unsupervised pre-trained models based on the Post-LN Transformer architecture also show impressive performance in many downstream tasks BIBREF7, BIBREF8, BIBREF9.",
"Despite its great success, people usually need to deal with the optimization of the Post-LN Transformer more carefully than convolutional networks or other sequence-to-sequence models BIBREF10. In particular, to train the model from scratch, any gradient-based optimization approach requires a learning rate warm-up stage BIBREF0, BIBREF11: the optimization starts with an extremely small learning rate, and then gradually increases it to a pre-defined maximum value in a pre-defined number of iterations. Such a warm-up stage not only slows down the optimization process but also brings more hyperparameter tunings. BIBREF10 has shown that the final model performance is quite sensitive to the value of the maximum learning rate and the number of warm-up iterations. Tuning such sensitive hyper-parameters is costly in training large-scale models, e.g., BERT BIBREF8 or XLNet BIBREF9.",
"In this paper, we try to alleviate this problem by finding ways to safely remove the learning rate warm-up stage. As the warm-up stage happens in the first several iterations, we investigate the optimization behavior at initialization using mean field theory BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17. According to our theoretical analysis, when putting the layer normalization between the residual blocks, the expected gradients of the parameters near the output layer are large. Therefore, without the warm-up stage, directly using a large learning rate to those parameters can make the optimization process unstable. Using a warm-up stage and training the model with small learning rates practically avoid this problem. Extensive experiments are provided to support our theoretical findings.",
"Our theory also shows that the layer normalization plays a crucial role in controlling the gradient scales. This motivates us to investigate whether there are some other ways of positioning the layer normalization that lead to well-behaved gradients. In particular, we study another variant, the Transformer with Pre-Layer Normalization (Pre-LN) BIBREF18, BIBREF19, BIBREF2. The Pre-LN Transformer puts the layer normalization inside the residual connection and equips with an additional final-layer normalization before prediction (Please see Figure SECREF1 for the differences between the two variants of the Transformer architectures). We show that at initialization, the gradients are well-behaved without any exploding or vanishing for the Pre-LN Transformer both theoretically and empirically.",
"Given the gradients are well-behaved in the Pre-LN Transformer, it is natural to consider removing the learning rate warm-up stage during training. We conduct a variety of experiments, including IWSLT14 German-English translation, WMT14 English-German translation, and BERT pre-training tasks. We show that, in all tasks, the learning rate warm-up stage can be safely removed, and thus, the number of hyper-parameter is reduced. Furthermore, we observe that the loss decays faster for the Pre-LN Transformer model. It can achieve comparable final performances but use much less training time. This is particularly important for training large-scale models on large-scale datasets.",
"Our contributions are summarized as follows:",
"$\\bullet $ We investigate two Transformer variants, the Post-LN Transformer and the Pre-LN Transformer, using mean field theory. By studying the gradients at initialization, we provide evidence to show why the learning rate warm-up stage is essential in training the Post-LN Transformer.",
"$\\bullet $ We are the first to show that the learning-rate warm-up stage can be removed for the Pre-LN Transformer, which eases the hyperparameter tuning. We further show that by using proper learning rate schedulers, the training time can be largely reduced on a wide range of applications."
],
[
"Gradient descent-based methods BIBREF20, BIBREF21, BIBREF22, BIBREF23 are popularly used in optimizing deep neural networks. For convolutional neural networks and recurrent neural networks, a relatively large learning rate is usually set in the beginning, and then decreased along with the optimization process BIBREF24, BIBREF25, BIBREF26, BIBREF27, BIBREF28. The learning rate warm-up stage has only been shown essential in dealing with some very specific problems, e.g., the large-batch training. BIBREF29, BIBREF28, BIBREF30 showed that a learning rate warm-up stage is preferred when training neural networks with extremely large batch sizes.",
"However, the learning rate warm-up stage is essential and critical when optimizing the Transformer models in a majority of scenarios BIBREF0, BIBREF8, BIBREF3, BIBREF7, BIBREF31. BIBREF10 investigated the influence of different warm-up strategies for the optimization of the Post-LN Transformer model and found that without or with relatively less warm-up iterations, the optimization diverges. The Pre-LN Transformer has been proposed in several recent works BIBREF18, BIBREF19, BIBREF2 to alleviate some optimization issues when training deeper models, but the troublesome warm-up stage still remains in their training pipelines.",
"In a concurrent and independent work BIBREF11, the authors claimed that the benefit of the warm-up stage comes from reducing the variance for the adaptive learning rate in the Adam optimizer BIBREF20. They proposed to rectify the variance of adaptive learning rate by a new variant of Adam called RAdam. However, we find that not only for Adam, the learning rate warm-up stage also helps quite a lot for other optimizers. This may indicate that Adam is not the prerequisite for the necessity of the warm-up stage. Therefore, we re-identify the problem and find that it highly relates to the architecture, in particular, the position of layer normalization."
],
[
"The Transformer architecture usually consists of stacked Transformer layers BIBREF0, BIBREF8, each of which takes a sequence of vectors as input and outputs a new sequence of vectors with the same shape. A Transformer layer has two sub-layers: the (multi-head) self-attention sub-layer and the position-wise feed-forward network sub-layer. Residual connection BIBREF24 and layer normalization BIBREF1 are applied for both sub-layers individually. We first introduce each component of the Transformer layer and then present the entire architecture."
],
[
"An attention function can be formulated as querying an entry with key-value pairs BIBREF0. The self-attention sub-layer uses scaled dot-product attention, which is defined as: $\\text{Attention}(Q,K,V)=\\text{softmax}(\\frac{QK^T}{\\sqrt{d}})V$, where $d$ is the dimensionality of the hidden representations, and $Q$ (Query), $K$ (Key), $V$ (Value) are specified as the hidden representations of the previous layer. The multi-head variant of the self-attention sub-layer is popularly used which allows the model to jointly attend to information from different representation sub-spaces, and is defined as",
"where $W_k^Q\\in \\mathbb {R}^{d \\times d_K}, W_k^K\\in \\mathbb {R}^{d \\times d_K}, W_k^V\\in \\mathbb {R}^{d\\times d_V},$ and $W^O\\in \\mathbb {R}^{Hd_V \\times d}$ are project parameter matrices, $H$ is the number of heads. $d_K$ and $d_V$ are the dimensionalities of Key and Value. Without any confusion, given a sequence of vectors $(x_1,...,x_n)$, we use $\\text{MultiHeadAtt}(x_{i},[ x_{ 1}, x_{2},\\cdots , x_{n}])$ as the multi-head self-attention mechanism on position $i$ which considers the attention from $x_i$ to the entire sequence, i.e., $\\text{MultiHeadAtt}(x_{i},[ x_{ 1}, x_{2},\\cdots , x_{n}])=\\text{Multi-head}(x_i, [x_1, \\dots , x_n], [x_1, \\dots , x_n])$."
],
[
"In addition to the self-attention sub-layer, each Transformer layer contains a fully connected network, which is applied to each position separately and identically. This sub-layer is a two-layer feed-forward network with a ReLU activation function. Given a sequence of vectors $h_1, ..., h_n$, the computation of a position-wise FFN sub-layer on any $h_i$ is defined as:",
"where $W^1$, $W^2$, $b^1$ and $b^2$ are parameters."
],
[
"Besides the two sub-layers described above, the residual connection and layer normalization are also key components to the Transformer. For any vector $v$, the layer normalization is computed as $\\text{LayerNorm}(v) = \\gamma \\frac{v - \\mu }{\\sigma } + \\beta $, in which $\\mu , \\sigma $ are the mean and standard deviation of the elements in $v$, i.e., $\\mu = \\frac{1}{d} \\sum _{k=1}^d v_{k}$ and $\\sigma ^2 = \\frac{1}{d}\\sum _{k=1}^d(v_{k} -\\mu )^2$. Scale $\\gamma $ and bias vector $\\beta $ are parameters.",
"Different orders of the sub-layers, residual connection and layer normalization in a Transformer layer lead to variants of Transformer architectures. One of the original and most popularly used architecture for the Transformer and BERT BIBREF0, BIBREF8 follows “self-attention (FFN) sub-layer $\\rightarrow $ residual connection $\\rightarrow $ layer normalization”, which we call the Transformer with Post-Layer normalization (Post-LN Transformer), as illustrated in Figure SECREF1."
],
[
"Denote $x_{l, i}$ as the input of the $l$-th Transformer layer at position $i$, where $x_{l,i}$ is a real-valued vector of dimension $d$, $i=1,2,...,n$, $l=1,2,...,L$. $n$ is the length of the sequence and $L$ is the number of layers. For completeness, we define $x_{0,i}$ as the input embedding at position $i$ which is usually a combination of word embedding and positional embedding. The computations inside the $l$-th layer are composed of several steps, and we use super-scripts on $x$ to present the input(output) of different steps as in Table 1 (left), where $W^{1,l}$, $W^{2,l}$, $b^{1,l}$ and $b^{2,l}$ are parameters of the FFN sub-layer in the $l$-th layer."
],
[
"We are interested in the learning rate warm-up stage in the optimization of the Post-LN Transformer. Different from the optimization of many other architectures in which the learning rate starts from a relatively large value and then decays BIBREF32, BIBREF33, a learning rate warm-up stage for the Post-LN Transformer seems critical BIBREF10. We denote the learning rate of the $t$-th iteration as $\\text{lr}(t)$ and the maximum learning rate during training as $\\text{lr}_{max}$. Given a predefined time frame $T_{\\text{warmup}}$, the learning rate scheduler for the first $T_{\\text{warmup}}$ iterations BIBREF34 is defined as",
"After this warm-up stage, the learning rate will be set by classical learning rate schedulers, such as the linear decay, the inverse square-root decay, or forced decay at particular iterations. We conduct experiments to show that this learning rate warm-up stage is essential for training Post-LN Transformer models."
],
[
"We conduct experiments on the IWSLT14 German-to-English (De-En) machine translation task. We mainly investigate two aspects: whether the learning rate warm-up stage is essential and whether the final model performance is sensitive to the value of $T_{\\text{warmup}}$. To study the first aspect, we train the model with the Adam optimizer BIBREF20 and the vanilla SGD optimizer BIBREF35 respectively. For both optimziers, we check whether the warm-up stage can be removed. We follow BIBREF0 to set hyper-parameter $\\beta $ to be $(0.9,0.98)$ in Adam. We also test different $\\text{lr}_{max}$ for both optimizers. For Adam, we set $\\text{lr}_{max}=5e^{-4}$ or $1e^{-3}$, and for SGD, we set $\\text{lr}_{max}=5e^{-3}$ or $1e^{-3}$. When the warm-up stage is used, we set $T_{\\text{warmup}}=4000$ as suggested by the original paper BIBREF0. To study the second aspect, we set $T_{\\text{warmup}}$ to be 1/500/4000 (“1” refers to the no warm-up setting) and use $\\text{lr}_{max}=5e^{-4}$ or $1e^{-3}$ with Adam. For all experiments, a same inverse square root learning rate scheduler is used after the warm-up stage. We use both validation loss and BLEU BIBREF36 as the evaluation measure of the model performance. All other details can be found in the supplementary material."
],
[
"We record the model checkpoints for every epoch during training and calculate the validation loss and BLEU score. The performance of the models are plotted in Figure FIGREF13 and Figure FIGREF14. The x-axis is the epoch number and the y-axis is the BLEU score/validation loss. \"w/o warm-up\" indicates “without the warm-up stage” while \"w/ warm-up\" indicates “with the warm-up stage”.",
"First, we can see that for both optimizers, the learning rate warm-up stage is essential. Without the warm-up stage, the BLEU score of the model trained with Adam optimizer can only achieve 8.45. As a comparison, the model trained using the warm-up stage can achieve around 34 in terms of BLEU score. The same trend can also be observed on the validation loss curves. Although the performance of the model trained with SGD is significantly worse than Adam, we can still see similar phenomena as Adam. The BLEU score is just above zero in 15 epochs without using the warm-up stage.",
"Second, we can see that the optimization process is sensitive to the value of $T_{\\text{warmup}}$, which means $T_{\\text{warmup}}$ is an important hyper-parameter in training the Post-LN Transformer. For example, when setting $T_{\\text{warmup}}=500$, the learned models with Adam achieve only 31.16 and 2.77 in term of BLEU score for $lr_{max}=5e^{-4}$ and $1e^{-3}$ respectively.",
"Such a warm-up stage has several disadvantages. First, its configuration significantly affects the final performance. The practitioners need a careful hyper-parameter tuning, which is computationally expensive for large-scale NLP tasks. Second, the warm-up stage could slow down the optimization. Standard optimization algorithms usually start with a large learning rate for fast convergence. However, when using the warm-up stage, the learning rate has to gradually increase from zero, which may make the training inefficient. BIBREF11 suggests that the warm-up stage plays a role in reducing the undesirably significant variance in Adam in the early stage of model training. However, according to our results, the warm-up stage also helps the training of SGD. This suggests that the benefit of the warm-up stage may be not for a particular optimizer."
],
[
"We can see that the Post-LN Transformer cannot be trained with a large learning rate from scratch. This motivates us to investigate what happens at the model initialization. We first introduce the parameter initialization setting for our theoretical analysis and then present our theoretical findings."
],
[
"We denote $\\mathcal {L}(\\cdot )$ as the loss function of one position, $\\tilde{\\mathcal {L}}(\\cdot )$ as the loss function of the whole sequence, $\\Vert \\cdot \\Vert _2$ and $\\Vert \\cdot \\Vert _F$ as the $l_2$ norm (spectral norm) and the Frobenius norm, $\\text{LN}(x)$ as the standard layer normalization with scale $\\gamma =1$ and bias $\\beta =0$, and $\\mathbf {J}_{LN}(x)=\\frac{\\partial \\text{LN}(x)}{\\partial x}$ as the Jacobian matrix of $\\text{LN}(x)$. Let $\\mathcal {O}(\\cdot )$ denote standard Big-O notation that suppress multiplicative constants."
],
[
"The parameter matrices in each Transformer layer are usually initialized by the Xavier initialization BIBREF37. Given a matrix of size $n_{in}\\times n_{out}$, the Xavier initialization sets the value of each element by independently sampling from Gaussian distribution $N(0, \\frac{2}{n_{in}+n_{out}})$. The bias vectors are usually initialized as zero vectors. The scale $\\gamma $ in the layer normalization is set to one.",
"For theoretical analysis, we study a simpler setting. First, we focus on single-head attention instead of the multi-head variant and for all layers, we set the shape of $W^{Q,l}$, $W^{K,l}$, $W^{V,l}$, $W^{1,l}$,$W^{2,l}$ to be $d\\times d$. Second, we initialize the parameter matrices in the self-attention sub-layer $W^{Q,l}$ and $W^{K,l}$ to be zero matrices. In this setting, the attention is a uniform distribution at initialization and $\\text{MultiHeadAtt}(x_{l, i}^1,[ x_{l, 1}^1, x_{l, 2}^1,\\cdots , x_{l, n}^1])$ can be simplified as $\\frac{1}{n}\\sum _{j=1}^{n}x_{l, j} W^{V, l}$. Third, we assume the input vectors are also sampled from the same Gaussian distribution. This is reasonable since the inputs are linear combinations of word embeddings and learnable positional embeddings, both of which are initialized by Gaussian distributions."
],
[
"We compare the Post-LN Transformer with another variant of the Transformer architecture, the Transformer with Pre-Layer Normalization (Pre-LN). The Pre-LN Transformer was implemented in several systems BIBREF34, BIBREF38, BIBREF39. BIBREF2 suggested that the Pre-LN Transformer outperforms the Post-LN Transformer when the number of layers increases. Different from the Post-LN Transformer that puts the layer normalization between the residual blocks, the Pre-LN Transformer puts the layer normalization inside the residual connection and places it before all other non-linear transformations. Additionally, the Pre-LN Transformer uses a final layer normalization right before the prediction. We provide the mathematical formulations and visualizations of the Post-LN/Pre-LN Transformer in Table TABREF9 and Figure SECREF1.",
"For both architectures, each $x_{L,i}$ passes through a softmax layer to produce a distribution over the dictionary $V$. The loss function is defined on the softmax distribution. For example, in sequence prediction, the loss function is defined as $\\mathcal {L}(x_{L+1,i}^{post})=-\\log (\\text{softmax}_{y_i}(W^{emb}x_{L+1,i}^{post}))$ for the Post-LN Transformer and $\\mathcal {L}(x_{Final,i}^{pre})=-\\log (\\text{softmax}_{y_i}(W^{emb}x_{Final,i}^{pre}))$ for the Pre-LN Transformer, where $\\text{softmax}_{y_i}$ is the probability of ground truth token $y_i$ outputted by the softmax distribution and $W^{emb}$ is the word embedding matrix. The loss of the whole sequence is an average of the loss on each position. Without loss of generality, we assume that all the derivatives are bounded. We introduce the following concentration property of random variables which will be further used in the theorem.",
"Definition 1 A random variable $Z\\ge 0$ is called $(\\epsilon ,\\delta )$-bounded if with probability at least $1-\\delta $, $\\frac{Z-\\mathbb {E}Z}{\\mathbb {E}Z}\\le \\epsilon $, where $\\epsilon >0$ and $0<\\delta <1$.",
"Intuitively, if the random variable $Z$ is $(\\epsilon ,\\delta )$-bounded, then with a high probability its realization will not get too far away from its expectation. For example, if $Y$ is a $d$-dimensional standard Gaussian random vector, then $Z=\\Vert Y\\Vert _2^2$ is $(\\epsilon ,\\delta )$-bounded with $\\delta =\\exp (-d\\epsilon ^2/8)$, $0<\\epsilon <1$ (see supplementary material for details). As parameter matrices in self-attention sub-layers and FFN sub-layers are initialized by Gaussian distributions, if the norm of the hidden states in the Transformer satisfies the concentrated condition above, we have the following theorem to characterize the scale of the gradients.",
"Theorem 1 (Gradients of the last layer in the Transformer) Assume that $\\Vert x^{post,5}_{L,i}\\Vert _2^2$ and $\\Vert x^{pre}_{L+1,i}\\Vert _2^2$ are $(\\epsilon ,\\delta )$-bounded for all $i$, where $\\epsilon $ and $\\delta =\\delta (\\epsilon )$ are small numbers. Then with probability at least $0.99-\\delta -\\frac{\\epsilon }{0.9+\\epsilon }$, for the Post-LN Transformer with $L$ layers, the gradient of the parameters of the last layer satisfies",
"and for the Pre-LN Transformer with $L$ layers,",
"From Theorem 1, we can see that for the Post-LN Transformer, the scale of the gradients to the last FFN layer is of order $\\mathcal {O}(d\\sqrt{\\ln {d}})$ which is independent of $L$. For the Pre-LN Transformer, the scale of the gradients is much smaller. We first study the forward propagation of the Post-LN Transformer and the Pre-LN Transformer. Lemma 1 will be served as a basic tool to prove the main theorem and other lemmas.",
"Lemma 1 If $X\\in \\mathbb {R}^d$ is a Gaussian vector, $X\\sim N(0,\\sigma ^2 \\mathbf {I}_d)$, then $\\mathbb {E}(\\Vert \\text{ReLU}(X)\\Vert _2^2)=\\frac{1}{2}\\sigma ^2 d$.",
"Based on Lemma 1, we have the following lemma to estimate the scale of the hidden states in different layers for the Post-LN Transformer and the Pre-LN Transformer.",
"Lemma 2 At initialization, for the Post-LN Transformer, $\\mathbb {E}(\\Vert x_{l,i}^{post,5}\\Vert _2^2)=\\frac{3}{2}d$ for all $l>0$ and $i$. For the Pre-LN Transformer, $(1+\\frac{l}{2})d\\le \\mathbb {E}(\\Vert x^{pre}_{l,i}\\Vert _2^2)\\le (1+\\frac{3l}{2})d$ for all $l>0$ and $i$. Expectations are taken over the input and the randomness of initialization.",
"Lemma 2 studies the expected norm of the hidden states in both Post-LN/Pre-LN Transformer. It is obviously that in the Post-LN Transformer, the norm of $x_{l,i}^{post}$ is $\\sqrt{d}$ and thus we study the norm of $x^{post,5}_{l,i}$ instead. As we can see from Lemma 2, the scale of the hidden states in the Post-LN Transformer keeps to be the same in expectation while the scale of the hidden states in the Pre-LN Transformer grows linearly along with the depth. The next lemma shows that the scale of the hidden states highly relates to the scale of the gradient in the architectures using layer normalization.",
"Lemma 3 For $x\\in \\mathbb {R}^d$, we have $\\Vert \\mathbf {J}_{LN}(x)\\Vert _2=\\mathcal {O}(\\frac{\\sqrt{d}}{\\Vert x\\Vert _2})$ in which $\\mathbf {J}_{LN}(x)=\\frac{\\partial \\text{LN}(x)}{\\partial x}$.",
"The proof of Lemma 1, Lemma 2, Lemma 3, and Theorem 1 can be found in the supplementary material. The main idea is that the layer normalization will normalize the gradients. In the Post-LN Transformer, the scale of the inputs to the layer normalization is independent of $L$, and thus the gradients of parameters in the last layer are independent of $L$. While in the Pre-LN Transformer, the scale of the input to the final layer normalization is linear in $L$, and thus the gradients of all parameters will be normalized by $\\sqrt{L}$."
],
[
"We have provided a formal proof on the gradients of the last FFN sub-layer as above. In order to fully understand the optimization, we also make some preliminary analysis for other layers and other parameters. Our main result is that the gradient norm in the Post-LN Transformer is large for the parameters near the output and will be likely to decay as the layer index $l$ decreases. On the contrary, the gradient norm in the Pre- Transformer will be likely to stay the same for any layer $l$. All the preliminary theoretical results are provided in the supplementary material."
],
[
"As our theory is derived based on several simplifications of the problem, we conduct experiments to study whether our theoretical insights are consistent with what we observe in real scenarios. The general model and training configuration exactly follow Section 3.2. The experiments are repeated ten times using different random seeds."
],
[
"Given an initialized model, we record the hidden states in the Post-LN/Pre-LN Transformer across batches and find that the norm of the hidden states satisfies the property ((0.1,0.125)-bounded)."
],
[
"Theorem 1 suggests that for any sizes of the Post-LN Transformer, the scale of the gradient norm in the last FFN sub-layer remains the same. On the contrary, that of the Pre-LN Transformer decreases as the size of the model grows. We calculate and record the gradient norm in the last FFN sub-layer in 6-6/8-8/10-10/12-12/14-14 Post-LN/Pre-LN Transformer models at initialization. The results are plotted in Figure FIGREF27 and FIGREF28. The x-axis is the size of the model, and the y-axis is the value of the gradient norm of $W^2$ in the final FFN sub-layer. The figures show when the number of layers grows, the gradient norm remains in the Post-LN Transformer (around 1.6) and decreases in the Pre-LN Transformer. This observation is consistent with our theory."
],
[
"We calculate the gradient norm of each paramter matrix in 6-6 Post-LN/Pre-LN Transformer. We record the gradient for each parameter for different mini-batches. For elements in a parameter matrix, we calculate their expected gradients and use the Frobenius norm of those values as the scale of the expected gradient of the matrix. Figure FIGREF25 and FIGREF26 shows those statistics for FFN sub-layers. The x-axis indexes different Transformer layers. It can be seen from the figure, the scale of the expected gradients grows along with the layer index for the Post-LN Transformer. On the contrary, the scale almost keeps the same for different layers in the Pre-LN Transformer. These observations are consistent with our theoretical findings."
],
[
"Given the analysis above, we hypothesize that the gradient scale is one of the reasons that the Post-LN Transformer needs a careful learning rate scheduling. Since the gradients are large for some layers, using a large learning rate without warm-up may make the training unstable.",
"To verify this argument, first, we study the gradient statistics for the Post-LN Transformer after the warm-up stage with Adam. It can be seen from Figure FIGREF25 and FIGREF26 that the scale of the gradients are very small, and the model can be trained with large learning rates. Second, we conduct an experiment to train the Post-LN Transformer from scratch using a fixed small learning rate, i.e., $1e^{-4}$, to verify whether using small-step updates mitigates the issue. The details are provided in the supplementary material. In general, using a very small and fixed learning rate can mitigate the problem and optimize the Post-LN Transformer to a certain extent but the convergence is significantly slower. Both experiments above are supportive to our claim."
],
[
"We find in the previous section that the gradients at initialization for Pre-LN Transformer are well-behaved. Given this observation, we deduce that the learning rate warm-up stage can be safely removed when training Pre-LN Transformer. In this section, we empirically verify it on two main tasks in NLP, machine translation and unsupervised pre-training."
],
[
"We conduct our experiments on two widely used tasks: the IWSLT14 German-to-English (De-En) task and the WMT14 English-to-German (En-De) task. For the IWSLT14 De-En task, we use the same model configuration as in Section 3. For the WMT14 En-De task, we use the Transformer base setting. More details can be found in the supplementary material.",
"For training the Pre-LN Transformer, we remove the learning rate warm-up stage. On the IWSLT14 De-En task, we set the initial learning rate to be $5e^{-4}$ and decay the learning rate at the 8-th epoch by 0.1. On the WMT14 En-De task, we run two experiments in which the initial learning rates are set to be $7e^{-4}/1.5e^{-3}$ respectively. Both learning rates are decayed at the 6-th epoch followed by the inverse square root learning rate scheduler.",
"We train the Post-LN Transformer using the learning rate warm-up stage as the baseline. In both IWSLT14 De-En task and WMT14 En-De task, we set the number of the warm-up stage to be 4000 following BIBREF0 and then use the inverse square root learning rate scheduler. For all experiments above, we use the Adam optimizer and set the hyper-parameter $\\beta $ to be $(0.9,0.98)$. We set $lr_{max}$ as same as the initial learning rates of the Pre-LN Transformer in each corresponding experiment. Since BIBREF11 suggests that the learning rate warm-up stage can be removed using RAdam, we try this optimizer on the IWSLT14 De-En task. We use linear learning rate decay suggested by BIBREF11 and keep all other hyper-parameters to be the same as in other experiments."
],
[
"We follow BIBREF8 to use English Wikipedia corpus and BookCorpus for pre-training. As the dataset BookCorpus BIBREF40 is no longer freely distributed. We follow the suggestions from BIBREF8 to crawl and collect BookCorpus on our own. The concatenation of two datasets contains roughly 3.4B words in total, which is comparable with the data corpus used in BIBREF8. We randomly split documents into one training set and one validation set. The training-validation ratio for pre-training is 199:1.",
"We use base model configuration in our experiments. Similar to the translation task, we train the Pre-LN BERT without the warm-up stage and compare it with the Post-LN BERT. We follow the same hyper-parameter configuration in BIBREF8 to train the Post-LN BERT using 10k warm-up steps with $\\text{lr}_{max}=1e^{-4}$. For the Pre-LN BERT, we use linear learning rate decay starting from $3e^{-4}$ without the warm-up stage. We have tried to use a larger learning rate (such as $3e^{-4}$) for the Post-LN BERT but found the optimization diverged."
],
[
"We record the model checkpoints for every epoch during training and calculate the validation loss and BLEU score. The performance of the models at different checkpoints are plotted in Figure FIGREF41 - FIGREF44.",
"First, as we can see from the figure, the learning rate warm-up stage is not critical anymore for training the Pre-LN Transformer and the performance of the learned model is competitive. For example, on the IWSLT14 De-En task, the BLEU score and validation loss of the Pre-LN Transformer can achieve around 34 and 4, which are comparable with the performance of the Post-LN Transformer.",
"Second, the Pre-LN Transformer converges faster than the Post-LN Transformer using the same $\\text{lr}_{max}$. On the IWSLT14 De-En task, the 9-th checkpoint of the Pre-LN Transformer achieves nearly the same performance (validation loss/BLEU score) as 15-th checkpoint of the Post-LN Transformer. Similar observations can be found in the WMT14 En-De task. Third, compared with RAdam, we find that the change of the position of layer normalization “dominates” the change of the optimizer. According to our experiments on the IWSLT14 De-En task, we can see that although RAdam trains the Post-LN Transformer well without the warm-up stage, it has little difference with Adam when training the Pre-LN Transformer."
],
[
"We record validation loss of the model checkpoints and plot them in Figure FIGREF47. Similar to the machine translation tasks, the learning rate warm-up stage can be removed for the Pre-LN model. The Pre-LN model can be trained faster. For example, the Post-LN model achieves 1.69 validation loss at 500k updates while the Pre-LN model achieves similar validation loss at 700k updates, which suggests there is a 40% speed-up rate. Note that $T_{warmup}$ (10k) is far less than the acceleration (200k) which suggests the Pre-LN Transformer is easier to optimize using larger learning rates. We also evaluate different model checkpoints on the downstream task MRPC and RTE (more details can be found in the supplementary material). The experiments results are plotted in Figure FIGREF48 and FIGREF49. We can see that the Pre-LN model also converges faster on the downstream tasks.",
"As a summary, all the experiments on different tasks show that training the Pre-LN Transformer does not rely on the learning rate warm-up stage and can be trained much faster than the Post-LN Transformer."
],
[
"In this paper, we study why the learning rate warm-up stage is important in training the Transformer and show that the location of layer normalization matters. We show that in the original Transformer, which locates the layer normalization outside the residual blocks, the expected gradients of the parameters near the output layer are large at initialization. This leads to an unstable training when using a large learning rate. We further show that the Transformer which locates the layer normalization inside the residual blocks, can be trained without the warm-up stage and converges much faster. In the future, we will investigate other strategies of positioning the layer normalization and understand the optimization of Transformer from a theoretical perspective."
],
[
"The training/validation/test sets of the IWSLT14 German-to-English (De-En) task contain about 153K/7K/7K sentence pairs, respectively. We use a vocabulary of 10K tokens based on a joint source and target byte pair encoding (BPE) BIBREF44. All of our experiments use a Transformer architecture with a 6-layer encoder and 6-layer decoder. The size of embedding is set to 512, the size of hidden nodes in attention sub-layer and position-wise feed-forward network sub-layer are set to 512 and 1024, and the number of heads is set to 4. Label smoothed cross entropy is used as the objective function by setting $\\epsilon = 0.1$ BIBREF46, and we apply dropout with a ratio 0.1. The batch size is set to be 4096 tokens. When we decode translation results from the model during inference, we set beam size as 5 and the length penalty as 1.2."
],
[
"The configuration of IWLST14 De-En task is the same as in Section 3. For the WMT14 En-De task, we replicate the setup of BIBREF0, which consists of about 4.5M training parallel sentence pairs, and uses a 37K vocabulary based on a joint source and target BPE. Newstest2013 is used as the validation set, and Newstest2014 is used as the test set. One of the basic configurations of the Transformer architecture is the base setting, which consists of a 6-layer encoder and 6-layer decoder. The size of the hidden nodes and embeddings are set to 512. The number of heads is 8. Label smoothed cross entropy is used as the objective function by setting $\\epsilon = 0.1$. The batch size is set to be 8192 tokens per GPU on 16 NVIDIA Tesla P40 GPUs."
],
[
"We follow BIBREF8 to use English Wikipedia corpus and BookCorpus for the pre-training. As the dataset BookCorpus BIBREF40 is no longer freely distributed. We follow the suggestions from BIBREF8 to crawl and collect BookCorpus on our own. The concatenation of two datasets includes roughly 3.4B words in total, which is comparable with the data corpus used in BIBREF8. We first segment documents into sentences with Spacy; Then, we normalize, lower-case, and tokenize texts using Moses BIBREF43 and apply BPEBIBREF45. We randomly split documents into one training set and one validation set. The training-validation ratio for pre-training is 199:1. All experiments are conducted on 32 NVIDIA Tesla P40 GPUs.",
"The base model in BIBREF8 consists of 12 Transformer layers. The size of hidden nodes and embeddings are set to 768, and the number of heads is set to 12."
],
[
"The Microsoft Research Paraphrase Corpus BIBREF42 is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in the pair are semantically equivalent, and the task is to predict the equivalence. The performance is evaluated by the accuracy."
],
[
"The Recognizing Textual Entailment (RTE) datasets come from a series of annual textual entailment challenges BIBREF41. The task is to predict whether sentences in a sentence pair are entailment. The performance is evaluated by the accuracy."
],
[
"We use the validation set for evaluation. To fine-tune the models, following BIBREF8, BIBREF39, we search the optimization hyper-parameters in a search space including different batch sizes (16/32), learning rates ($1e^{-5}$ - $1e^{-4}$) and number of epochs (3-8). We find that the validation accuracy are sensitive to random seeds, so we repeat fine-tuning on each task for 6 times using different random seeds and compute the 95% confidence interval of validation accuracy."
],
[
"Denote $X=(X_1,X_2,...,X_d)$ in which $X_i$ are i.i.d. Gaussian random variables with distribution $N(0,\\sigma ^2)$. Denote $\\rho _X(x)$ as the probability density function of $X_1$. Then $\\mathbb {E}(\\Vert \\text{ReLU}(X)\\Vert _2^2)=\\sum _{i=1}^d \\mathbb {E}[\\text{ReLU}(X_i)^2] =\\sum _{i=1}^d \\mathbb {E}[\\text{ReLU}(X_i)^2|X_i\\ge 0]\\mathbb {P}(X_i\\ge 0) =\\frac{d}{2}\\mathbb {E}[\\text{ReLU}(X_1)^2|X_1\\ge 0]=\\frac{d}{2}\\mathbb {E}[X_1^2|X_1\\ge 0] =\\frac{d}{2}\\int _{-\\infty }^{+\\infty } x^2 \\rho _{X|X>0}(x)dx =\\frac{d}{2}\\int _{0}^{+\\infty } x^2 2\\rho _{X}(x)dx =\\frac{1}{2}\\sigma ^2 d$."
],
[
"At initialization, the layer normalization is computed as $\\text{LN}(v) = \\frac{v - \\mu }{\\sigma }$. It is easy to see that layer normalization at initialization projects any vector $v$ onto the $d-1$-sphere of radius $\\sqrt{d}$ since $\\Vert \\text{LN}(v)\\Vert _2^2=\\Vert \\frac{v - \\mu }{\\sigma }\\Vert _2^2=\\frac{\\sum _{k=1}^d(v_{k} -\\mu )^2}{\\sigma ^2}=d$.",
"We first estimate the expected $l_2$ norm of each intermediate output $x^{post,1}_{l,i},\\cdots ,x^{post,5}_{l,i}$ for $l>0$. Using Xavier initialization, the elements in $W^{V,l}$ are i.i.d. Gaussian random variables sampled from $N(0,1/d)$. Since $\\Vert x_{l,i}^{post}\\Vert _2^2=d$ by the definition of Layer Normalization when $l>0$, we have",
"and $\\mathbb {E}(\\Vert x_{l,i}^{post,2}\\Vert _2^2)=\\mathbb {E}(\\Vert x_{l,i}^{post}\\Vert _2^2)+\\mathbb {E}(\\Vert x_{l,i}^{post,1}\\Vert _2^2)=\\mathbb {E}(\\Vert x_{l,i}^{post}\\Vert _2^2)+\\mathbb {E}(\\Vert \\frac{1}{n}\\sum _{i=1}^n x_{l,i}^{post}\\Vert _2^2)\\ge \\mathbb {E}(\\Vert x_{l,i}^{post}\\Vert _2^2)=d$.",
"Similarly, we have $\\Vert x_{l,i}^{post,3}\\Vert _2^2=d$ by the definition of Layer Normalization. Again, for the ReLU activation function, the elements in $W^{1,l}$ and $W^{2,l}$ are i.i.d. Gaussian random variables sampled from $N(0,1/d)$. According to Lemma 1, we have",
"Based on this, we can estimate the scale of $\\mathbb {E}(\\Vert x_{l,i}^{post,5}\\Vert _2^2)$ as follows.",
"Using similar technique we can bound $\\mathbb {E}(\\Vert x_{l,i}^{pre}\\Vert _2^2)$ for the Pre-LN Transformer.",
"It is easy to see that we have $\\mathbb {E}(\\Vert x_{l,i}^{pre}\\Vert _2^2) \\le \\mathbb {E}(\\Vert x_{l,i}^{pre,3}\\Vert _2^2) \\le \\mathbb {E}(\\Vert x_{l,i}^{pre}\\Vert _2^2)+d$. And similar to (10)-(12),",
"Combining both, we have $\\mathbb {E}(\\Vert x_{l,i}^{pre}\\Vert _2^2) +\\frac{1}{2}d\\le \\mathbb {E}(\\Vert x_{l+1,i}^{pre}\\Vert _2^2)\\le \\mathbb {E}(\\Vert x_{l,i}^{pre}\\Vert _2^2)+\\frac{3}{2}d$. Then we have $(1+\\frac{l}{2})d\\le \\mathbb {E}(\\Vert x_{l,i}^{pre}\\Vert _2^2)\\le (1+\\frac{3l}{2})d$ by induction."
],
[
"The proof of Lemma 3 is based on Lemma 4.1:",
"Lemma 4 Let $\\alpha \\in \\mathbb {R}^d$ be a vector such that $\\Vert \\alpha \\Vert _2=1$, then the eigenvalue of $I-\\alpha ^{\\top }\\alpha $ is either 1 or 0.",
"Let $\\lbrace e_1,...,e_d\\rbrace $ be unit vectors such that $e_1=\\alpha $ and $e_i\\bot e_j$ for all $(i,j)$. Then we have $e_1(I-\\alpha ^{\\top }\\alpha )=e_1-e_1\\alpha ^{\\top }\\alpha =e_1-\\alpha =0$ and $e_i(I-\\alpha ^{\\top }\\alpha )=e_i-e_i\\alpha ^{\\top }\\alpha =e_i$ for $i\\ne 1$. So $e_i$ are all the eigenvectors of $I-\\alpha ^{\\top }\\alpha $, and their corresponding eigenvalues are $(0,1,1,...,1)$. Hence we complete our proof.",
"[Proof of Lemma 3] Denote $y=x(I-\\frac{1}{d}\\textbf {1}^{\\top }\\textbf {1})$, where $\\textbf {1}=(1,1,...,1)\\in \\mathbb {R}^d$, then the layer normalization can be rewritten as",
"We explicitly calculate the Jacobian of layer normalization as",
"where $\\delta _{ij}=1$ when $i=j$ and $\\delta _{ij}=0$ when $i\\ne j$. In the matrix form,",
"and",
"Since the eigenvalue of the matrix $(I-\\frac{y^{\\top }y}{\\Vert y\\Vert _2^2})$ and $(I-\\frac{1}{d}\\textbf {1}^{\\top }\\textbf {1})$ are either 1 or 0 (by Lemma 4.1), we have $\\Vert (I-\\frac{y^{\\top }y}{\\Vert y\\Vert _2^2})\\Vert _2=\\mathcal {O}(1)$ and $\\Vert (I-\\frac{1}{d}\\textbf {1}^{\\top }\\textbf {1})\\Vert _2=\\mathcal {O}(1)$. So the spectral norm of $\\mathbf {J}_{LN}(x)$ is"
],
[
"The proof of Theorem 1 is based on Lemma 4.2:",
"Lemma 5 Let $Y$ be a random variable that is never larger than B. Then for all $a<B$,",
"Let $X=B-Y$, then $X\\ge 0$ and Markov's inequality tells us that",
"Hence",
"[Proof of Theorem 1] We prove Theorem 1 by estimating each element of the gradient matrix. Namely, we will analyze $\\frac{\\partial \\tilde{\\mathcal {L}}}{\\partial W^{2,L}_{pq}}$ for $p,q \\in \\lbrace 1,...,d\\rbrace $. The loss of the post-LN Transformer can be written as",
"Through back propagation, for each $i\\in \\lbrace 1,2,...,n\\rbrace $ the gradient of $\\mathcal {L}(x_{L+1,i})$ with respect to the last layer's parameter $W^{2,L}$ in the post-LN setting can be written as:",
"Here $[\\text{ReLU}(x_{L,i}^{post,3}W^{1,L})]_p$ means the $p$-th element of $\\text{ReLU}(x_{L,i}^{post,3}W^{1,L})$. So the absolute value of $\\frac{\\partial \\mathcal {L}(x_{L+1,i}^{post})}{\\partial W^{2,L}_{pq}}$ can be bounded by",
"which implies",
"Since all the derivatives are bounded, we have $\\Vert \\frac{\\partial \\mathcal {L}(x_{L+1,i}^{post})}{\\partial x_{L+1,i}^{post}}\\Vert ^2_2=\\mathcal {O}(1)$. So",
"Since $\\Vert x_{L,i}^{post,3}\\Vert _2^2=d$, $[x_{L,i}^{post,3}W^{1,L}]_p$ has distribution $N(0,1)$, using Chernoff bound we have",
"So",
"Thus with probability at least $0.99$, for all $p=1,2,...,d$ we have $\\text{ReLU}([x_{L,i}^{post,3}W^{1,L}]_p)^2\\le 2\\ln {100d}$.",
"Since with probability $1-\\delta (\\epsilon )$, $\\frac{|\\Vert x^{post,5}_{L,i}\\Vert _2^2-\\mathbb {E}\\Vert x^{post,5}_{L,i}\\Vert _2^2|}{\\mathbb {E}\\Vert x^{post,5}_{L,i}\\Vert _2^2}\\le \\epsilon $, we have $\\Vert x^{post,5}_{L,i}\\Vert _2^2\\le (1+\\epsilon )\\mathbb {E}\\Vert x^{post,5}_{L,i}\\Vert _2^2$. Using Lemma 4.2, we have",
"for an arbitrary constant $\\alpha _0>0$, which equals",
"So according to union bound, with probability at least $0.99-\\delta (\\epsilon )-\\frac{\\epsilon }{1+\\epsilon -\\alpha _0}$ we have $|\\frac{\\partial \\mathcal {L}(x_{L+1,i}^{post})}{\\partial W^{2,L}_{pq}}|^2=\\mathcal {O}(\\left[\\Vert \\mathbf {J}_{LN}(x_{L,i}^{post,5})\\Vert ^2_2|[\\text{ReLU}(x_{L,i}^{post,3}W^{1,L})]_p|^2\\right])\\le \\mathcal {O}(\\frac{2d\\ln {100d}}{\\Vert x_{L,i}^{post,5}\\Vert _2^2})\\le \\mathcal {O}(\\frac{d\\ln {d}}{\\alpha _0\\mathbb {E}\\Vert x_{L,i}^{post,5}\\Vert _2^2})=\\mathcal {O}(\\frac{\\ln {d}}{\\alpha _0})$. So we have",
"and",
".",
"The loss of the pre-LN Transformer can be written as",
"Using the same technique, in the pre-LN setting the gradient of $\\mathcal {L}(x_{Final,i}^{pre})$ with respect to the last layer's parameter $W^{2,L}$ can be written as",
"So the absolute value of each component of the gradient is bounded by",
"Since $\\Vert x_{L,i}^{pre,4}\\Vert _2^2=d$ and $[ x_{L,i}^{pre,4}W^{1,L}]_p$ obeys distribution $N(0,1)$, using Chernoff bound we have",
"So",
"So with probability at least $0.99$, for all $p=1,2,...,d$ we have $\\text{ReLU}([ x_{L,i}^{pre,4}W^{1,L}]_p)^2\\le 2\\ln {100d}$.",
"Since with probability $1-\\delta (\\epsilon )$, $\\frac{|\\Vert x^{pre}_{L+1,i}\\Vert _2^2-\\mathbb {E}\\Vert x^{pre}_{L+1,i}\\Vert _2^2|}{\\mathbb {E}\\Vert x^{pre}_{L+1,i}\\Vert _2^2}\\le \\epsilon $, we have $\\Vert x^{pre}_{L+1,i}\\Vert _2^2\\le (1+\\epsilon )\\mathbb {E}\\Vert x^{pre}_{L+1,i}\\Vert _2^2$. Using Lemma 5, we have",
"which equals",
"According to union bound, with probability $0.99-\\delta (\\epsilon )-\\frac{\\epsilon }{1+\\epsilon -\\alpha _0}$ we have $|\\frac{\\partial \\mathcal {L}(x_{Final,i}^{pre})}{\\partial W^{2,L}_{pq}}|^2=\\mathcal {O}(\\left[\\Vert \\mathbf {J}_{LN}(x_{L+1,i}^{pre})\\Vert ^2_2|[\\text{ReLU}(x_{L,i}^{pre,4}W^{1,L})]_p|^2\\right])\\le \\mathcal {O}(\\frac{2d\\ln {100d}}{\\Vert x_{L+1,i}^{pre}\\Vert _2^2})\\le \\mathcal {O}(\\frac{d\\ln {d}}{\\alpha _0\\mathbb {E}\\Vert x_{L+1,i}^{pre}\\Vert _2^2})=\\mathcal {O}(\\frac{\\ln {d}}{\\alpha _0 L})$. So we have",
"Thus $\\Vert \\frac{\\partial \\tilde{\\mathcal {L}}}{\\partial W^{2,L}}\\Vert _F=\\sqrt{\\sum _{p,q=1}^d|\\frac{\\partial \\tilde{\\mathcal {L}}}{\\partial W^{2,L}_{pq}}|^2}\\le \\mathcal {O}(\\sqrt{\\frac{d^2\\ln {d}}{\\alpha _0L}})$.",
"Take $\\alpha _0=\\frac{1}{10}$, we have that with probability at least $0.99-\\delta (\\epsilon )-\\frac{\\epsilon }{0.9+\\epsilon }$, for the Post-LN Transformer we have $\\Vert \\frac{\\partial \\tilde{\\mathcal {L}}}{\\partial W^{2,L}}\\Vert _F\\le \\mathcal {O}(d\\sqrt{\\ln {d}})$ and for the Pre-LN Transformer we have $\\Vert \\frac{\\partial \\tilde{\\mathcal {L}}}{\\partial W^{2,L}}\\Vert _F\\le \\mathcal {O}(d\\sqrt{\\frac{\\ln {d}}{L}})$"
],
[
"For simplicity, we denote $x_l=\\text{Concat}(x_{l,1},...,x_{l,n})\\in \\mathbb {R}^{nd}$ and $x_l^k=\\text{Concat}(x_{l,1}^k,...,x_{l,n}^k)\\in \\mathbb {R}^{nd}$ for $k=\\lbrace 1,2,3,4,5\\rbrace $. Then in the Post-LN Transformer, the gradient of the parameters in the $l$-th layer (take $W^{2,l}$ as an example) can be written as",
"where",
"The Jacobian matrices of the Post-LN Transformer layers are:",
"where Jji = diag(( xj,ipost,3(w11,j)), ...,( xj,ipost,3(wd1,j))) Rd d",
"Using Hölder's inequality, we have",
"Since $\\frac{\\partial x_{j+1}}{\\partial x_j^{post,5}}=diag(\\mathbf {J}_{LN}(x_{j,1}^{post,5}),...,\\mathbf {J}_{LN}(x_{j,n}^{post,5}))$, we have $\\sqrt{\\mathbb {E}\\left[\\Vert \\frac{\\partial x_{j+1}^{post}}{\\partial x_j^{post,5}}\\Vert _2^2\\right]} \\approx \\sqrt{\\mathbb {E}\\frac{d}{\\Vert x_{j,1}^{post,5}\\Vert ^2_2}} \\approx \\sqrt{\\frac{2}{3}}$ when $\\Vert x_{j,1}^{post,5}\\Vert ^2_2$ concentrates around its expectation $\\mathbb {E}\\Vert x_{j,1}^{post,5}\\Vert ^2_2$ which equals $\\frac{3}{2}d$ according to Lemma 2. Therefore, when we estimate the norm of $\\frac{\\partial \\tilde{\\mathcal {L}}}{\\partial W^{2,l}}$ for post-LN transformer, there exists a term $\\mathcal {O}(\\frac{2}{3}^{(L-l)/2})$, which exponentially decreases as $l$ goes smaller. Similarly, in the pre-LN Transformer, the gradient can be written as",
"where",
"The Jacobian matrices of the Pre-LN Transformer layers are:",
"If $l$ is sufficiently large, the norm of $\\mathbf {J}_{LN}(x_{j,i}^{pre})$ and $\\mathbf {J}_{LN}(x^{pre,3}_{j,i})$ are very small (of order $\\mathcal {O}(\\frac{1}{\\sqrt{j}})$) as $j$ is between $l+1$ and $L$, which means the eigenvalues of matrix $\\frac{\\partial x_{j+1}^{pre}}{\\partial x_j^{pre,3}}$ and $\\frac{\\partial x_{j}^{pre,3}}{\\partial x_j^{pre}}$ are close to 1. Then we can see that $\\mathbb {E}\\Vert \\frac{\\partial x^{pre}_{j+1}}{\\partial x_j^{pre,3}}\\Vert _2$ and $\\mathbb {E}\\Vert \\frac{\\partial x_{j}^{pre,3}}{\\partial x_j^{pre}}\\Vert _2$ are nearly 1, and the norm of $\\frac{\\partial \\tilde{\\mathcal {L}}}{\\partial W^{2,l}}$ for pre-LN transformer is independent of $l$ when $l$ is large."
],
[
"In this section we give an example of $(\\epsilon ,\\delta )$-bounded random variable. This example comes from Example 2.5 in BIBREF47 and we give a short description below.",
"If $Z=(Z_1,...,Z_n)$ is a Gaussian vector with distribution $N(0,I_n)$, then $Y=\\Vert Z\\Vert ^2_2=\\sum _{k=1}^n Z_k^2$ has distribution $\\chi ^2_n$. And $\\mathbb {E}Y=\\sum _{k=1}^n \\mathbb {E}Z_k^2=n$",
"A random variable $X$ with mean $\\mu =\\mathbb {E}[X]$ is called sub-exponential if there are non-negative parameters $(\\nu ,\\alpha )$ such that $\\mathbb {E}[\\exp (\\lambda (X-\\mu ))]\\le \\exp (\\frac{\\nu ^2\\lambda ^2}{2})$ for all $|\\lambda |<\\frac{1}{\\alpha }$. The next proposition comes from Proposition 2.2 in BIBREF47.",
"Proposition 1 (Sub-exponential tail bound) Suppose that $X$ is sub-exponential with parameters $(\\nu ,\\alpha )$. Then",
"and from Example 2.5 in BIBREF47, the $\\chi ^2$ variable $Y$ is sub-exponential with parameters $(\\nu , \\alpha )=(2 \\sqrt{n}, 4)$. So we can derive the one-sided bound",
"So $Y$ is $(\\epsilon ,\\delta )$-bounded with $\\epsilon \\in (0,1)$ and $\\delta =\\exp (-n\\epsilon ^2/8)$."
],
[
"Theoretically, we find that the gradients of the parameters near the output layers are very large for the Post-LN Transformer and suggest using large learning rates to those parameters makes the training unstable. To verify whether using small-step updates mitigates the issue, we use a very small but fixed learning rate and check whether it can optimize the Post-LN Transformer (without the learning rate warm-up step) to a certain extent. In detail, we use a fixed learning rate of $1e^{-4}$ at the beginning of the optimization, which is much smaller than the $\\text{lr}_{max}= 1e^{-3}$ in the paper. Please note that as the learning rates during training are small, the training converges slowly, and this setting is not very practical in real large-scale tasks. We plot the validation curve together with other baseline approaches in Figure 6. We can see from the figure, the validation loss (pink curve) is around 4.3 in 27 epochs. This loss is much lower than that of the Post-LN Transformer trained using a large learning rate (blue curve). But it is still worse than the SOTA performance (green curve)."
]
],
"section_name": [
"Introduction",
"Related work",
"Optimization for the Transformer ::: Transformer with Post-Layer Normalization",
"Optimization for the Transformer ::: Transformer with Post-Layer Normalization ::: Self-attention sub-layer",
"Optimization for the Transformer ::: Transformer with Post-Layer Normalization ::: Position-wise FFN sub-layer",
"Optimization for the Transformer ::: Transformer with Post-Layer Normalization ::: Residual connection and layer normalization",
"Optimization for the Transformer ::: Transformer with Post-Layer Normalization ::: Post-LN Transformer",
"Optimization for the Transformer ::: The learning rate warm-up stage",
"Optimization for the Transformer ::: The learning rate warm-up stage ::: Experimental setting",
"Optimization for the Transformer ::: The learning rate warm-up stage ::: Results and discussions",
"Optimization for the Transformer ::: Understanding the Transformer at initialization",
"Optimization for the Transformer ::: Understanding the Transformer at initialization ::: Notations",
"Optimization for the Transformer ::: Understanding the Transformer at initialization ::: Parameter Initialization",
"Optimization for the Transformer ::: Understanding the Transformer at initialization ::: Post-LN Transformer v.s. Pre-LN Transformer",
"Optimization for the Transformer ::: Understanding the Transformer at initialization ::: Extended theory to other layers/parameters",
"Optimization for the Transformer ::: Empirical verification of the theory and discussion",
"Optimization for the Transformer ::: Empirical verification of the theory and discussion ::: On the concentration property",
"Optimization for the Transformer ::: Empirical verification of the theory and discussion ::: On Theorem 1",
"Optimization for the Transformer ::: Empirical verification of the theory and discussion ::: On the extended theory",
"Optimization for the Transformer ::: Empirical verification of the theory and discussion ::: The critical warm-up stage for Post-LN Transformer",
"Experiments",
"Experiments ::: Experiment Settings ::: Machine Translation",
"Experiments ::: Experiment Settings ::: Unsupervised Pre-training (BERT)",
"Experiments ::: Experiment Results ::: Machine Translation",
"Experiments ::: Experiment Results ::: Unsupervised Pre-training (BERT)",
"Conclusion and Future Work",
"Experimental Settings ::: Machine Translation ::: Experiment on Section 3",
"Experimental Settings ::: Machine Translation ::: Experiment on Section 4",
"Experimental Settings ::: Unsupervised Pretraining",
"Experimental Settings ::: GLUE Dataset ::: MRPC",
"Experimental Settings ::: GLUE Dataset ::: RTE",
"Experimental Settings ::: GLUE Dataset ::: Fine-tuning on GLUE tasks",
"Proof of Lemma 1",
"Proof of Lemma 2",
"Proof of Lemma 3",
"Proof of Theorem 1",
"Extension to other layers",
"Examples of @!START@$(\\epsilon ,\\delta )$@!END@-bounded random variables",
"Small learning rate experiment"
]
} | {
"answers": [
{
"annotation_id": [
"7de04e6caed80b1342de0d76c04907e4f15cba86",
"c91a9e7561de0ba98d445479ed693367bd922265"
],
"answer": [
{
"evidence": [
"We record validation loss of the model checkpoints and plot them in Figure FIGREF47. Similar to the machine translation tasks, the learning rate warm-up stage can be removed for the Pre-LN model. The Pre-LN model can be trained faster. For example, the Post-LN model achieves 1.69 validation loss at 500k updates while the Pre-LN model achieves similar validation loss at 700k updates, which suggests there is a 40% speed-up rate. Note that $T_{warmup}$ (10k) is far less than the acceleration (200k) which suggests the Pre-LN Transformer is easier to optimize using larger learning rates. We also evaluate different model checkpoints on the downstream task MRPC and RTE (more details can be found in the supplementary material). The experiments results are plotted in Figure FIGREF48 and FIGREF49. We can see that the Pre-LN model also converges faster on the downstream tasks."
],
"extractive_spans": [
"40% speed-up rate"
],
"free_form_answer": "",
"highlighted_evidence": [
"For example, the Post-LN model achieves 1.69 validation loss at 500k updates while the Pre-LN model achieves similar validation loss at 700k updates, which suggests there is a 40% speed-up rate."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We record validation loss of the model checkpoints and plot them in Figure FIGREF47. Similar to the machine translation tasks, the learning rate warm-up stage can be removed for the Pre-LN model. The Pre-LN model can be trained faster. For example, the Post-LN model achieves 1.69 validation loss at 500k updates while the Pre-LN model achieves similar validation loss at 700k updates, which suggests there is a 40% speed-up rate. Note that $T_{warmup}$ (10k) is far less than the acceleration (200k) which suggests the Pre-LN Transformer is easier to optimize using larger learning rates. We also evaluate different model checkpoints on the downstream task MRPC and RTE (more details can be found in the supplementary material). The experiments results are plotted in Figure FIGREF48 and FIGREF49. We can see that the Pre-LN model also converges faster on the downstream tasks."
],
"extractive_spans": [
"40%"
],
"free_form_answer": "",
"highlighted_evidence": [
"For example, the Post-LN model achieves 1.69 validation loss at 500k updates while the Pre-LN model achieves similar validation loss at 700k updates, which suggests there is a 40% speed-up rate. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"1f13f253bb2091de9344fd54ae5e9c80e1fb72ac",
"1f6557ccbbf97c564c7fd071270159eb40dbb01e"
],
"answer": [
{
"evidence": [
"Experiments ::: Experiment Settings ::: Machine Translation",
"We conduct our experiments on two widely used tasks: the IWSLT14 German-to-English (De-En) task and the WMT14 English-to-German (En-De) task. For the IWSLT14 De-En task, we use the same model configuration as in Section 3. For the WMT14 En-De task, we use the Transformer base setting. More details can be found in the supplementary material.",
"For training the Pre-LN Transformer, we remove the learning rate warm-up stage. On the IWSLT14 De-En task, we set the initial learning rate to be $5e^{-4}$ and decay the learning rate at the 8-th epoch by 0.1. On the WMT14 En-De task, we run two experiments in which the initial learning rates are set to be $7e^{-4}/1.5e^{-3}$ respectively. Both learning rates are decayed at the 6-th epoch followed by the inverse square root learning rate scheduler.",
"Experiments ::: Experiment Settings ::: Unsupervised Pre-training (BERT)",
"We follow BIBREF8 to use English Wikipedia corpus and BookCorpus for pre-training. As the dataset BookCorpus BIBREF40 is no longer freely distributed. We follow the suggestions from BIBREF8 to crawl and collect BookCorpus on our own. The concatenation of two datasets contains roughly 3.4B words in total, which is comparable with the data corpus used in BIBREF8. We randomly split documents into one training set and one validation set. The training-validation ratio for pre-training is 199:1.",
"We use base model configuration in our experiments. Similar to the translation task, we train the Pre-LN BERT without the warm-up stage and compare it with the Post-LN BERT. We follow the same hyper-parameter configuration in BIBREF8 to train the Post-LN BERT using 10k warm-up steps with $\\text{lr}_{max}=1e^{-4}$. For the Pre-LN BERT, we use linear learning rate decay starting from $3e^{-4}$ without the warm-up stage. We have tried to use a larger learning rate (such as $3e^{-4}$) for the Post-LN BERT but found the optimization diverged."
],
"extractive_spans": [
" experiments on two widely used tasks: the IWSLT14 German-to-English (De-En) task and the WMT14 English-to-German (En-De) task",
"we train the Pre-LN BERT without the warm-up stage and compare it with the Post-LN BERT"
],
"free_form_answer": "",
"highlighted_evidence": [
"Experiments ::: Experiment Settings ::: Machine Translation\nWe conduct our experiments on two widely used tasks: the IWSLT14 German-to-English (De-En) task and the WMT14 English-to-German (En-De) task.",
"For training the Pre-LN Transformer, we remove the learning rate warm-up stage.",
"Experiments ::: Experiment Settings ::: Unsupervised Pre-training (BERT)\nWe follow BIBREF8 to use English Wikipedia corpus and BookCorpus for pre-training.",
"We use base model configuration in our experiments. Similar to the translation task, we train the Pre-LN BERT without the warm-up stage and compare it with the Post-LN BERT."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We conduct experiments on the IWSLT14 German-to-English (De-En) machine translation task. We mainly investigate two aspects: whether the learning rate warm-up stage is essential and whether the final model performance is sensitive to the value of $T_{\\text{warmup}}$. To study the first aspect, we train the model with the Adam optimizer BIBREF20 and the vanilla SGD optimizer BIBREF35 respectively. For both optimziers, we check whether the warm-up stage can be removed. We follow BIBREF0 to set hyper-parameter $\\beta $ to be $(0.9,0.98)$ in Adam. We also test different $\\text{lr}_{max}$ for both optimizers. For Adam, we set $\\text{lr}_{max}=5e^{-4}$ or $1e^{-3}$, and for SGD, we set $\\text{lr}_{max}=5e^{-3}$ or $1e^{-3}$. When the warm-up stage is used, we set $T_{\\text{warmup}}=4000$ as suggested by the original paper BIBREF0. To study the second aspect, we set $T_{\\text{warmup}}$ to be 1/500/4000 (“1” refers to the no warm-up setting) and use $\\text{lr}_{max}=5e^{-4}$ or $1e^{-3}$ with Adam. For all experiments, a same inverse square root learning rate scheduler is used after the warm-up stage. We use both validation loss and BLEU BIBREF36 as the evaluation measure of the model performance. All other details can be found in the supplementary material."
],
"extractive_spans": [],
"free_form_answer": "whether the learning rate warm-up stage is essential, whether the final model performance is sensitive to the value of Twarmup.",
"highlighted_evidence": [
"We conduct experiments on the IWSLT14 German-to-English (De-En) machine translation task. We mainly investigate two aspects: whether the learning rate warm-up stage is essential and whether the final model performance is sensitive to the value of $T_{\\text{warmup}}$."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"23557323a3d64de20d95d335209c42e8a5334ffa",
"9251cd4162d80da2d7be96ac0c237549d3133868"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"How much is training speeded up?",
"What experiments do they perform?",
"What is mean field theory?"
],
"question_id": [
"55bde89fc5822572f794614df3130d23537f7cf2",
"523bc4e3482e1c9a8e0cb92cfe51eea92c20e8fd",
"6073be8b88f0378cd0c4ffcad87e1327bc98b991"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1. (a) Post-LN Transformer layer; (b) Pre-LN Transformer layer.",
"Table 1. Post-LN Transformer v.s. Pre-LN Transformer",
"Figure 2. Performances of the models optimized by Adam and SGD on the IWSLT14 De-En task.",
"Figure 3. The norm of gradients of 1. different layers in the 6-6 Transformer (a,b). 2. W 2,L in different size of the Transformer (c,d).",
"Figure 4. Performances of the models on the IWSLT14 De-En task and WMT14 En-De task",
"Figure 5. Performances of the models on unsupervised pre-training (BERT) and downstream tasks",
"Figure 6. Performances of the models on the IWSLT14 De-En task."
],
"file": [
"2-Figure1-1.png",
"4-Table1-1.png",
"4-Figure2-1.png",
"7-Figure3-1.png",
"8-Figure4-1.png",
"8-Figure5-1.png",
"17-Figure6-1.png"
]
} | [
"What experiments do they perform?"
] | [
[
"2002.04745-Experiments ::: Experiment Settings ::: Unsupervised Pre-training (BERT)-0",
"2002.04745-Optimization for the Transformer ::: The learning rate warm-up stage ::: Experimental setting-0",
"2002.04745-Experiments ::: Experiment Settings ::: Machine Translation-1",
"2002.04745-Experiments ::: Experiment Settings ::: Unsupervised Pre-training (BERT)-1",
"2002.04745-Experiments ::: Experiment Settings ::: Machine Translation-0"
]
] | [
"whether the learning rate warm-up stage is essential, whether the final model performance is sensitive to the value of Twarmup."
] | 378 |
1907.12984 | DuTongChuan: Context-aware Translation Model for Simultaneous Interpreting | In this paper, we present DuTongChuan, a novel context-aware translation model for simultaneous interpreting. This model is able to constantly read streaming text from the Automatic Speech Recognition (ASR) model and simultaneously determine the boundaries of Information Units (IUs) one after another. The detected IU is then translated into a fluent translation with two simple yet effective decoding strategies: partial decoding and context-aware decoding. In practice, by controlling the granularity of IUs and the size of the context, we can easily get a good trade-off between latency and translation quality. Elaborate evaluation from human translators reveals that our system achieves promising translation quality (85.71% for Chinese-English, and 86.36% for English-Chinese), especially in the sense of surprisingly good discourse coherence. According to an End-to-End (speech-to-speech simultaneous interpreting) evaluation, this model presents impressive performance in reducing latency (to less than 3 seconds at most times). Furthermore, we successfully deploy this model in a variety of Baidu's products which have hundreds of millions of users, and we release it as a service in our AI platform. | {
"paragraphs": [
[
"Recent progress in Automatic Speech Recognition (ASR) and Neural Machine Translation (NMT), has facilitated the research on automatic speech translation with applications to live and streaming scenarios such as Simultaneous Interpreting (SI). In contrast to non-real time speech translation, simultaneous interpreting involves starting translating source speech, before the speaker finishes speaking (translating the on-going speech while listening to it). Because of this distinguishing feature, simultaneous interpreting is widely used by multilateral organizations (UN/EU), international summits (APEC/G-20), legal proceedings, and press conferences. Despite of recent advance BIBREF0 , BIBREF1 , the research on simultaneous interpreting is notoriously difficult BIBREF0 due to well known challenging requirements: high-quality translation and low latency.",
"Many studies present methods to improve the translation quality by enhancing the robustness of translation model against ASR errors BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 . On the other hand, to reduce latency, some researchers propose models that start translating after reading a few source tokens BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF1 . As one representative work related to this topic, recently, we present a translation model using prefix-to-prefix framework with INLINEFORM0 policy BIBREF0 . This model is simple yet effective in practice, achieving impressive performance both on translation quality and latency.",
"However, existing work pays less attention to the fluency of translation, which is extremely important in the context of simultaneous translation. For example, we have a sub-sentence NMT model that starts to translate after reading a sub-sentence rather than waiting until the end of a sentence like the full-sentence models does. This will definitely reduce the time waiting for the source language speech. However, as shown in the Figure FIGREF2 , the translation for each sub-sentence is barely adequate, whereas the translation of the entire source sentence lacks coherence and fluency. Moreover, it is clear that the model produces an inappropriate translation “your own” for the source token “自己” due to the absence of the preceding sub-sentence.",
"To make the simultaneous machine translation more accessible and producible, we borrow SI strategies used by human interpreters to create our model. As shown in Figure FIGREF3 , this model is able to constantly read streaming text from the ASR model, and simultaneously determine the boundaries of Information Units (IUs) one after another. Each detected IU is then translated into a fluent translation with two simple yet effective decoding strategies: partial decoding and context-aware decoding. Specifically, IUs at the beginning of each sentence are sent to the partial decoding module. Other information units, either appearing in the middle or at the end of a sentence, are translated into target language by the context-aware decoding module. Notice that this module is able to exploit additional context from the history so that the model can generate coherent translation. This method is derived from the “salami technique” BIBREF13 , BIBREF14 , or “chunking”, one of the most commonly used strategies by human interpreters to cope with the linearity constraint in simultaneous interpreting. Having severely limited access to source speech structure in SI, interpreters tend to slice up the incoming speech into smaller meaningful pieces that can be directly rendered or locally reformulated without having to wait for the entire sentence to unfold.",
"In general, there are several remarkable novel advantages that differ our model from the previous work:",
"For a comprehensive evaluation of our system, we use two evaluation metrics: translation quality and latency. According to the automatic evaluation metric, our system presents excellent performance both in translation quality and latency. In the speech-to-speech scenario, our model achieves an acceptability of 85.71% for Chinese-English translation, and 86.36% for English-Chinese translation in human evaluation. Moreover, the output speech lags behind the source speech by an average of less than 3 seconds, which presents surprisingly good experience for machine translation users BIBREF15 , BIBREF16 , BIBREF17 . We also ask three interpreters with SI experience to simultaneously interpret the test speech in a mock conference setting. However, the target texts transcribed from human SI obtain worse BLEU scores as the reference in the test set are actually from written translating rather than simultaneous interpreting. More importantly, when evaluated by human translators, the performance of NMT model is comparable to the professional human interpreter.",
"The contributions of this paper can be concluded into the following aspects:"
],
[
"As shown in Figure FIGREF7 , our model consists of two key modules: an information unit boundary detector and a tailored NMT model. In the process of translation, the IU detector will determine the boundary for each IU while constantly reading the steaming input from the ASR model. Then, different decoding strategies are applied to translate IUs at the different positions.",
"In this section, we use “IU” to denote one sub-sentence for better description. But in effect, our translation model is a general solution for simultaneous interpreting, and is compatible to IUs at arbitrary granularity, i.e., clause-level, phrase-level, and word-level, etc.",
"For example, by treating a full-sentence as an IU, the model is reduced to the standard translation model. When the IU is one segment, it is reduced to the segment-to-segment translation model BIBREF18 , BIBREF12 . Moreover, if we treat one token as an IU, it is reduced to our previous wait-k model BIBREF0 . The key point of our model is to train the IU detector to recognize the IU boundary at the corresponding granularity.",
"In the remain of this section, we will introduce above two components in details."
],
[
"Recent success on pre-training indicates that a pre-trained language representation is beneficial to downstream natural language processing tasks including classification and sequence labeling problems BIBREF19 , BIBREF20 , BIBREF21 . We thus formulate the IU boundary detection as a classification problem, and fine-tune the pre-trained model on a small size training corpus. Fine-tuned in several iterations, the model learns to recognize the boundaries of information units correctly.",
"As shown in Figure FIGREF13 , the model tries to predict the potential class for the current position. Once the position is assigned to a definitely positive class, its preceding sequence is labeled as one information unit. One distinguishing feature of this model is that we allow it to wait for more context so that it can make a reliable prediction. We call this model a dynamic context based information unit boundary detector.",
"Definition 1 Assuming the model has already read a sequence INLINEFORM0 with INLINEFORM1 tokens, we denote INLINEFORM2 as the anchor, and the subsequence INLINEFORM3 with INLINEFORM4 tokens as dynamic context.",
"For example, in Figure FIGREF13 , the anchor in both cases is “姬”, and the dynamic context in the left side case is “这”, and in the right side case is “这个”.",
"Definition 2 If the normalized probability INLINEFORM0 for the prediction of the current anchor INLINEFORM1 is larger than a threshold INLINEFORM2 , then the sequence INLINEFORM3 is a complete sequence, and if INLINEFORM4 is smaller than a threshold INLINEFORM5 ( INLINEFORM6 ), it is an incomplete sequence, otherwise it is an undetermined sequence.",
"For a complete sequence INLINEFORM0 , we will send it to the corresponding translation model . Afterwards, the detector will continue to recognize boundaries in the rest of the sequence ( INLINEFORM1 ). For an incomplete sequence, we will take the INLINEFORM2 as the new anchor for further detection. For an undetermined sequence, which is as shown in Figure FIGREF13 , the model will wait for a new token INLINEFORM3 , and take ( INLINEFORM4 ) as dynamic context for further prediction.",
"In the training stage, for one common sentence including two sub-sequences, INLINEFORM0 and INLINEFORM1 . We collect INLINEFORM2 plus any token in INLINEFORM3 as positive training samples, and the other sub-sequences in INLINEFORM4 as negative training samples. We refer readers to Appendix for more details.",
"In the decoding stage, we begin with setting the size of the dynamic context to 0, and then determine whether to read more context according to the principle defined in definition SECREF15 ."
],
[
"Traditional NMT models are usually trained on bilingual corpora containing only complete sentences. However in our context-aware translation model, information units usually are sub-sentences. Intuitively, the discrepancy between the training and the decoding will lead to a problematic translation, if we use the conventional NMT model to translate such information units. On the other hand, conventional NMT models rarely do anticipation. Whereas in simultaneous interpreting, human interpreters often have to anticipate the up-coming input and render a constituent at the same time or even before it is uttered by the speaker.",
"In our previous work BIBREF0 , training a wait-k policy slightly differs from the traditional method. When predicting the first target token, we mask the source content behind the INLINEFORM0 token, in order to make the model learn to anticipate. The prediction of other tokens can also be obtained by moving the mask-window token-by-token from position INLINEFORM1 to the end of the line. According to our practical experiments, this training strategy do help the model anticipate correctly most of the time.",
"Following our previous work, we propose the partial decoding model, a tailored NMT model for translating the IUs that appear at the beginning of each sentence. As depicted in Figure FIGREF17 , in the training stage, we mask the second sub-sentence both in the source and target side. While translating the first sub-sentence, the model learns to anticipate the content after the comma, and produces a temporary translation that can be further completed with more source context. Clearly, this method relies on the associated sub-sentence pairs in the training data (black text in Figure FIGREF17 ). In this paper, we propose an automatic method to acquire such sub-sentence pairs.",
"Definition 3 Given a source sentence INLINEFORM0 with INLINEFORM1 tokens, a target sentence INLINEFORM2 with INLINEFORM3 tokens, and a word alignment set INLINEFORM4 where each alignment INLINEFORM5 is a tuple indicating a word alignment existed between the source token INLINEFORM6 and target token INLINEFORM7 , a sub-sentence pair INLINEFORM8 holds if satisfying the following conditions: DISPLAYFORM0 ",
"To acquire the word alignment, we run the open source toolkit fast_align , and use a variety of standard symmetrization heuristics to generate the alignment matrix. In the training stage, we perform training by firstly tuning the model on a normal bilingual corpus, and then fine-tune the model on a special training corpus containing sub-sentence pairs."
],
[
"For IUs that have one preceding sub-sentence, the context-aware decoding model is applied to translate them based on the pre-generated translations. The requirements of this model are obvious:",
"The model is required to exploit more context to continue the translation.",
"The model is required to generate the coherent translation given partial pre-generated translations.",
"Intuitively, the above requirements can be easily satisfied using a force decoding strategy. For example, when translating the second sub-sentence in “这点也是以前让我非常地诧异,也是非常纠结的地方”, given the already-produced translation of the first sub-sentence “It also surprised me very much before .”, the model finishes the translation by adding “It's also a very surprising , tangled place .”. Clearly, translation is not that accurate and fluent with the redundant constituent “surprising”. We ascribe this to the discrepancy between training and decoding. In the training stage, the model learns to predict the translation based on the full source sentence. In the decoding stage, the source contexts for translating the first-subsentence and the second-subsentence are different. Forcing the model to generate identical translation of the first sub-sentence is very likely to cause under-translation or over-translation.",
"To produce more adequate and coherent translation, we make the following refinements:",
"During training, we force the model to focus on learning how to continue the translation without over-translation and under-translation.",
"During decoding, we discard a few previously generated translations, in order to make more fluent translations.",
"As shown in Figure FIGREF19 , during training, we do not mask the source input, instead we mask the target sequence aligned to the first sub-sentence. This strategy will force the model to learn to complete the half-way done translation, rather than to concentrate on generating a translation of the full sentence.",
"Moreover, in the decoding stage, as shown in Figure FIGREF28 , we propose to discard the last INLINEFORM0 tokens from the generated partial translation (at most times, discarding the last token brings promising result). Then the context-aware decoding model will complete the rest of the translation. The motivation is that the translation of the tail of a sub-sentence is largely influenced by the content of the succeeding sub-sentence. By discarding a few tokens from previously generated translation, the model is able to generate a more appropriate translation. In the practical experiment, this slight modification is proved to be effective in generating fluent translation."
],
[
"In the work of DBLP:journals/corr/abs-1810-08398 and arivazhagan2019monotonic, they used the average lagging as the metric for evaluating the latency. However, there are two major flaws of this metric:",
"1) This metric is unsuitable for evaluating the sub-sentence model. Take the sentence in Figure FIGREF3 for example. As the model reads four tokens “她说 我 错了 那个”, and generates six target tokens “She said I was wrong ,”, the lag of the last target token is one negative value ( INLINEFORM0 ) according to its original definition.",
"2) This metric is unsuitable for evaluating latency in the scenario of speech-to-speech translation. DBLP:journals/corr/abs-1810-08398 considered that the target token generated after the cut-off point doesn't cause any lag. However, this assumption is only supported in the speech-to-text scenario. In the speech-to-speech scenario, it is necessary to consider the time for playing the last synthesized speech.",
"Therefore, we instead propose a novel metric, Equilibrium Efficiency (EE), which measures the efficiency of equilibrium strategy.",
"Definition 4 Consider a sentence with INLINEFORM0 subsequences, and let INLINEFORM1 be the length of INLINEFORM2 source subsequence that emits a target subsequence with INLINEFORM3 tokens. Then the equilibrium efficiency is: INLINEFORM4 , where INLINEFORM5 is defined as: DISPLAYFORM0 ",
"and INLINEFORM0 , INLINEFORM1 is an empirical factor.",
"In practice, we set INLINEFORM0 to 0.3 for Chinese-English translation (reading about 200 English tokens in one minute). The motivation of EE is that one good model should equilibrate the time for playing the target speech to the time for listening to the speaker. Assuming playing one word takes one second, the EE actually measures the latency from the audience hearing the final target word to the speaker finishing the speech. For example, the EE of the sentence in Figure FIGREF7 is equal to INLINEFORM1 , since the time for playing the sequence “She said I was wrong” is equilibrated to the time for speaker speaking the second sub-sentence “那个 叫 什么 什么 呃 妖姬”."
],
[
"We conduct multiple experiments to evaluate the effectiveness of our system in many ways."
],
[
"We use a subset of the data available for NIST OpenMT08 task . The parallel training corpus contains approximate 2 million sentence pairs. We choose NIST 2006 (NIST06) dataset as our development set, and the NIST 2002 (NIST02), 2003 (NIST03), 2004 (NIST04) 2005 (NIST05), and 2008 (NIST08) datasets as our test sets. We will use this dataset to evaluate the performance of our partial decoding and context-aware decoding strategy from the perspective of translation quality and latency.",
"Recently, we release Baidu Speech Translation Corpus (BSTC) for open research . This dataset covers speeches in a wide range of domains, including IT, economy, culture, biology, arts, etc. We transcribe the talks carefully, and have professional translators to produce the English translations. This procedure is extremely difficult due to the large number of domain-specific terminologies, speech redundancies and speakers' accents. We expect that this dataset will help the researchers to develop robust NMT models on the speech translation. In summary, there are many features that distinguish this dataset to the previously related resources:",
"Speech irregularities are kept in transcription while omitted in translation (eg. filler words like “嗯, 呃, 啊”, and unconscious repetitions like “这个这个呢”), which can be used to evaluate the robustness of the NMT model dealing with spoken language.",
"Each talk's transcription is translated into English by a single translator, and then segmented into bilingual sentence pairs according to the sentence boundaries in the English translations. Therefore, every sentence is translated based on the understanding of the entire talk and is translated faithfully and coherently in global sense.",
"We use the streaming multi-layer truncated attention model (SMLTA) trained on the large-scale speech corpus (more than 10,000 hours) and fine-tuned on a number of talk related corpora (more than 1,000 hours), to generate the 5-best automatic recognized text for each acoustic speech.",
"The test dataset includes interpretations produced by simultaneous interpreters with professional experience. This dataset contributes an essential resource for the comparison between translation and interpretation.",
"We randomly extract several talks from the dataset, and divide them into the development and test set. In Table TABREF34 , we summarize the statistics of our dataset. The average number of utterances per talk is 152.6 in the training set, 59.75 in the dev set, and 162.5 in the test set.",
"We firstly run the standard Transformer model on the NIST dataset. Then we evaluate the quality of the pre-trained model on our proposed speech translation dataset, and propose effective methods to improve the performance of the baseline. In that the testing data in this dataset contains ASR errors and speech irregularities, it can be used to evaluate the robustness of novel methods.",
"In the final deployment, we train our model using a corpus containing approximately 200 million bilingual pairs both in Chinese-English and English-Chinese translation tasks."
],
[
"To preprocess the Chinese and the English texts, we use an open source Chinese Segmenter and Moses Tokenizer . After tokenization, we convert all English letters into lower case. And we use the “multi-bleu.pl” script to calculate BLEU scores. Except in the large-scale experiments, we conduct byte-pair encoding (BPE) BIBREF22 for both Chinese and English by setting the vocabulary size to 20K and 18K for Chinese and English, respectively. But in the large-scale experiments, we utilize a joint vocabulary for both Chinese-English and English-Chinese translation tasks, with a vocabulary size of 40K."
],
[
"We implement our models using PaddlePaddle , an end-to-end open source deep learning platform developed by Baidu. It provides a complete suite of deep learning libraries, tools and service platforms to make the research and development of deep learning simple and reliable. For training our dynamic context sequence boundary detector, we use ERNIE BIBREF20 as our pre-trained model.",
"For fair comparison, we implement the following models:",
"baseline: A standard Transformer based model with big version of hyper parameters.",
"sub-sentence: We split a full sentence into multiple sub-sentences by comma, and translate them using the baseline model. To evaluate the translation quality, we concatenate the translation of each sub-sentence into one sentence.",
"wait-k: This is our previous work BIBREF0 .",
"context-aware: This is our proposed model using context-aware decoding strategy, without fine-tuning on partial decoding model.",
"partial decoding: This is our proposed model using partial decoding.",
"discard INLINEFORM0 tokens: The previously generated INLINEFORM1 tokens are removed to complete the rest of the translation by the context-aware decoding model."
],
[
"We firstly conduct our experiments on the NIST Chinese-English translation task.",
"To validate the effectiveness of our translation model, we run two baseline models, baseline and sub-sentence. We also compare the translation quality as well as latency of our models with the wait-k model.",
"Effectiveness on Translation Quality. As shown in Table TABREF49 , there is a great deal of difference between the sub-sentence and the baseline model. On an average the sub-sentence shows weaker performance by a 3.08 drop in BLEU score (40.39 INLINEFORM0 37.31). Similarly, the wait-k model also brings an obvious decrease in translation quality, even with the best wait-15 policy, its performance is still worse than the baseline system, with a 2.15 drop, averagely, in BLEU (40.39 INLINEFORM1 38.24). For a machine translation product, a large degradation in translation quality will largely affect the use experience even if it has low latency.",
"Unsurprisingly, when treating sub-sentences as IUs, our proposed model significantly improves the translation quality by an average of 2.35 increase in BLEU score (37.31 INLINEFORM0 39.66), and its performance is slightly lower than the baseline system with a 0.73 lower average BLEU score (40.39 INLINEFORM1 39.66). Moreover, as we allow the model to discard a few previously generated tokens, the performance can be further improved to 39.82 ( INLINEFORM2 0.16), at a small cost of longer latency (see Figure FIGREF58 ). It is consistent with our intuition that our novel partial decoding strategy can bring stable improvement on each testing dataset. It achieves an average improvement of 0.44 BLEU score (39.22 INLINEFORM3 39.66) compared to the context-aware system in which we do not fine-tune the trained model when using partial decoding strategy. An interesting finding is that our translation model performs better than the baseline system on the NIST08 testing set. We analyze the translation results and find that the sentences in NIST08 are extremely long, which affect the standard Transformer to learn better representation BIBREF23 . Using context-aware decoding strategy to generate consistent and coherent translation, our model performs better by focusing on generating translation for relatively shorter sub-sentences.",
"Investigation on Decoding Based on Segment. Intuitively, treating one segment as an IU will reduce the latency in waiting for more input to come. Therefore, we split the testing data into segments according to the principle in Definition SECREF20 (if INLINEFORM0 in Definition SECREF20 is a comma, then the data is sub-sentence pair, otherwise it is a segment-pair.) .",
"As Table TABREF49 shows, although the translation quality of discard 1 token based on segment is worse than that based on sub-sentence (37.96 vs. 39.66), the performance can be significantly improved by allowing the model discarding more previously generated tokens. Lastly, the discard 6 tokens obtains an impressive result, with an average improvement of 1.76 BLEU score (37.96 INLINEFORM0 39.72).",
"Effects of Discarding Preceding Generated Tokens. As mentioned and depicted in Figure FIGREF28 , we discard one token in the previously generated translation in our context-aware NMT model. One may be interested in whether discarding more generated translation leads to better translation quality. However, when decoding on the sub-sentence, even the best discard 4 tokens model brings no significant improvement (39.66 INLINEFORM0 39.82) but a slight cost of latency (see in Figure FIGREF58 for visualized latency). While decoding on the segment, even discarding two tokens can bring significant improvement (37.96 INLINEFORM1 39.00). This finding proves that our partial decoding model is able to generate accurate translation by anticipating the future content. It also indicates that the anticipation based on a larger context presents more robust performance than the aggressive anticipation in the wait-k model, as well as in the segment based decoding model.",
"Effectiveness on latency. As latency in simultaneous machine translation is essential and is worth to be intensively investigated, we compare the latency of our models with that of the previous work using our Equilibrium Efficiency metric. As shown in Figure FIGREF58 , we plot the translation quality and INLINEFORM0 on the NIST06 dev set. Clearly, compared to the baseline system, our model significantly reduce the time delay while remains a competitive translation quality. When treating segments as IUs, the latency can be further reduced by approximate 20% (23.13 INLINEFORM1 18.65), with a slight decrease in BLEU score (47.61 INLINEFORM2 47.27). One interesting finding is that the granularity of information units largely affects both the translation quality and latency. It is clear the decoding based on sub-sentence and based on segment present different performance in two metrics. For the former model, the increase of discarded tokens results in an obvious decrease in translation quality, but no definite improvement in latency. The latter model can benefit from the increasing of discarding tokens both in translation quality and latency.",
"The latency of the wait-k models are competitive, their translation quality, however, is still worse than context-aware model. Improving the translation quality for the wait-k will clearly brings a large cost of latency (36.53 INLINEFORM0 46.14 vs. 10.94 INLINEFORM1 22.63). Even with a best k-20 policy, its performance is still worse than most context-aware models. More importantly, the intermediately generated target token in the wait-k policy is unsuitable for TTS due to the fact that the generated token is often a unit in BPE, typically is an incomplete word. One can certainly wait more target tokens to synthesize the target speech, however, this method will reduce to the baseline model. In general, experienced human interpreters lag approximately 5 seconds (15 INLINEFORM2 25 words) behind the speaker BIBREF15 , BIBREF16 , BIBREF17 , which indicates that the latency of our model is accessible and practicable ( INLINEFORM3 = 25 indicates lagging 25 words).",
"In our context-sensitive model, the dynamic context based information unit boundary detector is essential to determine the IU boundaries in the steaming input. To measure the effectiveness of this model, we compare its precision as well as latency against the traditional language model based methods, a 5-gram language model trained by KenLM toolkit , and an in-house implemented RNN based model. Both of two contrastive models are trained on approximate 2 million monolingual Chinese sentences. As shown in Table TABREF60 , it is clear that our model beats the previous work with an absolute improvement of more than 15 points in term of F-score (62.79 INLINEFORM0 78.26) and no obvious burden in latency (average latency). This observation indicates that with bidirectional context, the model can learn better representation to help the downstream tasks. In the next experiments, we will evaluate models given testing data with IU boundaries detected by our detector.",
"To our knowledge, almost all of the previous related work on simultaneous translation evaluate their models upon the clean testing data without ASR errors and with explicit sentence boundaries annotated by human translators. Certainly, testing data with real ASR errors and without explicit sentence boundaries is beneficial to evaluate the robustness of translation models. To this end, we perform experiments on our proposed BSTC dataset.",
"The testing data in BSTC corpus consists of six talks. We firstly employ our ASR model to recognize the acoustic waves into Chinese text, which will be further segmented into small pieces of sub-sentences by our IU detector. To evaluate the contribution of our proposed BSTC dataset, we firstly train all models on the NIST dataset, and then check whether the performance can be further improved by fine-tuning them on the BSTC dataset.",
"From the results shown in Table TABREF64 , we conclude the following observations:",
"Due to the relatively lower CER in ASR errors (10.32 %), the distinction between the clean input and the noisy input results in a BLEU score difference smaller than 2 points (15.85 vs. 14.60 for pre-train, and 21.98 vs. 19.91 for fine-tune).",
"Despite the small size of the training data in BSTC, fine-tuning on this data is essential to improve the performance of all models.",
"In all settings, the best system in context-aware model beats the wait-15 model.",
"Pre-trained models are not sensitive to errors from Auto IU, while fine-tuned models are.",
"Another interesting work is to compare machine translation with human interpretation. We request three simultaneous interpreters (S, A and B) with years of interpreting experience ranging from three to seven years, to interpret the talks in BSTC testing dataset, in a mock conference setting .",
"We concatenate the translation of each talk into one big sentence, and then evaluate it by BLEU score. From Table TABREF69 , we find that machine translation beats the human interpreters significantly. Moreover, the length of interpretations are relatively short, and results in a high length penalty provided by the evaluation script. The result is unsurprising, because human interpreters often deliberately skip non-primary information to keep a reasonable ear-voice span, which may bring a loss of adequacy and yet a shorter lag time, whereas the machine translation model translates the content adequately. We also use human interpreting results as references. As Table TABREF69 indicates, our model achieves a higher BLEU score, 28.08.",
"Furthermore, we ask human translators to evaluate the quality between interpreting and machine translation. To evaluate the performance of our final system, we select one Chinese talk as well as one English talk consisting of about 110 sentences, and have human translators to assess the translation from multiple aspects: adequacy, fluency and correctness. The detailed measurements are:",
"Bad: Typically, the mark Bad indicates that the translation is incorrect and unacceptable.",
"OK: If a translation is comprehensible and adequate, but with minor errors such as incorrect function words and less fluent phrases, then it will be marked as OK.",
"Good: A translation will be marked as Good if it contains no obvious errors.",
"As shown in Table TABREF70 , the performance of our model is comparable to the interpreting. It is worth mentioning that both automatic and human evaluation criteria are designed for evaluating written translation and have a special emphasis on adequacy and faithfulness. But in simultaneous interpreting, human interpreters routinely omit less-important information to overcome their limitations in working memory. As the last column in Table 6 shows, human interpreters' oral translations have more omissions than machine's and receive lower acceptability. The evaluation results do not mean that machines have exceeded human interpreters in simultaneous interpreting. Instead, it means we need machine translation criteria that suit simultaneous interpreting. We also find that the BSTC dataset is extremely difficult as the best human interpreter obtains a lower Acceptability 73.04%. Although the NMT model obtains impressive translation quality, we do not compare the latency of machine translation and human interpreting in this paper, and leave it to the future work.",
"To better understand the contribution of our model on generating coherent translation, we select one representative running example for analysis. As the red text in Figure FIGREF73 demonstrates that machine translation model generates coherent translation “its own grid” for the sub-sentence “这个网络”, and “corresponds actually to” for the subsequence “...对应的,就是每个...”. Compared to the human interpretation, our model presents comparable translation quality. In details, our model treats segments as IUs, and generates translation for each IU consecutively. While the human interpreter splits the entire source text into two sub-sentences, and produces the translation respectively.",
"In the final deployment, we train DuTongChuan on the large-scale training corpus. We also utilize techniques to enhance the robustness of the translation model, such as normalization of the speech irregularities, dealing with abnormal ASR errors, and content censorship, etc (see Appendix). We successfully deploy DuTongChuan in the Baidu Create 2019 (Baidu AI Developer Conference) .",
"As shown in Table TABREF74 , it is clear that DuTongChuan achieves promising acceptability on both translation tasks (85.71% for Chinese-English, and 86.36 % for English-Chinese). We also elaborately analyze the error types in the final translations, and we find that apart from errors occurring in translation and ASR, a majority of errors come from IU boundary detection, which account for nearly a half of errors. In the future, we should concentrate on improving the translation quality by enhancing the robustness of our IU boundary detector. We also evaluate the latency of our model in an End-to-End manner (speech-to-speech), and we find that the target speech slightly lags behind the source speech in less than 3 seconds at most times. The overall performance both on translation quality and latency reveals that DuTongChuan is accessible and practicable in an industrial scenario."
],
[
"The existing research on speech translation can be divided into two types: the End-to-End model BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 , BIBREF28 and the cascaded model. The former approach directly translates the acoustic speech in one language, into text in another language without generating the intermediate transcription for the source language. Depending on the complexity of the translation task as well as the scarce training data, previous literatures explore effective techniques to boost the performance. For example pre-training BIBREF29 , multi-task learning BIBREF24 , BIBREF27 , attention-passing, BIBREF30 , and knowledge distillation BIBREF28 etc.,. However, the cascaded model remains the dominant approach and presents superior performance practically, since the ASR and NMT model can be optimized separately training on the large-scale corpus.",
"Many studies have proposed to synthesize realistic ASR errors, and augment them with translation training data, to enhance the robustness of the NMT model towards ASR errors BIBREF2 , BIBREF3 , BIBREF4 . However, most of these approaches depend on simple heuristic rules and only evaluate on artificially noisy test set, which do not always reflect the real noises distribution on training and inference BIBREF5 , BIBREF6 , BIBREF7 .",
"Beyond the research on translation models, there are many research on the other relevant problems, such as sentence boundary detection for realtime speech translation BIBREF31 , BIBREF18 , BIBREF32 , BIBREF33 , BIBREF34 , low-latency simultaneous interpreting BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF35 , BIBREF36 , automatic punctuation annotation for speech transcription BIBREF37 , BIBREF38 , and discussion about human and machine in simultaneous interpreting BIBREF39 .",
"Focus on the simultaneous translation task, there are some work referring to the construction of the simultaneous interpreting corpus BIBREF40 , BIBREF41 , BIBREF42 . Particularly, BIBREF42 deliver a collection of a simultaneous translation corpus for comparative analysis on Japanese-English and English-Japanese speech translation. This work analyze the difference between the translation and the interpretations, using the interpretations from human simultaneous interpreters.",
"For better generation of coherent translations, gong2011cache propose a memory based approach to capture contextual information to make the statistical translation model generate discourse coherent translations. kuang2017cache,tu2018learning,P18-1118 extend similar memory based approach to the NMT framework. wang2017exploiting present a novel document RNN to learn the representation of the entire text, and treated the external context as the auxiliary context which will be retrieved by the hidden state in the decoder. tiedemann2017neural and P18-1117 propose to encode global context through extending the current sentence with one preceding adjacent sentence. Notably, the former is conducted on the recurrent based models while the latter is implemented on the Transformer model. Recently, we also propose a reinforcement learning strategy to deliberate the translation so that the model can generate more coherent translations BIBREF43 ."
],
[
"In this paper, we propose DuTongChuan, a novel context-aware translation model for simultaneous interpreting. This model is able to constantly read streaming text from the ASR model, and simultaneously determine the boundaries of information units one after another. The detected IU is then translated into a fluent translation with two simple yet effective decoding strategies: partial decoding and context-aware decoding. We also release a novel speech translation corpus, BSTC, to boost the research on robust speech translation task.",
"With elaborate comparison, our model obtains superior translation quality against the wait-k model, but also presents competitive performance in latency. Assessment from human translators reveals that our system achieves promising translation quality (85.71% for Chinese-English, and 86.36% for English-Chinese), specially in the sense of surprisingly good discourse coherence. Our system also presents superior performance in latency (delayed in less 3 seconds at most times) in a speech-to-speech simultaneous translation. We also deploy our simultaneous machine translation model in our AI platform, and welcome the other users to enjoy it.",
"In the future, we will conduct research on novel method to evaluate the interpreting."
],
[
"We thank Ying Chen for improving the written of this paper. We thank Yutao Qu for developing partial modules of DuTongChuan. We thank colleagues in Baidu for their efforts on construction of the BSTC. They are Zhi Li, Ying Chen, Xuesi Song, Na Chen, Qingfei Li, Xin Hua, Can Jin, Lin Su, Lin Gao, Yang Luo, Xing Wan, Qiaoqiao She, Jingxuan Zhao, Can Jin, Wei Jin, Xiao Yang, Shuo Liu, Yang Zhang, Jing Ma, Junjin Zhao, Yan Xie, Minyang Zhang, Niandong Du, etc.",
"We also thank tndao.com and zaojiu.com for contributing their speech corpora."
],
[
"For example, for a sentence “她说我错了,那个叫什么什么呃妖姬。”, there are some representative training samples:"
],
[
"To develop an industrial simultaneous machine translation system, it is necessary to deal with problems that affect the translation quality in practice such as large number of speech irregularities, ASR errors, and topics that allude to violence, religion, sex and politics."
],
[
"In the real talk, the speaker tends to express his opinion using irregularities rather than regular written language utilized to train prevalent machine translation relevant models. For example, as depicted in Figure FIGREF3 , the spoken language in the real talk often contains unconscious repetitions (i.e., “什么(shénmē) 什么(shénmē)), and filler words (“呃”, “啊”), which will inevitably affects the downstream models, especially the NMT model. The discrepancy between training and decoding is not only existed in the corpus, but also occurs due to the error propagation from ASR model (e.g. recognize the “饿 (è)” into filler word “呃 (è) ” erroneously), which is related to the field of robust speech NMT research.",
"In the study of robust speech translation, there are many methods can be applied to alleviate the discrepancy mostly arising from the ASR errors such as disfluency detection, fine-tuning on the noisy training data BIBREF2 , BIBREF3 , complex lattice input BIBREF4 , etc. For spoken language normalization, it is mostly related to the work of sentence simplification. However, the traditional methods for sentence simplification rely large-scale training corpus and will enhance the model complexity by incorporating an End-to-End model to transform the original input.",
"In our system, to resolve problems both on speech irregularities and ASR errors, we propose a simple rule heuristic method to normalize both spoken language and ASR errors, mostly focus on removing noisy inputs, including filler words, unconscious repetitions, and ASR error that is easy to be detected. Although faithfulness and adequacy is essential in the period of the simultaneous interpreting, however, in a conference, users can understand the majority of the content by discarding some unimportant words.",
"To remove unconscious repetitions, the problem can be formulated as the Longest Continuous Substring (LCS) problem, which can be solved by an efficient suffix-array based algorithm in INLINEFORM0 time complexity empirically. Unfortunately, this simple solution is problematic in some cases. For example, “他 必须 分成 很多 个 小格 , 一个 小格 一个 小格 完成”, in this case, the unconscious repetitions “一个 小格 一个 小格” can not be normalized to “一个 小格”. To resolve this drawback, we collect unconscious repetitions appearing more than 5 times in a large-scale corpus consisting of written expressions, resulting in a white list containing more than 7,000 unconscious repetitions. In practice, we will firstly retrieve this white list and prevent the candidates existed in it from being normalized.",
"According to our previous study, many ASR errors are caused by disambiguating homophone. In some cases, such error will lead to serious problem. For example, both “食油 (cooking oil)” and “石油 (oil)” have similar Chinese phonetic alphabet (shí yóu), but with distinct semantics. The simplest method to resolve this problem is to enhance the ASR model by utilizing a domain-specific language model to generate the correct sequence. However, this method requires an insatiably difficult requirement, a customized ASR model. To reduce the cost of deploying a customized ASR model, as well as to alleviate the propagation of ASR errors, we propose a language model based identifier to remove the abnormal contents.",
"Definition 5 For a given sequence INLINEFORM0 , if the value of INLINEFORM1 is lower than a threshold INLINEFORM2 , then we denote the token INLINEFORM3 as an abnormal content.",
"In the above definition, the value of INLINEFORM0 and INLINEFORM1 can be efficiently computed by a language model. In our final system, we firstly train a language model on the domain-specific monolingual corpus, and then identify the abnormal content before the context-aware translation model. For the detected abnormal content, we simply discard it rather than finding an alternative, which will lead to additional errors potentially. Actually, human interpreters often routinely omit source content due to the limited memory."
],
[
"For an industrial product, it is extremely important to control the content that will be presented to the audience. Additionally, it is also important to make a consistent translation for the domain-specific entities and terminologies. This two demands lead to two associate problems: content censorship and constrained decoding, where the former aims to avoid producing some translation while the latter has the opposite target, generating pre-specified translation.",
"Recently, post2018fast proposed a Dynamic Beam Allocation (DBA) strategy, a beam search algorithm that forces the inclusion of pre-specified words and phrases in the output. In the DBA strategy, there are many manually annotated constraints, to force the beam search generating the pre-specified translation. To satisfy the requirement of content censorship, we extend this algorithm to prevent the model from generating the pre-specified forbidden content, a collection that contains words and phrases alluding to violence, religion, sex and politics. Specially, during the beam search, we punish the candidate beam that matches a constraint of pre-specified forbidden content, to prevent it from being selected as the final translation."
]
],
"section_name": [
"Introduction",
"Context-aware Translation Model",
"Dynamic Context Based Information Unit Boundary Detector",
"Partial Decoding",
"Context-aware Decoding",
"Latency Metric: Equilibrium Efficiency",
"Evaluation",
"Data Description",
"Data Preprocess",
"Model Settings",
"Experiments",
"Related Work",
"Conclusion and Future Work",
"Acknowledgement",
"Training Samples for Information Unit Detector",
"Techniques for Robust Translation",
"Speech Irregularities Normalization",
"Constrained Decoding and Content Censorship"
]
} | {
"answers": [
{
"annotation_id": [
"43889e492b0a0576442f3f33538981e7e9cbdbf1",
"5d025eecd561d49893ea589f69c5a861146b3c5f"
],
"answer": [
{
"evidence": [
"We use a subset of the data available for NIST OpenMT08 task . The parallel training corpus contains approximate 2 million sentence pairs. We choose NIST 2006 (NIST06) dataset as our development set, and the NIST 2002 (NIST02), 2003 (NIST03), 2004 (NIST04) 2005 (NIST05), and 2008 (NIST08) datasets as our test sets. We will use this dataset to evaluate the performance of our partial decoding and context-aware decoding strategy from the perspective of translation quality and latency."
],
"extractive_spans": [
"NIST02",
"NIST03",
"NIST04",
"NIST05",
"NIST08"
],
"free_form_answer": "",
"highlighted_evidence": [
" We choose NIST 2006 (NIST06) dataset as our development set, and the NIST 2002 (NIST02), 2003 (NIST03), 2004 (NIST04) 2005 (NIST05), and 2008 (NIST08) datasets as our test sets."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We use a subset of the data available for NIST OpenMT08 task . The parallel training corpus contains approximate 2 million sentence pairs. We choose NIST 2006 (NIST06) dataset as our development set, and the NIST 2002 (NIST02), 2003 (NIST03), 2004 (NIST04) 2005 (NIST05), and 2008 (NIST08) datasets as our test sets. We will use this dataset to evaluate the performance of our partial decoding and context-aware decoding strategy from the perspective of translation quality and latency.",
"Recently, we release Baidu Speech Translation Corpus (BSTC) for open research . This dataset covers speeches in a wide range of domains, including IT, economy, culture, biology, arts, etc. We transcribe the talks carefully, and have professional translators to produce the English translations. This procedure is extremely difficult due to the large number of domain-specific terminologies, speech redundancies and speakers' accents. We expect that this dataset will help the researchers to develop robust NMT models on the speech translation. In summary, there are many features that distinguish this dataset to the previously related resources:",
"The test dataset includes interpretations produced by simultaneous interpreters with professional experience. This dataset contributes an essential resource for the comparison between translation and interpretation.",
"We randomly extract several talks from the dataset, and divide them into the development and test set. In Table TABREF34 , we summarize the statistics of our dataset. The average number of utterances per talk is 152.6 in the training set, 59.75 in the dev set, and 162.5 in the test set."
],
"extractive_spans": [
"2008 (NIST08) datasets",
"Baidu Speech Translation Corpus (BSTC)"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use a subset of the data available for NIST OpenMT08 task . The parallel training corpus contains approximate 2 million sentence pairs. We choose NIST 2006 (NIST06) dataset as our development set, and the NIST 2002 (NIST02), 2003 (NIST03), 2004 (NIST04) 2005 (NIST05), and 2008 (NIST08) datasets as our test sets.",
"Recently, we release Baidu Speech Translation Corpus (BSTC) for open research .",
"The test dataset includes interpretations produced by simultaneous interpreters with professional experience.",
"We randomly extract several talks from the dataset, and divide them into the development and test set."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"476a5a5994a1f4bd8520f586b464f6250d71fff7",
"eafd3a8bc85195503884497c7ec7dc451fa2775f"
],
"answer": [
{
"evidence": [
"For fair comparison, we implement the following models:",
"baseline: A standard Transformer based model with big version of hyper parameters.",
"sub-sentence: We split a full sentence into multiple sub-sentences by comma, and translate them using the baseline model. To evaluate the translation quality, we concatenate the translation of each sub-sentence into one sentence."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"For fair comparison, we implement the following models:\n\nbaseline: A standard Transformer based model with big version of hyper parameters.\n\nsub-sentence: We split a full sentence into multiple sub-sentences by comma, and translate them using the baseline model. To evaluate the translation quality, we concatenate the translation of each sub-sentence into one sentence."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"In our context-sensitive model, the dynamic context based information unit boundary detector is essential to determine the IU boundaries in the steaming input. To measure the effectiveness of this model, we compare its precision as well as latency against the traditional language model based methods, a 5-gram language model trained by KenLM toolkit , and an in-house implemented RNN based model. Both of two contrastive models are trained on approximate 2 million monolingual Chinese sentences. As shown in Table TABREF60 , it is clear that our model beats the previous work with an absolute improvement of more than 15 points in term of F-score (62.79 INLINEFORM0 78.26) and no obvious burden in latency (average latency). This observation indicates that with bidirectional context, the model can learn better representation to help the downstream tasks. In the next experiments, we will evaluate models given testing data with IU boundaries detected by our detector."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"To measure the effectiveness of this model, we compare its precision as well as latency against the traditional language model based methods, a 5-gram language model trained by KenLM toolkit , and an in-house implemented RNN based model. "
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"b62246bc6fb50b1af93196677d42373f6ff2ddf5",
"d8d06ac462f09162acc570c551680fc1e4462370"
],
"answer": [
{
"evidence": [
"Effectiveness on latency. As latency in simultaneous machine translation is essential and is worth to be intensively investigated, we compare the latency of our models with that of the previous work using our Equilibrium Efficiency metric. As shown in Figure FIGREF58 , we plot the translation quality and INLINEFORM0 on the NIST06 dev set. Clearly, compared to the baseline system, our model significantly reduce the time delay while remains a competitive translation quality. When treating segments as IUs, the latency can be further reduced by approximate 20% (23.13 INLINEFORM1 18.65), with a slight decrease in BLEU score (47.61 INLINEFORM2 47.27). One interesting finding is that the granularity of information units largely affects both the translation quality and latency. It is clear the decoding based on sub-sentence and based on segment present different performance in two metrics. For the former model, the increase of discarded tokens results in an obvious decrease in translation quality, but no definite improvement in latency. The latter model can benefit from the increasing of discarding tokens both in translation quality and latency."
],
"extractive_spans": [],
"free_form_answer": "It depends on the model used.",
"highlighted_evidence": [
"For the former model, the increase of discarded tokens results in an obvious decrease in translation quality, but no definite improvement in latency. The latter model can benefit from the increasing of discarding tokens both in translation quality and latency."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Unsurprisingly, when treating sub-sentences as IUs, our proposed model significantly improves the translation quality by an average of 2.35 increase in BLEU score (37.31 INLINEFORM0 39.66), and its performance is slightly lower than the baseline system with a 0.73 lower average BLEU score (40.39 INLINEFORM1 39.66). Moreover, as we allow the model to discard a few previously generated tokens, the performance can be further improved to 39.82 ( INLINEFORM2 0.16), at a small cost of longer latency (see Figure FIGREF58 ). It is consistent with our intuition that our novel partial decoding strategy can bring stable improvement on each testing dataset. It achieves an average improvement of 0.44 BLEU score (39.22 INLINEFORM3 39.66) compared to the context-aware system in which we do not fine-tune the trained model when using partial decoding strategy. An interesting finding is that our translation model performs better than the baseline system on the NIST08 testing set. We analyze the translation results and find that the sentences in NIST08 are extremely long, which affect the standard Transformer to learn better representation BIBREF23 . Using context-aware decoding strategy to generate consistent and coherent translation, our model performs better by focusing on generating translation for relatively shorter sub-sentences."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Unsurprisingly, when treating sub-sentences as IUs, our proposed model significantly improves the translation quality by an average of 2.35 increase in BLEU score (37.31 INLINEFORM0 39.66), and its performance is slightly lower than the baseline system with a 0.73 lower average BLEU score (40.39 INLINEFORM1 39.66). Moreover, as we allow the model to discard a few previously generated tokens, the performance can be further improved to 39.82 ( INLINEFORM2 0.16), at a small cost of longer latency (see Figure FIGREF58 ). It is consistent with our intuition that our novel partial decoding strategy can bring stable improvement on each testing dataset. It achieves an average improvement of 0.44 BLEU score (39.22 INLINEFORM3 39.66) compared to the context-aware system in which we do not fine-tune the trained model when using partial decoding strategy. An interesting finding is that our translation model performs better than the baseline system on the NIST08 testing set. We analyze the translation results and find that the sentences in NIST08 are extremely long, which affect the standard Transformer to learn better representation BIBREF23 . Using context-aware decoding strategy to generate consistent and coherent translation, our model performs better by focusing on generating translation for relatively shorter sub-sentences."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"Which datasets do they evaluate on?",
"Do they compare against a system that does not use streaming text, but has the entire text at disposal?",
"Does larger granularity lead to better translation quality?"
],
"question_id": [
"f3b4e52ba962a0004064132d123fd9b78d9e12e2",
"ea6edf45f094586caf4684463287254d44b00e95",
"ba406e07c33a9161e29c75d292c82a15503beae5"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: For this sentence, a full-sentence NMT model produces an appropriate translation with, however, a long latency in the context of simultaneous translation, as it needs to wait until the end of the full sentence to start translating. In contrast, a sub-sentence NMT model outputs a translation with less coherence and fluency, although it has a relatively short latency as it starts translating after reading the comma in the source text.",
"Figure 2: This example shows a special case using sub-sentences as our information units. The blue solid squares indicate the scope of source context we use for translation. The text in red are incrementally generated translations. We discard a preceding generated token to make a coherent translation.",
"Figure 3: In our context-aware translation model, the boundaries of information units in streaming ASR input are determined by a novel IU boundary detector. IUs at different positions are translated using different NMT models, if an IU stands at the beginning of a sentence, then it will be translated by the partial decoding module. Otherwise, context-aware decoding is applied to translate the IU into a coherent translation. Notice that the dashed squares in the first line denote the anchor to determine the IU boundary.",
"Figure 4: A running example of our dynamic context based IU boundary detector. In this example, the model learns to determine the classification of the current anchor, “姬” (we insert an additional symbol, SEP to be consistent with the training format in the work of Devlin et al. (2019)). If the probability (0.4 in left side case) of decision for a boundary at the present anchor is smaller than a threshold, i.e., 0.7, then it is necessary to consider more context (additional context: “这个”) to make a reliable decision (0.8 in right side case).",
"Figure 5: Source and target representation for training partial decoding model, where we mask the second subsentence by summing a negative infinite number when training the partial decoding model. For simplicity, we omit the embeddings for the target side.",
"Figure 6: Source and target representation for training incremental decoding model. We do not mask the source input, but mask the target sequence aligned to the first sub-sentence.",
"Figure 7: In the decoding stage, the context-aware decoding model will discard the last k tokens (in this example, k = 1) from the generated partial translation to produce a fluent translation.",
"Table 1: The summary of our proposed speech translation data. The volume of transcription is counted by characters, the volume of translation is counted by tokens, and the audio duration is counted by hours.",
"Table 2: The overall results on NIST Chinese-English translation task.",
"Figure 8: We show the latency for our proposed model (Left), and wait − k model (Right). For better under-",
"Table 3: The comparison between our sequence detector and previous work. The latency represents the words requiring to make an explicit decision.",
"Table 4: The overall results on BSTC Chinese-English translation task (Pre-train represents training on the NIST dataset, and fine-tune represents fine-tuning on the BSTC dataset.). Clean input indicates the input is from human annotated transcription, while the ASR input represents the input contains ASR errors. ASR + Auto IU indicates that the sentence boundary as well as sub-sentence is detected by our IU detector. Therefore, this data basically reflects the real environment in practical product.",
"Table 5: Comparison between machine translation and human interpretation. The interpretation reference consists of a collection of interpretations from S, A and B. Our model is trained on the large-scale corpus.",
"Table 6: Results of human evaluation for interpreting and machine translation. Missing Translation indicates the proportion of missing translation in all translation errors. Notice that inadequate translations are marked as BAD by the human translator.",
"Figure 9: This is a representative case that indicates our model can generate coherent translation.",
"Table 7: Results of DuTongChuan. C→E represents Chinese-English translation task, and E→C represents the English-Chinese translation task."
],
"file": [
"1-Figure1-1.png",
"2-Figure2-1.png",
"3-Figure3-1.png",
"4-Figure4-1.png",
"5-Figure5-1.png",
"6-Figure6-1.png",
"6-Figure7-1.png",
"8-Table1-1.png",
"9-Table2-1.png",
"10-Figure8-1.png",
"11-Table3-1.png",
"12-Table4-1.png",
"12-Table5-1.png",
"12-Table6-1.png",
"13-Figure9-1.png",
"13-Table7-1.png"
]
} | [
"Does larger granularity lead to better translation quality?"
] | [
[
"1907.12984-Experiments-7",
"1907.12984-Experiments-3"
]
] | [
"It depends on the model used."
] | 379 |
1911.09247 | How to Ask Better Questions? A Large-Scale Multi-Domain Dataset for Rewriting Ill-Formed Questions | We present a large-scale dataset for the task of rewriting an ill-formed natural language question to a well-formed one. Our multi-domain question rewriting MQR dataset is constructed from human contributed Stack Exchange question edit histories. The dataset contains 427,719 question pairs which come from 303 domains. We provide human annotations for a subset of the dataset as a quality estimate. When moving from ill-formed to well-formed questions, the question quality improves by an average of 45 points across three aspects. We train sequence-to-sequence neural models on the constructed dataset and obtain an improvement of 13.2% in BLEU-4 over baseline methods built from other data resources. We release the MQR dataset to encourage research on the problem of question rewriting. | {
"paragraphs": [
[
"Understanding text and voice questions from users is a difficult task as it involves dealing with “word salad” and ill-formed text. Ill-formed questions may arise from imperfect speech recognition systems, search engines, dialogue histories, inputs from low bandwidth devices such as mobile phones, or second language learners, among other sources. However, most downstream applications involving questions, such as question answering and semantic parsing, are trained on well-formed natural language. In this work, we focus on rewriting textual ill-formed questions, which could improve the performance of such downstream applications.",
"BIBREF0 introduced the task of identifying well-formed natural language questions. In this paper, we take a step further to investigate methods to rewrite ill-formed questions into well-formed ones without changing their semantics. We create a multi-domain question rewriting dataset (MQR) from human contributed Stack Exchange question edit histories. This dataset provides pairs of questions: the original ill-formed question and a well-formed question rewritten by the author or community contributors. The dataset contains 427,719 question pairs which come from 303 domains. The MQR dataset is further split into TRAIN and DEV/TEST, where question pairs in DEV/TEST have less $n$-gram overlap but better semantic preservation after rewriting. Table TABREF2 shows some example question pairs from the MQR DEV split.",
"Our dataset enables us to train models directly for the task of question rewriting. We train neural generation models on our dataset, including Long-Short Term Memory networks (LSTM; BIBREF1) with attention BIBREF2 and transformers BIBREF3. We show that these models consistently improve the well-formedness of questions although sometimes at the expense of semantic drift. We compare to approaches that do not use our training dataset, including general-purpose sentence paraphrasing, grammatical error correction (GEC) systems, and round trip neural machine translation. Methods trained on our dataset greatly outperform those developed from other resources. Augmenting our training set with additional question pairs such as Quora or Paralex question pairs BIBREF4 has mixed impact on this task. Our findings from the benchmarked methods suggest potential research directions to improve question quality.",
"To summarize our contributions:",
"We propose the task of question rewriting: converting textual ill-formed questions to well-formed ones while preserving their semantics.",
"We construct a large-scale multi-domain question rewriting dataset MQR from human generated Stack Exchange question edit histories. The development and test sets are of high quality according to human annotation. The training set is of large-scale. We release the MQR dataset to encourage research on the question rewriting task.",
"We benchmark a variety of neural models trained on the MQR dataset, neural models trained with other question rewriting datasets, and other paraphrasing techniques. We find that models trained on the MQR and Quora datasets combined followed by grammatical error correction perform the best in the MQR question rewriting task."
],
[
"Methods have been developed to reformulate or expand search queries BIBREF5. Sometimes query rewriting is performed for sponsored search BIBREF6, BIBREF7. This work differs from our goal as we rewrite ill-formed questions to be well-formed.",
"Some work rewrites queries by searching through a database of query logs to find a semantically similar query to replace the original query. BIBREF8 compute query similarities for query ranking based on user click information. BIBREF9 learn paraphrases of questions to improve question answering systems. BIBREF10 translate queries from search engines into natural language questions. They used Bing's search logs and their corresponding clicked question page as a query-to-question dataset. We work on question rewriting without any database of question logs.",
"Actively rewriting questions with reinforcement learning has been shown to improve QA systems BIBREF11. This work proposes to rewrite questions to fulfill more general quality criteria."
],
[
"A variety of paraphrase generation techniques have been proposed and studied BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17. Recently, BIBREF18 use a variational autoencoder to generate paraphrases from sentences and BIBREF19 use deep reinforcement learning to generate paraphrases. Several have generated paraphrases by separately modeling syntax and semantics BIBREF20, BIBREF21.",
"Paraphrase generation has been used in several applications. BIBREF22 use paraphrase generation as a data augmentation technique for natural language understanding. BIBREF20 and BIBREF23 generate adversarial paraphrases with surface form variations to measure and improve model robustness. BIBREF24 generate paraphrases using machine translation on parallel text and use the resulting sentential paraphrase pairs to learn sentence embeddings for semantic textual similarity.",
"Our work focuses on question rewriting to improve question qualities, which is different from general sentence paraphrasing."
],
[
"Text normalization BIBREF25 is the task of converting non-canonical language to “standard” writing. Non-canonical language frequently appears in informal domains such as social media postings or other conversational text, user-generated content, such as search queries or product reviews, speech transcriptions, and low-bandwidth input settings such as those found with mobile devices. Text normalization is difficult to define precisely and therefore difficult to provide gold standard annotations and evaluate systems for BIBREF26. In our setting, rewriting questions is defined implicitly through the choices made by the Stack Exchange community with the goals of helpfulness, clarity, and utility."
],
[
"Given a question $q_i$, potentially ill-formed, the question rewriting task is to convert it to a well-formed natural language question $q_w$ while preserving its semantics and intention. Following BIBREF0, we define a well-formed question as one satisfying the following constraints:",
"The question is grammatically correct. Common grammatical errors include misuse of third person singular or verb tense.",
"The question does not contain spelling errors. Spelling errors refer specifically to typos and other misspellings, but not to grammatical errors such as third person singular or tense misuse in verbs.",
"The question is explicit. A well-formed question must be explicit and end with a question mark. A command or search query-like fragment is not well-formed."
],
[
"We construct our Multi-Domain Question Rewriting (MQR) dataset from human contributed Stack Exchange question edit histories. Stack Exchange is a question answering platform where users post and answer questions as a community. Stack Exchange has its own standard of good questions, and their standard aligns well with our definition of well-formed questions. If questions on Stack Exchange do not meet their quality standards, members of the community often volunteer to edit the questions. Such edits typically correct spelling and grammatical errors while making the question more explicit and easier to understand.",
"We use 303 sub areas from Stack Exchange data dumps. The full list of area names is in the appendix. We do not include Stack Overflow because it is too specific to programming related questions. We also exclude all questions under the following language sub areas: Chinese, German, Spanish, Russian, Japanese, Korean, Latin, Ukrainian. This ensures that the questions in MQR are mostly English sentences. Having questions from 303 Stack Exchange sites makes the MQR dataset cover a broad range of domains.",
"We use “PostHistory.xml” and “Posts.xml” tables of each Stack Exchange site data dump. If a question appears in both “PostHistory.xml” and “Posts.xml”, it means the question was modified. We treat the most up-to-date Stack Exchange questions as a well formed-question and treat its version from “PostHistory.xml” as ill-formed. “PostHistory.xml” only keeps one edit for each question, so the MQR dataset does not contain duplicated questions.",
"The questions in the Stack Exchange raw data dumps do not always fulfill our data quality requirements. For example, some questions after rewriting are still not explicit. Sometimes rewriting introduces or deletes new information and cannot be done correctly without more context or the question description. We thus perform the following steps to filter the question pairs:",
"All well-formed questions in the pairs must start with “how”, “why”, “when”, “what”, “which”, “who”, “whose”, “do”, “where”, “does”, “is”, “are”, “must”, “may”, “need”, “did”, “was”, “were”, “can”, “has”, “have”, “are”. This step is performed to make sure the questions are explicit questions but not statements or commands.",
"To ensure there are no sentences written in non-English languages, we keep questions that contain 80% or more of valid English characters, including punctuation.",
"This yields the MQR dataset. We use the following heuristic criteria to split MQR into TRAIN, DEV, and TEST sets:",
"The BLEU scores between well-formed and ill-formed questions (excluding punctuation) are lower than 0.3 in DEV and TEST to ensure large variations after rewriting.",
"The lists of verbs and nouns between well-formed and ill-formed questions have a Jaccard similarity greater than 0.8 in DEV and TEST. We split DEV and TEST randomly and equally. This yields 2,112 instances in DEV and 2,113 instances in TEST.",
"The rest of the question edit pairs (423,495 instances) are placed in the TRAIN set.",
"Examples are shown in Table TABREF2. We release our TRAIN/DEV/TEST splits of the MQR dataset to encourage research in question rewriting."
],
[
"To understand the quality of the question rewriting examples in the MQR dataset, we ask human annotators to judge the quality of the questions in the DEV and TEST splits (abbreviated as DEVTEST onward). Specifically, we take both ill-formed and well-formed questions in DEVTEST and ask human annotators to annotate the following three aspects regarding each question BIBREF0:",
"Is the question grammatically correct?",
"Is the spelling correct? Misuse of third person singular or past tense in verbs are considered grammatical errors instead of spelling errors. Missing question mark in the end of a question is also considered as spelling errors.",
"Is the question an explicit question, rather than a search query, a command, or a statement?",
"The annotators were asked to annotate each aspect with a binary (0/1) answer. Examples of questions provided to the annotators are in Table TABREF13. We consider all “How to” questions (“How to unlock GT90 in Gran Turismo 2?”) as grammatical. Although it is not a complete sentence, this kind of question is quite common in our dataset and therefore we choose to treat it as grammatically correct.",
"The ill-formed and well-formed questions are shuffled so the annotators do not have any prior knowledge or bias regarding these questions during annotation. We randomly sample 300 questions from the shuffled DEVTEST questions, among which 145 examples are well-formed and 155 are ill-formed. Two annotators produce a judgment for each of the three aspects for all 300 questions.",
"The above annotation task considers a single question at a time. We also consider an annotation task related to the quality of a question pair, specifically whether the two questions in the pair are semantically equivalent. If rewriting introduces additional information, then the question rewriting task may require additional context to be performed, even for a human writer. This may happen when a user changes the question content or the question title is modified based on the additional description about the question. In the MQR dataset, we focus on question rewriting tasks that can be performed without extra information.",
"We randomly sample 100 question pairs from DEVTEST for annotation of semantic equivalence. Two annotators produced binary judgments for all 100 pairs. Example pairs are shown in Table TABREF14.",
"Table TABREF15 summarizes the human annotations of the quality of the DEVTEST portion of the MQR dataset. We summed up the binary scores from two annotators. There are clear differences between ill-formed and well-formed questions. Ill-formed question are indeed ill-formed and well-formed questions are generally of high quality. The average score over three aspects improves by 45 points from ill-formed to well-formed questions. Over 90% of the question pairs possess semantic equivalence, i.e., they do not introduce or delete information. Therefore, the vast majority of rewrites can be performed without extra information.",
"The Cohen's Kappa inter-rater reliability scores BIBREF27 are 0.83, 0.77, and 0.89 respectively for the question quality annotations, and 0.86 for question semantic equivalence. These values show good inter-rater agreement on the annotations of the qualities and semantic equivalences of the MQR question pairs."
],
[
"As the MQR dataset is constructed from 303 sub areas of the Stack Exchange networks, it covers a wide range of question domains. Table TABREF16 summarizes the number of categories in the TRAIN and DEVTEST portions of the MQR dataset, as well as the mean, standard deviation, minimum, and maximum number of instances per categories.",
"The number of questions from each sub area is not evenly distributed due to the fact that some sub areas are more popular and have more questions than the others, but the DEV/TEST splits still cover a reasonably large range of domains.",
"The most common categories in DEV and TEST are “diy”(295), “askubuntu”(288), “math”(250), “gaming”(189), and “physics”(140). The least common categories are mostly “Meta Stack Exchange” websites where people ask questions regarding the policies of posting questions on Stack Exchange sites. The most common categories in TRAIN are “askubuntu”(6237), “math”(5933), “gaming”(3938), “diy”(2791), and “2604”(scifi)."
],
[
"In this section, we describe the models and methods we benchmarked to perform the task of question rewriting.",
"To evaluate model performance, we apply our trained models to rewrite the ill-formed questions in TEST and treat the well-formed question in each pair as the reference sentence. We then compute BLEU-4 BIBREF28, ROUGE-1, ROUGE-2, ROUGE-L BIBREF29, and METEOR BIBREF30 scores. As a baseline, we also evaluate the original ill-formed question using the automatic metrics."
],
[
"We use the Tensor2Tensor BIBREF31 implementation of the transformer model BIBREF3. We use their “transformer_base” hyperparameter setting. The details are as follows: batch size 4096, hidden size 512, 8 attention heads, 6 transformer encoder and decoder layers, learning rate 0.1 and 4000 warm-up steps. We train the model for 250,000 steps and perform early stopping using the loss values on the DEV set.",
"In following sections, when a transformer model is used, we follow the same setting as described above."
],
[
"We use the attention mechanism proposed by BIBREF2. We use the Tensor2Tensor implementation BIBREF31 with their provided Luong Attention hyperparameter settings. We set batch size to 4096. The hidden size is 1000 and we use 4 LSTM hidden layers following BIBREF2."
],
[
"We also benchmark other methods involving different training datasets and models. All the methods in this subsection use transformer models."
],
[
"Round trip neural machine translation is an effective approach for question or sentence paraphrasing BIBREF32, BIBREF9, BIBREF20. It first translates a sentence to another pivot language, then translates it back to the original language. We consider the use of both German (De) and French (Fr) as the pivot language, so we require translation systems for En$\\leftrightarrow $De and En$\\leftrightarrow $Fr.",
"The English-German translation models are trained on WMT datasets, including News Commentary 13, Europarl v7, and Common Crawl, and evaluated on newstest2013 for early stopping. On the newstest2013 dev set, the En$\\rightarrow $De model reaches a BLEU-4 score of 19.6, and the De$\\rightarrow $En model reaches a BLEU-4 score of 24.6.",
"The English-French models are trained on Common Crawl 13, Europarl v7, News Commentary v9, Giga release 2, and UN doc 2000. On the newstest2013 dev set, the En$\\rightarrow $Fr model reaches a BLEU-4 score of 25.6, and the Fr$\\rightarrow $En model reaches a BLEU-4 score of 26.1."
],
[
"As some ill-formed questions are not grammatical, we benchmark a state-of-the-art grammatical error correction system on this task. We use the system of BIBREF33, a GEC ensemble model trained from Wikipedia edit histories and round trip translations."
],
[
"We also train a paraphrase generation model on a subset of the ParaNMT dataset BIBREF24, which was created automatically by using neural machine translation to translate the Czech side of a large Czech-English parallel corpus. We use the filtered subset of 5M pairs provided by the authors. For each pair of paraphrases (S1 and S2) in the dataset, we train the model to rewrite from S1 to S2 and also rewrite from S2 to S1. We use the MQR DEV set for early stopping during training."
],
[
"Table TABREF30 shows the performance of the models and methods described above. Among these methods models trained on MQR work best. GEC corrects grammatical errors and spelling errors, so it also improves the question quality in rewriting. Round trip neural machine translation is a faithful rewrite of the questions, and it naturally corrects some spelling and grammatical errors during both rounds of translation due to the strong language models present in the NMT models. However, it fails in converting commands and statements into questions.",
"The paraphrase generator trained on ParaNMT does not perform well, likely because of domain difference (there are not many questions in ParaNMT). It also is unlikely to convert non-question sentences into explicit questions."
],
[
"We consider two additional data resources to improve question rewriting models.",
"The first resource is the Quora Question Pairs dataset. This dataset contains question pairs from Quora, an online question answering community. Some question pairs are marked as duplicate by human annotators and other are not. We consider all Quora Question Pairs (Q1 and Q2) marked as duplicate as additional training data. We train the model to rewrite from Q1 to Q2 and also from Q2 to Q1. This gives us 298,364 more question pairs for training.",
"The second resource is the Paralex dataset BIBREF4. The questions in Paralex are scraped from WikiAnswers, where questions with similar content are clustered. As questions in the Paralex dataset may be noisy, we use the annotation from BIBREF0. Following their standard, we treat all questions with scores higher than 0.8 as well-formed questions. For each well-formed question, we take all questions in the same Paralex question cluster and construct pairs to rewrite from other questions in the cluster to the single well-formed question. This gives us 169,682 extra question pairs for training.",
"We also tried adding “identity” training examples in which the well-formed questions from the MQR TRAIN set are repeated to form a question pair.",
"The results of adding training data are summarized in Table TABREF31. Adding the identity pairs improves the ROUGE and METEOR scores, which are focused more on recall, while harming BLEU, which is focused on precision. We hypothesize that adding auto-encoding data improves semantic preservation, which is expected to help the recall-oriented metrics. Adding Quora Question Pairs improves performance on TEST but adding Paralex pairs does not. The reason may stem from domain differences: WikiAnswers (used in Paralex) is focused on factoid questions answered by encyclopedic knowledge while Quora and Stack Exchange questions are mainly answered by community contributors. Semantic drift occurs more often in Paralex question pairs as Paralex is constructed from question clusters, and a cluster often contains more than 5 questions with significant variation."
],
[
"In addition to the aforementioned methods, we also try combining multiple approaches. Table TABREF32 shows results when combining GEC and the Quora-augmented transformer model. We find that combining GEC and a transformer question rewriting model achieves better results than each alone. In particular, it is best to first rewrite the question using the transformer trained on MQR + Quora, then run GEC on the output.",
"We also tried applying the transformer (trained on MQR) twice, but it hurts the performance compared to applying it only once (see Table TABREF32)."
],
[
"To better evaluate model performance, we conduct a human evaluation on the model rewritten questions following the same guidelines from the “Dataset Quality” subsection. Among the 300 questions annotated earlier, we chose the ill-formed questions from the TEST split, which yields 75 questions. We evaluate questions rewritten by three methods (Transformer (MQR + Quora), GEC, and Transformer (MQR + Quora) $\\rightarrow $ GEC), and ask annotators to determine the qualities of the rewritten questions. To understand if question meanings change after rewriting, we also annotate whether a model rewritten question is semantically equivalent to the ill-formed question or equivalent to the well-formed one.",
"Table TABREF33 shows the annotations from two annotators. When the two annotators disagree, a judge makes a final decision. Note that the examples annotated here are a subset of those annotated in Table TABREF15, so the first row is different from the ill-formed questions in Table TABREF15. According to the annotations, the GEC method slightly improves the question quality scores. Although Table TABREF30 shows that GEC improves the question quality by some automatic metrics, it simply corrects a few grammatical errors and the rewritten questions still do not meet the standards of human annotators. However, the GEC model is good at preserving question semantics.",
"The Transformer (MQR + Quora) model and Transformer (MQR + Quora) $\\rightarrow $ GEC excel at improving question quality in all three aspects, but they suffer from semantic drift. This suggests that future work should focus on solving the problem of semantic drift when building question rewriting models.",
"Table TABREF34 shows two example questions rewritten by different methods. The questions rewritten by GEC remain unchanged but are still of low quality, whereas ParaNMT and round trip NMT make a variety of changes, resulting in large variations in question quality and semantics. Methods trained on MQR excel at converting ill-formed questions into explicit ones (e.g., adding “What is” in the first example and “How to” in the second example), but sometimes make grammatical errors (e.g., Trans. (MQR + Quora) misses “a” in the second example). According to Table TABREF32, combining neural models trained on MQR and GEC achieves the best results in automatic metrics. However, they still suffer from semantic drift. In the first example of Table TABREF34, the last two rewrites show significant semantic mistakes, generating non-existent words “widebutcherblock” and “widebitcherblock”."
],
[
"We proposed the task of question rewriting and produced a novel dataset MQR to target it. Our evaluation shows consistent gains in metric scores when using our dataset compared to systems derived from previous resources. A key challenge for future work is to design better models to rewrite ill-formed questions without changing their semantics. Alternatively, we could attempt to model the process whereby question content changes. Sometimes community members do change the content of questions in online forums. Such rewrites typically require extra context information, such as the question description. Additional work will be needed to address this context-sensitive question rewriting task."
],
[
"We thank Shankar Kumar, Zi Yang, Yiran Zhang, Rahul Gupta, Dekang Lin, Yuchen Lin, Guan-lin Chao, Llion Jones, and Amarnag Subramanya for their helpful discussions and suggestions."
]
],
"section_name": [
"Introduction",
"Related Work ::: Query and Question Rewriting",
"Related Work ::: Paraphrase Generation",
"Related Work ::: Text Normalization",
"Task Definition: Question Rewriting",
"MQR Dataset Construction and Analysis",
"MQR Dataset Construction and Analysis ::: Dataset Quality",
"MQR Dataset Construction and Analysis ::: Dataset Domains",
"Models and Experiments",
"Models and Experiments ::: Models Trained on MQR ::: Transformer.",
"Models and Experiments ::: Models Trained on MQR ::: LSTM Sequence to Sequence Model with Attention.",
"Models and Experiments ::: Methods Built from Other Resources",
"Models and Experiments ::: Methods Built from Other Resources ::: Round Trip Neural Machine Translation.",
"Models and Experiments ::: Methods Built from Other Resources ::: Grammatical Error Correction (GEC).",
"Models and Experiments ::: Methods Built from Other Resources ::: Paraphrase Generator Trained on ParaNMT.",
"Models and Experiments ::: Results",
"Models and Experiments ::: Additional Training Data",
"Models and Experiments ::: Combining Methods",
"Models and Experiments ::: Human Evaluation",
"Conclusion and Future Work",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"9f42cddbbb09e8b052dd10af8dfcd4a8666430af",
"aaedcbd8fc30914fd762e506265cadd71885ae2f"
],
"answer": [
{
"evidence": [
"We use 303 sub areas from Stack Exchange data dumps. The full list of area names is in the appendix. We do not include Stack Overflow because it is too specific to programming related questions. We also exclude all questions under the following language sub areas: Chinese, German, Spanish, Russian, Japanese, Korean, Latin, Ukrainian. This ensures that the questions in MQR are mostly English sentences. Having questions from 303 Stack Exchange sites makes the MQR dataset cover a broad range of domains."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We also exclude all questions under the following language sub areas: Chinese, German, Spanish, Russian, Japanese, Korean, Latin, Ukrainian. This ensures that the questions in MQR are mostly English sentences."
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"The English-German translation models are trained on WMT datasets, including News Commentary 13, Europarl v7, and Common Crawl, and evaluated on newstest2013 for early stopping. On the newstest2013 dev set, the En$\\rightarrow $De model reaches a BLEU-4 score of 19.6, and the De$\\rightarrow $En model reaches a BLEU-4 score of 24.6.",
"We use 303 sub areas from Stack Exchange data dumps. The full list of area names is in the appendix. We do not include Stack Overflow because it is too specific to programming related questions. We also exclude all questions under the following language sub areas: Chinese, German, Spanish, Russian, Japanese, Korean, Latin, Ukrainian. This ensures that the questions in MQR are mostly English sentences. Having questions from 303 Stack Exchange sites makes the MQR dataset cover a broad range of domains.",
"All well-formed questions in the pairs must start with “how”, “why”, “when”, “what”, “which”, “who”, “whose”, “do”, “where”, “does”, “is”, “are”, “must”, “may”, “need”, “did”, “was”, “were”, “can”, “has”, “have”, “are”. This step is performed to make sure the questions are explicit questions but not statements or commands.",
"To ensure there are no sentences written in non-English languages, we keep questions that contain 80% or more of valid English characters, including punctuation."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The English-German tra",
"We also exclude all questions under the following language sub areas: Chinese, German, Spanish, Russian, Japanese, Korean, Latin, Ukrainian. This ensures that the questions in MQR are mostly English sentences. Having questions from 303 Stack Exchange sites makes the MQR dataset cover a broad range of domains.",
"All well-formed questions in the pairs must start with “how”, “why”, “when”, “what”, “which”, “who”, “whose”, “do”, “where”, “does”, “is”, “are”, “must”, “may”, “need”, “did”, “was”, “were”, “can”, “has”, “have”, “are”. This step is performed to make sure the questions are explicit questions but not statements or commands.\n\nTo ensure there are no sentences written in non-English languages, we keep questions that contain 80% or more of valid English characters, including punctuation."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"1f8d01c2a04beb03d7dff669b4a6a46a32913429",
"83491cc935d98cfd6a0f0764078169cb6bfb0001"
],
"answer": [
{
"evidence": [
"To evaluate model performance, we apply our trained models to rewrite the ill-formed questions in TEST and treat the well-formed question in each pair as the reference sentence. We then compute BLEU-4 BIBREF28, ROUGE-1, ROUGE-2, ROUGE-L BIBREF29, and METEOR BIBREF30 scores. As a baseline, we also evaluate the original ill-formed question using the automatic metrics."
],
"extractive_spans": [
"evaluate the original ill-formed question using the automatic metrics"
],
"free_form_answer": "",
"highlighted_evidence": [
"As a baseline, we also evaluate the original ill-formed question using the automatic metrics."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To evaluate model performance, we apply our trained models to rewrite the ill-formed questions in TEST and treat the well-formed question in each pair as the reference sentence. We then compute BLEU-4 BIBREF28, ROUGE-1, ROUGE-2, ROUGE-L BIBREF29, and METEOR BIBREF30 scores. As a baseline, we also evaluate the original ill-formed question using the automatic metrics."
],
"extractive_spans": [
"we also evaluate the original ill-formed question using the automatic metrics"
],
"free_form_answer": "",
"highlighted_evidence": [
"To evaluate model performance, we apply our trained models to rewrite the ill-formed questions in TEST and treat the well-formed question in each pair as the reference sentence. We then compute BLEU-4 BIBREF28, ROUGE-1, ROUGE-2, ROUGE-L BIBREF29, and METEOR BIBREF30 scores. As a baseline, we also evaluate the original ill-formed question using the automatic metrics."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"b948e5137a301a44d1bc790da87452cdbfecea20",
"c560e9c7e98e1475a81d1d3a680a2be3d43f78e4"
],
"answer": [
{
"evidence": [
"To understand the quality of the question rewriting examples in the MQR dataset, we ask human annotators to judge the quality of the questions in the DEV and TEST splits (abbreviated as DEVTEST onward). Specifically, we take both ill-formed and well-formed questions in DEVTEST and ask human annotators to annotate the following three aspects regarding each question BIBREF0:",
"Is the question grammatically correct?",
"Is the spelling correct? Misuse of third person singular or past tense in verbs are considered grammatical errors instead of spelling errors. Missing question mark in the end of a question is also considered as spelling errors.",
"Is the question an explicit question, rather than a search query, a command, or a statement?",
"The annotators were asked to annotate each aspect with a binary (0/1) answer. Examples of questions provided to the annotators are in Table TABREF13. We consider all “How to” questions (“How to unlock GT90 in Gran Turismo 2?”) as grammatical. Although it is not a complete sentence, this kind of question is quite common in our dataset and therefore we choose to treat it as grammatically correct."
],
"extractive_spans": [
"Is the question grammatically correct?",
"Is the spelling correct?",
"Is the question an explicit question, rather than a search query, a command, or a statement?"
],
"free_form_answer": "",
"highlighted_evidence": [
"To understand the quality of the question rewriting examples in the MQR dataset, we ask human annotators to judge the quality of the questions in the DEV and TEST splits (abbreviated as DEVTEST onward). Specifically, we take both ill-formed and well-formed questions in DEVTEST and ask human annotators to annotate the following three aspects regarding each question BIBREF0:\n\nIs the question grammatically correct?\n\nIs the spelling correct? Misuse of third person singular or past tense in verbs are considered grammatical errors instead of spelling errors. Missing question mark in the end of a question is also considered as spelling errors.\n\nIs the question an explicit question, rather than a search query, a command, or a statement?\n\nThe annotators were asked to annotate each aspect with a binary (0/1) answer. Examples of questions provided to the annotators are in Table TABREF13. We consider all “How to” questions (“How to unlock GT90 in Gran Turismo 2?”) as grammatical. Although it is not a complete sentence, this kind of question is quite common in our dataset and therefore we choose to treat it as grammatically correct."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To understand the quality of the question rewriting examples in the MQR dataset, we ask human annotators to judge the quality of the questions in the DEV and TEST splits (abbreviated as DEVTEST onward). Specifically, we take both ill-formed and well-formed questions in DEVTEST and ask human annotators to annotate the following three aspects regarding each question BIBREF0:",
"Is the question grammatically correct?",
"Is the spelling correct? Misuse of third person singular or past tense in verbs are considered grammatical errors instead of spelling errors. Missing question mark in the end of a question is also considered as spelling errors.",
"Is the question an explicit question, rather than a search query, a command, or a statement?"
],
"extractive_spans": [
"Is the question grammatically correct?",
"Is the spelling correct?",
"Is the question an explicit question"
],
"free_form_answer": "",
"highlighted_evidence": [
"Specifically, we take both ill-formed and well-formed questions in DEVTEST and ask human annotators to annotate the following three aspects regarding each question BIBREF0:\n\nIs the question grammatically correct?\n\nIs the spelling correct? Misuse of third person singular or past tense in verbs are considered grammatical errors instead of spelling errors. Missing question mark in the end of a question is also considered as spelling errors.\n\nIs the question an explicit question, rather than a search query, a command, or a statement?"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"26cfc4c42d73fd17fdad5ce32a5d6436f5855816",
"8ef5468630bffb75ba35e73d73bc3fe6cf673069"
],
"answer": [
{
"evidence": [
"To understand the quality of the question rewriting examples in the MQR dataset, we ask human annotators to judge the quality of the questions in the DEV and TEST splits (abbreviated as DEVTEST onward). Specifically, we take both ill-formed and well-formed questions in DEVTEST and ask human annotators to annotate the following three aspects regarding each question BIBREF0:",
"Is the question grammatically correct?",
"Is the spelling correct? Misuse of third person singular or past tense in verbs are considered grammatical errors instead of spelling errors. Missing question mark in the end of a question is also considered as spelling errors.",
"Is the question an explicit question, rather than a search query, a command, or a statement?",
"The annotators were asked to annotate each aspect with a binary (0/1) answer. Examples of questions provided to the annotators are in Table TABREF13. We consider all “How to” questions (“How to unlock GT90 in Gran Turismo 2?”) as grammatical. Although it is not a complete sentence, this kind of question is quite common in our dataset and therefore we choose to treat it as grammatically correct."
],
"extractive_spans": [
"Is the question grammatically correct?",
"Is the spelling correct?",
"Is the question an explicit question",
" annotators were asked to annotate each aspect with a binary (0/1) answer"
],
"free_form_answer": "",
"highlighted_evidence": [
"Specifically, we take both ill-formed and well-formed questions in DEVTEST and ask human annotators to annotate the following three aspects regarding each question BIBREF0:\n\nIs the question grammatically correct?\n\nIs the spelling correct? Misuse of third person singular or past tense in verbs are considered grammatical errors instead of spelling errors. Missing question mark in the end of a question is also considered as spelling errors.\n\nIs the question an explicit question, rather than a search query, a command, or a statement?\n\nThe annotators were asked to annotate each aspect with a binary (0/1) answer."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The annotators were asked to annotate each aspect with a binary (0/1) answer. Examples of questions provided to the annotators are in Table TABREF13. We consider all “How to” questions (“How to unlock GT90 in Gran Turismo 2?”) as grammatical. Although it is not a complete sentence, this kind of question is quite common in our dataset and therefore we choose to treat it as grammatically correct.",
"FLOAT SELECTED: Table 2: Examples given to annotators for binary question quality scores."
],
"extractive_spans": [
"annotate each aspect with a binary (0/1) answer. Examples of questions provided to the annotators are in Table TABREF13"
],
"free_form_answer": "",
"highlighted_evidence": [
"The annotators were asked to annotate each aspect with a binary (0/1) answer. Examples of questions provided to the annotators are in Table TABREF13. We consider all “How to” questions (“How to unlock GT90 in Gran Turismo 2?”) as grammatical. Although it is not a complete sentence, this kind of question is quite common in our dataset and therefore we choose to treat it as grammatically correct.",
"FLOAT SELECTED: Table 2: Examples given to annotators for binary question quality scores."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"4d62d7c72bdec72a26fc77ebb60ec8037bda5137",
"95437b1835df53ed28b428ebca1ff6a6928f129d"
],
"answer": [
{
"evidence": [
"We use 303 sub areas from Stack Exchange data dumps. The full list of area names is in the appendix. We do not include Stack Overflow because it is too specific to programming related questions. We also exclude all questions under the following language sub areas: Chinese, German, Spanish, Russian, Japanese, Korean, Latin, Ukrainian. This ensures that the questions in MQR are mostly English sentences. Having questions from 303 Stack Exchange sites makes the MQR dataset cover a broad range of domains."
],
"extractive_spans": [
"sub areas from Stack Exchange data dumps"
],
"free_form_answer": "",
"highlighted_evidence": [
"We use 303 sub areas from Stack Exchange data dumps. The full list of area names is in the appendix. We do not include Stack Overflow because it is too specific to programming related questions. We also exclude all questions under the following language sub areas: Chinese, German, Spanish, Russian, Japanese, Korean, Latin, Ukrainian. This ensures that the questions in MQR are mostly English sentences. Having questions from 303 Stack Exchange sites makes the MQR dataset cover a broad range of domains."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"As the MQR dataset is constructed from 303 sub areas of the Stack Exchange networks, it covers a wide range of question domains. Table TABREF16 summarizes the number of categories in the TRAIN and DEVTEST portions of the MQR dataset, as well as the mean, standard deviation, minimum, and maximum number of instances per categories.",
"The most common categories in DEV and TEST are “diy”(295), “askubuntu”(288), “math”(250), “gaming”(189), and “physics”(140). The least common categories are mostly “Meta Stack Exchange” websites where people ask questions regarding the policies of posting questions on Stack Exchange sites. The most common categories in TRAIN are “askubuntu”(6237), “math”(5933), “gaming”(3938), “diy”(2791), and “2604”(scifi)."
],
"extractive_spans": [],
"free_form_answer": "The domains represent different subfields related to the topic of the questions. ",
"highlighted_evidence": [
"As the MQR dataset is constructed from 303 sub areas of the Stack Exchange networks, it covers a wide range of question domains. Table TABREF16 summarizes the number of categories in the TRAIN and DEVTEST portions of the MQR dataset, as well as the mean, standard deviation, minimum, and maximum number of instances per categories.",
"The most common categories in DEV and TEST are “diy”(295), “askubuntu”(288), “math”(250), “gaming”(189), and “physics”(140). The least common categories are mostly “Meta Stack Exchange” websites where people ask questions regarding the policies of posting questions on Stack Exchange sites. The most common categories in TRAIN are “askubuntu”(6237), “math”(5933), “gaming”(3938), “diy”(2791), and “2604”(scifi)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"",
"",
"",
"",
""
],
"question": [
"Do they report results only on English data?",
"What is the baseline method?",
"What aspects are used to judge question quality?",
"What did the human annotations consist of?",
"What characterizes the 303 domains? e.g. is this different subject tags?"
],
"question_id": [
"3d662fb442d5fc332194770aac835f401c2148d9",
"2280ed1e2b3e99921e2bca21231af43b58ca04f0",
"961a97149127e1123c94fbf7e2021eb1aa580ecb",
"1e4f45c956dfb40fadb8e10d4c1bfafa8968be4d",
"627ce8a1db08a732d5a8f7e1f8a72e3de89847e6"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
"",
""
]
} | {
"caption": [
"Table 1: Examples of pairs of ill-formed and well-formed questions from the MQR dataset.",
"Table 2: Examples given to annotators for binary question quality scores.",
"Table 3: Example question pairs given to annotators to judge semantic equivalence.",
"Table 4: Summary of manual annotations for instances sampled from the DEV and TEST portions of the MQR dataset. “Quality” are the average quality scores, broken down into three aspects. “Semantic Equivalence” is the percentage of question pairs in which the ill-formed and well-formed questions are semantically equivalent. The scores are averages of binary scores across both annotators.",
"Table 5: Statistics of question pairs (“instances”) from Stack Exchange categories in the MQR dataset.",
"Table 6: Results on MQR TEST set. The “Ill-formed” shows metric scores for the questions in TEST without rewriting. The next portion shows results for models trained on the TRAIN portion of MQR. The lower portion shows results for methods using other models and/or datasets.",
"Table 7: Results showing how additional training data affects performance for the transformer model.",
"Table 8: Methods combining transformer trained on MQR + Quora with GEC. “A→ B” means running method A followed by method B on method A’s output.",
"Table 9: Results of human evaluation of three models on 75 test examples.",
"Table 10: Examples of ill-formed question rewritten by models with human annotations. (S = Spelling, G = Grammar, and E = Explicit) The last column shows semantic equivalence with the well-formed questions."
],
"file": [
"2-Table1-1.png",
"3-Table2-1.png",
"3-Table3-1.png",
"4-Table4-1.png",
"4-Table5-1.png",
"5-Table6-1.png",
"5-Table7-1.png",
"6-Table8-1.png",
"6-Table9-1.png",
"7-Table10-1.png"
]
} | [
"What characterizes the 303 domains? e.g. is this different subject tags?"
] | [
[
"1911.09247-MQR Dataset Construction and Analysis ::: Dataset Domains-0",
"1911.09247-MQR Dataset Construction and Analysis-1",
"1911.09247-MQR Dataset Construction and Analysis ::: Dataset Domains-2"
]
] | [
"The domains represent different subfields related to the topic of the questions. "
] | 380 |
2003.12660 | Towards Supervised and Unsupervised Neural Machine Translation Baselines for Nigerian Pidgin | Nigerian Pidgin is arguably the most widely spoken language in Nigeria. Variants of this language are also spoken across West and Central Africa, making it a very important language. This work aims to establish supervised and unsupervised neural machine translation (NMT) baselines between English and Nigerian Pidgin. We implement and compare NMT models with different tokenization methods, creating a solid foundation for future works. | {
"paragraphs": [
[
"Over 500 languages are spoken in Nigeria, but Nigerian Pidgin is the uniting language in the country. Between three and five million people are estimated to use this language as a first language in performing their daily activities. Nigerian Pidgin is also considered a second language to up to 75 million people in Nigeria, accounting for about half of the country's population according to BIBREF0.",
"The language is considered an informal lingua franca and offers several benefits to the country. In 2020, 65% of Nigeria's population is estimated to have access to the internet according to BIBREF1. However, over 58.4% of the internet's content is in English language, while Nigerian languages, such as Igbo, Yoruba and Hausa, account for less than 0.1% of internet content according to BIBREF2. For Nigerians to truly harness the advantages the internet offers, it is imperative that English content is able to be translated to Nigerian languages, and vice versa.",
"This work is a first attempt towards using contemporary neural machine translation (NMT) techniques to perform machine translation for Nigerian Pidgin, establishing solid baselines that will ease and spur future work. We evaluate the performance of supervised and unsupervised neural machine translation models using word-level and the subword-level tokenization of BIBREF3."
],
[
"Some work has been done on developing neural machine translation baselines for African languages. BIBREF4 implemented a transformer model which significantly outperformed existing statistical machine translation architectures from English to South-African Setswana. Also, BIBREF5 went further, to train neural machine translation models from English to five South African languages using two different architectures - convolutional sequence-to-sequence and transformer. Their results showed that neural machine translation models are very promising for African languages.",
"The only known natural language processing work done on any variant of Pidgin English is by BIBREF6. The authors provided the largest known Nigerian Pidgin English corpus and trained the first ever translation models between both languages via unsupervised neural machine translation due to the absence of parallel training data at the time."
],
[
"All baseline models were trained using the Transformer architecture of BIBREF7. We experiment with both word-level and Byte Pair Encoding (BPE) subword-level tokenization methods for the supervised models. We learned 4000 byte pair encoding tokens, following the findings of BIBREF5. For the unuspervised model, we experiment with only word-level tokenization."
],
[
"The dataset used for the supervised was obtained from the JW300 large-scale, parallel corpus for Machine Translation (MT) by BIBREF8. The train set contained 20214 sentence pairs, while the validation contained 1000 sentence pairs. Both the supervised and unsupervised models were evaluated on a test set of 2101 sentences preprocessed by the Masakhane group. The model with the highest test BLEU score is selected as the best."
],
[
"Unsupervised model training followed BIBREF6 which used a Transformer of 4 encoder and 4 decoder layers with 10 attention heads. Embedding dimension was set to 300.",
"Supervised model training was performed with the open-source machine translation toolkit JoeyNMT by BIBREF9. For the byte pair encoding, embedding dimension was set to 256, while the embedding dimension was set to 300 for the word-level tokenization. The Transformer used for the byte pair encoding model had 6 encoder and 6 decoder layers, with 4 attention heads. For word-level, the encoder and decoder each had 4 layers with 10 attention heads for fair comparison to the unsupervised model. The models were each trained for 200 epochs on an Amazon EC2 p3.2xlarge instance."
],
[
"English to Pidgin:",
"Pidgin to English:",
"For the word-level tokenization English to Pidgin models, the supervised model outperforms the unsupervised model, achieving a BLEU score of 17.73 in comparison to the BLEU score of 5.18 achieved by the unsupervised model. The supervised model trained with byte pair encoding tokenization outperforms both word-level tokenization models, achieving a BLEU score of 24.29.",
"Taking a look at the results from the word-level tokenization Pidgin to English models, the supervised model outperforms the unsupervised model, achieving a BLEU score of 24.67 in comparison to the BLEU score of 7.93 achieved by the unsupervised model. The supervised model trained with byte pair encoding tokenization achieved a BLEU score of 13.00. One thing that is worthy of note is that word-level tokenization methods seem to perform better on Pidgin to English translation models, in comparison to English to Pidgin translation models."
],
[
"When analyzed by L1 speakers, the translation qualities were rated very well. In particular, the unsupervised model makes many translations that did not exactly match the reference translation, but conveyed the same meaning. More analysis and translation examples are in the Appendix."
],
[
"There is an increasing need to use neural machine translation techniques for African languages. Due to the low-resourced nature of these languages, these techniques can help build useful translation models that could hopefully help with the preservation and discoverability of these languages.",
"Future works include establishing qualitative metrics and the use of pre-trained models to bolster these translation models.",
"Code, data, trained models and result translations are available here - https://github.com/orevaoghene/pidgin-baseline"
],
[
"Special thanks to the Masakhane group for catalysing this work."
],
[
"Unsupervised (Word-Level):",
"Supervised (Word-Level):",
"Supervised (Byte Pair Encoding):"
],
[
"The following insights can be drawn from the example translations shown in the tables above:",
"The unsupervised model performed poorly at some simple translation examples, such as the first translation example.",
"For all translation models, the model makes hypothesis that are grammatically and qualitatively correct, but do not exactly match the reference translation, such as the second translation example.",
"Surprisingly, the unsupervised model performs better at some relatively simple translation examples than both supervised models. The third example is a typical such case.",
"The supervised translation models seem to perform better at longer example translations than the unsupervised example."
],
[
"Unsupervised (Word-Level):",
"Supervised (Word-Level):",
"Supervised (Byte Pair Encoding):"
]
],
"section_name": [
"Introduction",
"Related Work",
"Methodology",
"Methodology ::: Dataset",
"Methodology ::: Models",
"Results ::: Quantitative",
"Results ::: Qualitative",
"Conclusion",
"Conclusion ::: Acknowledgments",
"Appendix ::: English to Pidgin translations",
"Appendix ::: English to Pidgin translations ::: Discussions:",
"Appendix ::: Pidgin to English translations"
]
} | {
"answers": [
{
"annotation_id": [
"6a63288879b719c863f850192f040985095ea9e7",
"9c549e2d144726d2318a45d693b6e894210b5ae2"
],
"answer": [
{
"evidence": [
"The dataset used for the supervised was obtained from the JW300 large-scale, parallel corpus for Machine Translation (MT) by BIBREF8. The train set contained 20214 sentence pairs, while the validation contained 1000 sentence pairs. Both the supervised and unsupervised models were evaluated on a test set of 2101 sentences preprocessed by the Masakhane group. The model with the highest test BLEU score is selected as the best."
],
"extractive_spans": [],
"free_form_answer": "21214",
"highlighted_evidence": [
"The train set contained 20214 sentence pairs, while the validation contained 1000 sentence pairs."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The dataset used for the supervised was obtained from the JW300 large-scale, parallel corpus for Machine Translation (MT) by BIBREF8. The train set contained 20214 sentence pairs, while the validation contained 1000 sentence pairs. Both the supervised and unsupervised models were evaluated on a test set of 2101 sentences preprocessed by the Masakhane group. The model with the highest test BLEU score is selected as the best."
],
"extractive_spans": [],
"free_form_answer": "Data used has total of 23315 sentences.",
"highlighted_evidence": [
"The train set contained 20214 sentence pairs, while the validation contained 1000 sentence pairs. Both the supervised and unsupervised models were evaluated on a test set of 2101 sentences preprocessed by the Masakhane group."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"40297907b432e35aa52922fc60ef1ce1f28d982e",
"a76c455b4af2629440506e8528f9afd5d4f6a248"
],
"answer": [
{
"evidence": [
"The dataset used for the supervised was obtained from the JW300 large-scale, parallel corpus for Machine Translation (MT) by BIBREF8. The train set contained 20214 sentence pairs, while the validation contained 1000 sentence pairs. Both the supervised and unsupervised models were evaluated on a test set of 2101 sentences preprocessed by the Masakhane group. The model with the highest test BLEU score is selected as the best."
],
"extractive_spans": [
"BLEU score"
],
"free_form_answer": "",
"highlighted_evidence": [
"The model with the highest test BLEU score is selected as the best."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The dataset used for the supervised was obtained from the JW300 large-scale, parallel corpus for Machine Translation (MT) by BIBREF8. The train set contained 20214 sentence pairs, while the validation contained 1000 sentence pairs. Both the supervised and unsupervised models were evaluated on a test set of 2101 sentences preprocessed by the Masakhane group. The model with the highest test BLEU score is selected as the best."
],
"extractive_spans": [
"BLEU"
],
"free_form_answer": "",
"highlighted_evidence": [
"The model with the highest test BLEU score is selected as the best."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"9bd0ea633a3afa7a8dcd62ca03bc04e65a0df23e",
"b6452a3321f9abdcf16e56ccbef4372253866125"
],
"answer": [
{
"evidence": [
"For the word-level tokenization English to Pidgin models, the supervised model outperforms the unsupervised model, achieving a BLEU score of 17.73 in comparison to the BLEU score of 5.18 achieved by the unsupervised model. The supervised model trained with byte pair encoding tokenization outperforms both word-level tokenization models, achieving a BLEU score of 24.29.",
"Taking a look at the results from the word-level tokenization Pidgin to English models, the supervised model outperforms the unsupervised model, achieving a BLEU score of 24.67 in comparison to the BLEU score of 7.93 achieved by the unsupervised model. The supervised model trained with byte pair encoding tokenization achieved a BLEU score of 13.00. One thing that is worthy of note is that word-level tokenization methods seem to perform better on Pidgin to English translation models, in comparison to English to Pidgin translation models."
],
"extractive_spans": [],
"free_form_answer": "A supervised model with byte pair encoding was the best for English to Pidgin, while a supervised model with word-level encoding was the best for Pidgin to English.",
"highlighted_evidence": [
"For the word-level tokenization English to Pidgin models, the supervised model outperforms the unsupervised model, achieving a BLEU score of 17.73 in comparison to the BLEU score of 5.18 achieved by the unsupervised model. The supervised model trained with byte pair encoding tokenization outperforms both word-level tokenization models, achieving a BLEU score of 24.29.",
"Taking a look at the results from the word-level tokenization Pidgin to English models, the supervised model outperforms the unsupervised model, achieving a BLEU score of 24.67 in comparison to the BLEU score of 7.93 achieved by the unsupervised model. The supervised model trained with byte pair encoding tokenization achieved a BLEU score of 13.00. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Taking a look at the results from the word-level tokenization Pidgin to English models, the supervised model outperforms the unsupervised model, achieving a BLEU score of 24.67 in comparison to the BLEU score of 7.93 achieved by the unsupervised model. The supervised model trained with byte pair encoding tokenization achieved a BLEU score of 13.00. One thing that is worthy of note is that word-level tokenization methods seem to perform better on Pidgin to English translation models, in comparison to English to Pidgin translation models.",
"For the word-level tokenization English to Pidgin models, the supervised model outperforms the unsupervised model, achieving a BLEU score of 17.73 in comparison to the BLEU score of 5.18 achieved by the unsupervised model. The supervised model trained with byte pair encoding tokenization outperforms both word-level tokenization models, achieving a BLEU score of 24.29."
],
"extractive_spans": [],
"free_form_answer": "In English to Pidgin best was byte pair encoding tokenization superised model, while in Pidgin to English word-level tokenization supervised model was the best.",
"highlighted_evidence": [
"Taking a look at the results from the word-level tokenization Pidgin to English models, the supervised model outperforms the unsupervised model, achieving a BLEU score of 24.67 in comparison to the BLEU score of 7.93 achieved by the unsupervised model. The supervised model trained with byte pair encoding tokenization achieved a BLEU score of 13.00.",
"For the word-level tokenization English to Pidgin models, the supervised model outperforms the unsupervised model, achieving a BLEU score of 17.73 in comparison to the BLEU score of 5.18 achieved by the unsupervised model. The supervised model trained with byte pair encoding tokenization outperforms both word-level tokenization models, achieving a BLEU score of 24.29."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"7cd526e05fc6a336c4017b8d7a06511f374ea456",
"925a62d66543bfc134e72a7ee18454bf69e4dcc5"
],
"answer": [
{
"evidence": [
"This work is a first attempt towards using contemporary neural machine translation (NMT) techniques to perform machine translation for Nigerian Pidgin, establishing solid baselines that will ease and spur future work. We evaluate the performance of supervised and unsupervised neural machine translation models using word-level and the subword-level tokenization of BIBREF3."
],
"extractive_spans": [
"word-level ",
"subword-level"
],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluate the performance of supervised and unsupervised neural machine translation models using word-level and the subword-level tokenization of BIBREF3."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"All baseline models were trained using the Transformer architecture of BIBREF7. We experiment with both word-level and Byte Pair Encoding (BPE) subword-level tokenization methods for the supervised models. We learned 4000 byte pair encoding tokens, following the findings of BIBREF5. For the unuspervised model, we experiment with only word-level tokenization."
],
"extractive_spans": [
"word-level",
"Byte Pair Encoding (BPE) subword-level"
],
"free_form_answer": "",
"highlighted_evidence": [
"We experiment with both word-level and Byte Pair Encoding (BPE) subword-level tokenization methods for the supervised models."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"1f9487a46527cf4867f9bcd8dddaa52177996b21",
"dc6d336489951074d403c11886fc8afea40d8c81"
],
"answer": [
{
"evidence": [
"All baseline models were trained using the Transformer architecture of BIBREF7. We experiment with both word-level and Byte Pair Encoding (BPE) subword-level tokenization methods for the supervised models. We learned 4000 byte pair encoding tokens, following the findings of BIBREF5. For the unuspervised model, we experiment with only word-level tokenization."
],
"extractive_spans": [
"Transformer architecture of BIBREF7"
],
"free_form_answer": "",
"highlighted_evidence": [
"All baseline models were trained using the Transformer architecture of BIBREF7. We experiment with both word-level and Byte Pair Encoding (BPE) subword-level tokenization methods for the supervised models."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The supervised translation models seem to perform better at longer example translations than the unsupervised example."
],
"extractive_spans": [
"supervised translation models"
],
"free_form_answer": "",
"highlighted_evidence": [
"The supervised translation models seem to perform better at longer example translations than the unsupervised example."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"",
"",
"",
"",
""
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"How long is their dataset?",
"What metrics are used?",
"What is the best performing system?",
"What tokenization methods are used?",
"What baselines do they propose?"
],
"question_id": [
"80bb07e553449bde9ac0ff35fcc718d7c161f2d4",
"c8f8ecac23a991bceb8387e68b3b3f2a5d8cf029",
"28847b20ca63dc56f2545e6f6ec3082d9dbe1b3f",
"2d5d0b0c54105717bf48559b914fefd0c94964a6",
"dd81f58c782169886235c48b8f9a08e0954dd3ae"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Table 3: Unsupervised (Word-Level) Results from English to Nigerian Pidgin",
"Table 4: Supervised (Word-Level) Results from English to Nigerian Pidgin",
"Table 5: Supervised (Byte Pair Encoding) Results from English to Nigerian Pidgin",
"Table 6: Unsupervised (Word-Level) Results from English to Nigerian Pidgin",
"Table 7: Supervised (Word-Level) Results from English to Nigerian Pidgin",
"Table 8: Supervised (Byte Pair Encoding) Results from English to Nigerian Pidgin"
],
"file": [
"5-Table3-1.png",
"5-Table4-1.png",
"5-Table5-1.png",
"6-Table6-1.png",
"6-Table7-1.png",
"6-Table8-1.png"
]
} | [
"How long is their dataset?",
"What is the best performing system?"
] | [
[
"2003.12660-Methodology ::: Dataset-0"
],
[
"2003.12660-Results ::: Quantitative-2",
"2003.12660-Results ::: Quantitative-3"
]
] | [
"Data used has total of 23315 sentences.",
"In English to Pidgin best was byte pair encoding tokenization superised model, while in Pidgin to English word-level tokenization supervised model was the best."
] | 381 |
1610.00479 | Nonsymbolic Text Representation | We introduce the first generic text representation model that is completely nonsymbolic, i.e., it does not require the availability of a segmentation or tokenization method that attempts to identify words or other symbolic units in text. This applies to training the parameters of the model on a training corpus as well as to applying it when computing the representation of a new text. We show that our model performs better than prior work on an information extraction and a text denoising task. | {
"paragraphs": [
[
"Character-level models can be grouped into three classes. (i) End-to-end models learn a separate model on the raw character (or byte) input for each task; these models estimate task-specific parameters, but no representation of text that would be usable across tasks is computed. Throughout this paper, we refer to INLINEFORM0 as the “representation” of INLINEFORM1 only if INLINEFORM2 is a generic rendering of INLINEFORM3 that can be used in a general way, e.g., across tasks and domains. The activation pattern of a hidden layer for a given input sentence in a multilayer perceptron (MLP) is not a representation according to this definition if it is not used outside of the MLP. (ii) Character-level models of words derive a representation of a word INLINEFORM4 from the character string of INLINEFORM5 , but they are symbolic in that they need text segmented into tokens as input. (iii) Bag-of-character-ngram models, bag-of-ngram models for short, use character ngrams to encode sequence-of-character information, but sequence-of-ngram information is lost in the representations they produce.[0]A short version of this paper appears as BIBREF0 .",
"Our premise is that text representations are needed in NLP. A large body of work on word embeddings demonstrates that a generic text representation, trained in an unsupervised fashion on large corpora, is useful. Thus, we take the view that group (i) models, end-to-end learning without any representation learning, is not a good general approach for NLP.",
"We distinguish training and utilization of the text representation model. We use “training” to refer to the method by which the model is learned and “utilization” to refer to the application of the model to a piece of text to compute a representation of the text. In many text representation models, utilization is trivial. For example, for word embedding models, utilization amounts to a simple lookup of a word to get its precomputed embedding. However, for the models we consider, utilization is not trivial and we will discuss different approaches.",
"Both training and utilization can be either symbolic or nonsymbolic. We define a symbolic approach as one that is based on tokenization, i.e., a segmentation of the text into tokens. Symbol identifiers (i.e., tokens) can have internal structure – a tokenizer may recognize tokens like “to and fro” and “London-based” that contain delimiters – and may be morphologically analyzed downstream.",
"We define a nonsymbolic approach as one that is tokenization-free, i.e., no assumption is made that there are segmentation boundaries and that each segment (e.g., a word) should be represented (e.g., by a word embedding) in a way that is independent of the representations (e.g., word embeddings) of neighboring segments. Methods for training text representation models that require tokenized text include word embedding models like word2vec BIBREF1 and most group (ii) methods, i.e., character-level models like fastText skipgram BIBREF2 .",
"Bag-of-ngram models, group (iii) models, are text representation utilization models that typically compute the representation of a text as the sum of the embeddings of all character ngrams occurring in it, e.g., WordSpace BIBREF3 and CHARAGRAM BIBREF4 . WordSpace and CHARAGRAM are examples of mixed training-utilization models: training is performed on tokenized text (words and phrases), utilization is nonsymbolic.",
"We make two contributions in this paper. (i) We propose the first generic method for training text representation models without the need for tokenization and address the challenging sparseness issues that make this difficult. (ii) We propose the first nonsymbolic utilization method that fully represents sequence information – in contrast to utilization methods like bag-of-ngrams that discard sequence information that is not directly encoded in the character ngrams themselves."
],
[
"chung16characternmt give two motivations for their work on character-level models. First, tokenization (or, equivalently, segmentation) algorithms make many mistakes and are brittle: “we do not have a perfect word segmentation algorithm for any one language”. Tokenization errors then propagate throughout the NLP pipeline.",
"Second, there is currently no general solution for morphology in statistical NLP. For many languages, high-coverage and high-quality morphological resources are not available. Even for well resourced languages, problems like ambiguity make morphological processing difficult; e.g., “rung” is either the singular of a noun meaning “part of a ladder” or the past participle of “to ring”. In many languages, e.g., in German, syncretism, a particular type of systematic morphological ambiguity, is pervasive. Thus, there is no simple morphological processing method that would produce a representation in which all inflected forms of “to ring” are marked as having a common lemma; and no such method in which an unseen form like “aromatizing” is reliably analyzed as a form of “aromatize” whereas an unseen form like “antitrafficking” is reliably analyzed as the compound “anti+trafficking”.",
"Of course, it is an open question whether nonsymbolic methods can perform better than morphological analysis, but the foregoing discussion motivates us to investigate them.",
"chung16characternmt focus on problems with the tokens produced by segmentation algorithms. Equally important is the problem that tokenization fails to capture structure across multiple tokens. The job of dealing with cross-token structure is often given to downstream components of the pipeline, e.g., components that recognize multiwords and named entitites in English or in fact any word in a language like Chinese that uses no overt delimiters. However, there is no linguistic or computational reason in principle why we should treat the recognition of a unit like “electromechanical” (containing no space) as fundamentally different from the recognition of a unit like “electrical engineering” (containing a space). Character-level models offer the potential of uniform treatment of such linguistic units."
],
[
"Many text representation learning algorithms can be understood as estimating the parameters of the model from a unit-context matrix INLINEFORM0 where each row corresponds to a unit INLINEFORM1 , each column to a context INLINEFORM2 and each cell INLINEFORM3 measures the degree of association between INLINEFORM4 and INLINEFORM5 . For example, the skipgram model is closely related to an SVD factorization of a pointwise mutual information matrix BIBREF5 ; in this case, both units and contexts are words. Many text representation learning algorithms are formalized as matrix factorization (e.g., BIBREF6 , BIBREF7 , BIBREF8 ), but there may be no big difference between implicit (e.g., BIBREF9 ) and explicit factorization methods; see also BIBREF10 , BIBREF11 .",
"Our goal in this paper is not to develop new matrix factorization methods. Instead, we will focus on defining the unit-context matrix in such a way that no symbolic assumption has to be made. This unit-context matrix can then be processed by any existing or still to be invented algorithm.",
"Definition of units and contexts. How to define units and contexts without relying on segmentation boundaries? In initial experiments, we simply generated all character ngrams of length up to INLINEFORM0 (where INLINEFORM1 is a parameter), including character ngrams that cross token boundaries; i.e., no segmentation is needed. We then used a skipgram-type objective for learning embeddings that attempts to predict, from ngram INLINEFORM2 , an ngram INLINEFORM3 in INLINEFORM4 's context. Results were poor because many training instances consist of pairs INLINEFORM5 in which INLINEFORM6 and INLINEFORM7 overlap, e.g., one is a subsequence of the other. So the objective encourages trivial predictions of ngrams that have high string similarity with the input and nothing interesting is learned.",
"In this paper, we propose an alternative way of defining units and contexts that supports well-performing nonsymbolic text representation learning: multiple random segmentation. A pointer moves through the training corpus. The current position INLINEFORM0 of the pointer defines the left boundary of the next segment. The length INLINEFORM1 of the next move is uniformly sampled from INLINEFORM2 where INLINEFORM3 and INLINEFORM4 are the minimum and maximum segment lengths. The right boundary of the segment is then INLINEFORM5 . Thus, the segment just generated is INLINEFORM6 , the subsequence of the corpus between (and including) positions INLINEFORM7 and INLINEFORM8 . The pointer is positioned at INLINEFORM9 , the next segment is sampled and so on. An example of a random segmentation from our experiments is “@he@had@b egu n@to@show @his@cap acity@f” where space was replaced with “@” and the next segment starts with “or@”.",
"The corpus is segmented this way INLINEFORM0 times (where INLINEFORM1 is a parameter) and the INLINEFORM2 random segmentations are concatenated. The unit-context matrix is derived from this concatenated corpus.",
"Multiple random segmentation has two advantages. First, there is no redundancy since, in any given random segmentation, two ngrams do not overlap and are not subsequences of each other. Second, a single random segmentation would only cover a small part of the space of possible ngrams. For example, a random segmentation of “a rose is a rose is a rose” might be “[a ros][e is a ros][e is][a rose]”. This segmentation does not contain the segment “rose” and this part of the corpus can then not be exploited to learn a good embedding for the fourgram “rose”. However, with multiple random segmentation, it is likely that this part of the corpus does give rise to the segment “rose” in one of the segmentations and can contribute information to learning a good embedding for “rose”.",
"We took the idea of random segmentation from work on biological sequences BIBREF12 , BIBREF13 . Such sequences have no delimiters, so they are a good model if one believes that delimiter-based segmentation is problematic for text.",
"The main text representation model that is based on ngram embeddings similar to ours is the bag-of-ngram model. A sequence of characters is represented by a single vector that is computed as the sum of the embeddings of all ngrams that occur in the sequence. In fact, this is what we did in the entity typing experiment. In most work on bag-of-ngram models, the sequences considered are words or phrases. In a few cases, the model is applied to longer sequences, including sentences and documents; e.g., BIBREF3 , BIBREF4 .",
"The basic assumption of the bag-of-ngram model is that sequence information is encoded in the character ngrams and therefore a “bag-of” approach (which usually throws away all sequence information) is sufficient. The assumption is not implausible: for most bags of character sequences, there is only a single way of stitching them together to one coherent sequence, so in that case information is not necessarily lost (although this is likely when embeddings are added). But the assumption has not been tested experimentally.",
"Here, we propose position embeddings, character-ngram-based embeddings that more fully preserve sequence information. The simple idea is to represent each position as the sum of all ngrams that contain that position. When we set INLINEFORM0 , INLINEFORM1 , this means that the position is the sum of INLINEFORM2 ngram embeddings (if all of these ngrams have embeddings, which generally will be true for some, but not for most positions). A sequence of INLINEFORM3 characters is then represented as a sequence of INLINEFORM4 such position embeddings."
],
[
"Form-meaning homomorphism premise. Nonsymbolic representation learning does not preprocess the training corpus by means of tokenization and considers many ngrams that would be ignored in tokenized approaches because they span token boundaries. As a result, the number of ngrams that occur in a corpus is an order of magnitude larger for tokenization-free approaches than for tokenization-based approaches. See supplementary for details.",
"We will see below that this sparseness impacts performance of nonsymbolic text representation negatively. We address sparseness by defining ngram equivalence classes. All ngrams in an equivalence class receive the same embedding.",
"The relationship between form and meaning is mostly arbitrary, but there are substructures of the ngram space and the embedding space that are systematically related by homomorphism. In this paper, we will assume the following homomorphism: INLINEFORM0 ",
"where INLINEFORM0 iff INLINEFORM1 for string transduction INLINEFORM2 and INLINEFORM3 iff INLINEFORM4 .",
"As a simple example consider a transduction INLINEFORM0 that deletes spaces at the beginning of ngrams, e.g., INLINEFORM1 . This is an example of a meaning-preserving INLINEFORM2 since for, say, English, INLINEFORM3 will not change meaning. We will propose a procedure for learning INLINEFORM4 below.",
"We define INLINEFORM0 as “closeness” – not as identity – because of estimation noise when embeddings are learned. We assume that there are no true synonyms and therefore the direction INLINEFORM1 also holds. For example, “car” and “automobile” are considered synonyms, but we assume that their embeddings are different because only “car” has the literary sense “chariot”. If they were identical, then the homomorphism would not hold since “car” and “automobile” cannot be converted into each other by any plausible meaning-preserving INLINEFORM2 .",
"Learning procedure. To learn INLINEFORM0 , we define three templates that transform one ngram into another: (i) replace character INLINEFORM1 with character INLINEFORM2 , (ii) delete character INLINEFORM3 if its immediate predecessor is character INLINEFORM4 , (iii) delete character INLINEFORM5 if its immediate successor is character INLINEFORM6 . The learning procedure takes a set of ngrams and their embeddings as input. It then exhaustively searches for all pairs of ngrams, for all pairs of characters INLINEFORM7 / INLINEFORM8 , for each of the three templates. (This takes about 10 hours on a multicore server.) When two matching embeddings exist, we compute their cosine. For example, for the operation “delete space before M”, an ngram pair from our embeddings that matches is “@Mercedes” / “Mercedes” and we compute its cosine. As the characteristic statistic of an operation we take the average of all cosines; e.g., for “delete space before M” the average cosine is .7435. We then rank operations according to average cosine and take the first INLINEFORM9 as the definition of INLINEFORM10 where INLINEFORM11 is a parameter. For characters that are replaced by each other (e.g., 1, 2, 3 in Table TABREF7 ), we compute the equivalence class and then replace the learned operations with ones that replace a character by the canonical member of its equivalence class (e.g., 2 INLINEFORM12 1, 3 INLINEFORM13 1).",
"Permutation premise. Tokenization algorithms can be thought of as assigning a particular function or semantics to each character and making tokenization decisions accordingly; e.g., they may disallow that a semicolon, the character “;”, occurs inside a token. If we want to learn representations from the data without imposing such hard constraints, then characters should not have any particular function or semantics. A consequence of this desideratum is that if any two characters are exchanged for each other, this should not affect the representations that are learned. For example, if we interchange space and “A” throughout a corpus, then this should have no effect on learning: what was the representation of “NATO” before, should now be the representation of “N TO”. We can also think of this type of permutation as a sanity check: it ensures we do not inadvertantly make use of text preprocessing heuristics that are pervasive in NLP.",
"Let INLINEFORM0 be the alphabet of a language, i.e., its set of characters, INLINEFORM1 a permutation on INLINEFORM2 , INLINEFORM3 a corpus and INLINEFORM4 the corpus permuted by INLINEFORM5 . For example, if INLINEFORM6 , then all “a” in INLINEFORM7 are replaced with “e” in INLINEFORM8 . The learning procedure should learn identical equivalence classes on INLINEFORM9 and INLINEFORM10 . So, if INLINEFORM11 after running the learning procedure on INLINEFORM12 , then INLINEFORM13 after running the learning procedure on INLINEFORM14 .",
"This premise is motivated by our desire to come up with a general method that does not rely on specific properties of a language or genre; e.g., the premise rules out exploiting the fact through feature engineering that in many languages and genres, “c” and “C” are related. Such a relationship has to be learned from the data."
],
[
"We run experiments on INLINEFORM0 , a 3 gigabyte English Wikipedia corpus, and train word2vec skipgram (W2V, BIBREF1 ) and fastText skipgram (FTX, BIBREF2 ) models on INLINEFORM1 and its derivatives. We randomly generate a permutation INLINEFORM2 on the alphabet and learn a transduction INLINEFORM3 (details below). In Table TABREF8 (left), the columns “method”, INLINEFORM4 and INLINEFORM5 indicate the method used (W2V or FTX) and whether experiments in a row were run on INLINEFORM6 , INLINEFORM7 or INLINEFORM8 . The values of “whitespace” are: (i) ORIGINAL (whitespace as in the original), (ii) SUBSTITUTE (what INLINEFORM9 outputs as whitespace is used as whitespace, i.e., INLINEFORM10 becomes the new whitespace) and (iii) RANDOM (random segmentation with parameters INLINEFORM11 , INLINEFORM12 , INLINEFORM13 ). Before random segmentation, whitespace is replaced with “@” – this character occurs rarely in INLINEFORM14 , so that the effect of conflating two characters (original “@” and whitespace) can be neglected. The random segmenter then indicates boundaries by whitespace – unambiguously since it is applied to text that contains no whitespace.",
"We learn INLINEFORM0 on the embeddings learned by W2V on the random segmentation version of INLINEFORM1 (C-RANDOM in the table) as described in § SECREF4 for INLINEFORM2 . Since the number of equivalence classes is much smaller than the number of ngrams, INLINEFORM3 reduces the number of distinct character ngrams from 758M in the random segmentation version of INLINEFORM4 (C/D-RANDOM) to 96M in the random segmentation version of INLINEFORM5 (E/F-RANDOM).",
"Table TABREF7 shows a selection of the INLINEFORM0 operations. Throughout the paper, if we give examples from INLINEFORM1 or INLINEFORM2 as we do here, we convert characters back to the original for better readability. The two uppercase/lowercase conversions shown in the table (E INLINEFORM3 e, C INLINEFORM4 c) were the only ones that were learned (we had hoped for more). The postdeletion rule ml INLINEFORM5 m usefully rewrites “html” as “htm”, but is likely to do more harm than good. We inspected all 200 rules and, with a few exceptions like ml INLINEFORM6 m, they looked good to us.",
"Evaluation. We evaluate the three models on an entity typing task, similar to BIBREF14 , but based on an entity dataset released by xie16entitydesc2 in which each entity has been assigned one or more types from a set of 50 types. For example, the entity “Harrison Ford” has the types “actor”, “celebrity” and “award winner” among others. We extract mentions from FACC (http://lemurproject.org/clueweb12/FACC1) if an entity has a mention there or we use the Freebase name as the mention otherwise. This gives us a data set of 54,334, 6085 and 6747 mentions in train, dev and test, respectively. Each mention is annotated with the types that its entity has been assigned by xie16entitydesc2. The evaluation has a strong cross-domain aspect because of differences between FACC and Wikipedia, the training corpus for our representations. For example, of the 525 mentions in dev that have a length of at least 5 and do not contain lowercase characters, more than half have 0 or 1 occurrences in the Wikipedia corpus, including many like “JOHNNY CARSON” that are frequent in other case variants.",
"Since our goal in this experiment is to evaluate tokenization-free learning, not tokenization-free utilization, we use a simple utilization baseline, the bag-of-ngram model (see § SECREF1 ). A mention is represented as the sum of all character ngrams that embeddings were learned for. Linear SVMs BIBREF15 are then trained, one for each of the 50 types, on train and applied to dev and test. Our evaluation measure is micro INLINEFORM0 on all typing decisions; e.g., one typing decision is: “Harrison Ford” is a mention of type “actor”. We tune thresholds on dev to optimize INLINEFORM1 and then use these thresholds on test.",
"We again use the embeddings corresponding to A-RANDOM in Table TABREF8 . We randomly selected 2,000,000 contexts of size 40 characters from Wikipedia. We then created a noise context for each of the 2,000,000 contexts by replacing one character at position i ( INLINEFORM0 , uniformly sampled) with space (probability INLINEFORM1 ) or a random character otherwise. Finally, we selected 1000 noise contexts randomly and computed their nearest neighbors among the 4,000,000 contexts (excluding the noise query). We did this in two different conditions: for a bag-of-ngram representation of the context (sum of all character ngrams) and for the concatenation of 11 position embeddings, those between 15 and 25. Our evaluation measure is mean reciprocal rank of the clean context corresponding to the noise context. This simulates a text denoising experiment: if the clean context has rank 1, then the noisy context can be corrected.",
"Table TABREF15 shows that sequence-preserving position embeddings perform better than bag-of-ngram representations.",
"Table TABREF16 shows an example of a context in which position embeddings did better than bag-of-ngrams, demonstrating that sequence information is lost by bag-of-ngram representations, in this case the exact position of “Seahawks”.",
"Table TABREF12 gives further intuition about the type of information position embeddings contain, showing the ngram embeddings closest to selected position embeddings; e.g., “estseller” (the first 9-gram on the line numbered 3 in the table) is closest to the embedding of position 3 (corresponding to the first “s” of “best-selling”). The kNN search space is restricted to alphanumeric ngrams."
],
[
"Results are presented in Table TABREF8 (left). Overall performance of FTX is higher than W2V in all cases. For ORIGINAL, FTX's recall is a lot higher than W2V's whereas precision decreases slightly. This indicates that FTX is stronger in both learning and application: in learning it can generalize better from sparse training data and in application it can produce representations for OOVs and better representations for rare words. For English, prefixes, suffixes and stems are of particular importance, but there often is not a neat correspondence between these traditional linguistic concepts and internal FTX representations; e.g., bojanowski17enriching show that “asphal”, “sphalt” and “phalt” are informative character ngrams of “asphaltic”.",
"Running W2V on random segmentations can be viewed as an alternative to the learning mechanism of FTX, which is based on character ngram cooccurrence; so it is not surprising that for RANDOM, FTX has only a small advantage over W2V.",
"For C/D-SUBSTITUTE, we see a dramatic loss in performance if tokenization heuristics are not used. This is not surprising, but shows how powerful tokenization can be.",
"C/D-ORIGINAL is like C/D-SUBSTITUTE except that we artificially restored the space – so the permutation INLINEFORM0 is applied to all characters except for space. By comparing C/D-ORIGINAL and C/D-SUBSTITUTE, we see that the space is the most important text preprocessing feature employed by W2V and FTX. If space is restored, there is only a small loss of performance compared to A/B-ORIGINAL. So text preprocessing heuristics other than whitespace tokenization in a narrow definition of the term (e.g., downcasing) do not seem to play a big role, at least not for our entity typing task.",
"For tokenization-free embedding learning on random segmentation, there is almost no difference between original data (A/B-RANDOM) and permuted data (C/D-RANDOM). This confirms that our proposed learning method is insensitive to permutations and makes no use of text preprocessing heuristics.",
"We achieve an additional improvement by applying the transduction INLINEFORM0 . In fact, FTX performance for F-RANDOM ( INLINEFORM1 of .582) is better than tokenization-based W2V and FTX performance. Thus, our proposed method seems to be an effective tokenization-free alternative to tokenization-based embedding learning."
],
[
"Table TABREF8 (right) shows nearest neighbors of ten character ngrams, for the A-RANDOM space. Queries were chosen to contain only alphanumeric characters. To highlight the difference to symbol-based representation models, we restricted the search to 9-grams that contained a delimiter at positions 3, 4, 5, 6 or 7.",
"Lines 1–4 show that “delimiter variation”, i.e., cases where a word has two forms, one with a delimiter, one without a delimiter, is handled well: “Abdulaziz” / “Abdul Azi”, “codenamed” / “code name”, “Quarterfinal” / “Quarter-Final”, “worldrecord” / “world-record”.",
"Lines 5–9 are cases of ambiguous or polysemous words that are disambiguated through “character context”. “stem”, “cell”, “rear”, “wheel”, “crash”, “land”, “scripts”, “through”, “downtown” all have several meanings. In contrast, the meanings of “stem cell”, “rear wheel”, “crash land”, “(write) scripts for” and “through downtown” are less ambiguous. A multiword recognizer may find the phrases “stem cell” and “crash land” automatically. But the examples of “scripts for” and “through downtown” show that what is accomplished here is not multiword detection, but a more general use of character context for disambiguation.",
"Line 10 shows that a 9-gram of “face-to-face” is the closest neighbor to a 9-gram of “facilitating”. This demonstrates that form and meaning sometimes interact in surprising ways. Facilitating a meeting is most commonly done face-to-face. It is not inconceivable that form – the shared trigram “fac” or the shared fourgram “faci” in “facilitate” / “facing” – is influencing meaning here in a way that also occurs historically in cases like “ear” `organ of hearing' / “ear” `head of cereal plant', originally unrelated words that many English speakers today intuit as one word."
],
[
"Single vs. multiple segmentation. The motivation for multiple segmentation is exhaustive coverage of the space of possible segmentations. An alternative approach would be to attempt to find a single optimal segmentation.",
"Our intuition is that in many cases overlapping segments contain complementary information. Table TABREF17 gives an example. Historic exchange rates are different from floating exchange rates and this is captured by the low similarity of the ngrams ic@exchang and ing@exchan. Also, the meaning of “historic” and “floating” is noncompositional: these two words take on a specialized meaning in the context of exchange rates. The same is true for “rates”: its meaning is not its general meaning in the compound “exchange rates”. Thus, we need a representation that contains overlapping segments, so that “historic” / “floating” and “exchange” can disambiguate each other in the first part of the compound and “exchange” and “rates” can disambiguate each other in the second part of the compound. A single segmentation cannot capture these overlapping ngrams.",
"What text-type are tokenization-free approaches most promising for? The reviewers thought that language and text-type were badly chosen for this paper. Indeed, a morphologically complex language like Turkish and a noisy text-type like Twitter would seem to be better choices for a paper on robust text representation.",
"However, robust word representation methods like FTX are effective for within-token generalization, in particular, effective for both complex morphology and OOVs. If linguistic variability and noise only occur on the token level, then a tokenization-free approach has fewer advantages.",
"On the other hand, the foregoing discussion of cross-token regularities and disambiguation applies to well-edited English text as much as it does to other languages and other text-types as the example of “exchange” shows (which is disambiguated by prior context and provides disambiguating context to following words) and as is also exemplified by lines 5–9 in Table TABREF8 (right).",
"Still, this paper does not directly evaluate the different contributions that within-token character ngram embeddings vs. cross-token character ngram embeddings make, so this is an open question. One difficulty is that few corpora are available that allow the separate evaluation of whitespace tokenization errors; e.g., OCR corpora generally do not distinguish a separate class of whitespace tokenization errors.",
"Position embeddings vs. phrase/sentence embeddings. Position embeddings may seem to stand in opposition to phrase/sentence embeddings. For many tasks, we need a fixed length representation of a longer sequence; e.g., sentiment analysis models compute a fixed-length representation to classify a sentence as positive / negative.",
"To see that position embeddings are compatible with fixed-length embeddings, observe first that, in principle, there is no difference between word embeddings and position embeddings in this respect. Take a sequence that consists of, say, 6 words and 29 characters. The initial representation of the sentence has length 6 for word embeddings and length 29 for position embeddings. In both cases, we need a model that reduces the variable length sequence into a fixed length vector at some intermediate stage and then classifies this vector as positive or negative. For example, both word and position embeddings can be used as the input to an LSTM whose final hidden unit activations are a fixed length vector of this type.",
"So assessing position embeddings is not a question of variable-length vs. fixed-length representations. Word embeddings give rise to variable-length representations too. The question is solely whether the position-embedding representation is a more effective representation.",
"A more specific form of this argument concerns architectures that compute fixed-length representations of subsequences on intermediate levels, e.g., CNNs. The difference between position-embedding-based CNNs and word-embedding-based CNNs is that the former have access to a vastly increased range of subsequences, including substrings of words (making it easier to learn that “exchange” and “exchanges” are related) and cross-token character strings (making it easier to learn that “exchange rate” is noncompositional). Here, the questions are: (i) how useful are subsequences made available by position embeddings and (ii) is the increased level of noise and decreased efficiency caused by many useless subsequences worth the information gained by adding useful subsequences.",
"Independence of training and utilization. We note that our proposed training and utilization methods are completely independent. Position embeddings can be computed from any set of character-ngram-embeddings (including FTX) and our character ngram learning algorithm could be used for applications other than position embeddings, e.g., for computing word embeddings.",
"Context-free vs. context-sensitive embeddings. Word embeddings are context-free: a given word INLINEFORM0 like “king” is represented by the same embedding independent of the context in which INLINEFORM1 occurs. Position embeddings are context-free as well: if the maximum size of a character ngram is INLINEFORM2 , then the position embedding of the center of a string INLINEFORM3 of length INLINEFORM4 is the same independent of the context in which INLINEFORM5 occurs.",
"It is conceivable that text representations could be context-sensitive. For example, the hidden states of a character language model have been used as a kind of nonsymbolic text representation BIBREF16 , BIBREF17 , BIBREF18 and these states are context-sensitive. However, such models will in general be a second level of representation; e.g., the hidden states of a character language model generally use character embeddings as the first level of representation. Conversely, position embeddings can also be the basis for a context-sensitive second-level text representation. We have to start somewhere when we represent text. Position embeddings are motivated by the desire to provide a representation that can be computed easily and quickly (i.e., without taking context into account), but that on the other hand is much richer than the symbolic alphabet.",
"Processing text vs. speech vs. images. gillick16 write: “It is worth noting that noise is often added ... to images ... and speech where the added noise does not fundamentally alter the input, but rather blurs it. [bytes allow us to achieve] something like blurring with text.” It is not clear to what extent blurring on the byte level is useful; e.g., if we blur the bytes of the word “university” individually, then it is unlikely that the noise generated is helpful in, say, providing good training examples in parts of the space that would otherwise be unexplored. In contrast, the text representation we have introduced in this paper can be blurred in a way that is analogous to images and speech. Each embedding of a position is a vector that can be smoothly changed in every direction. We have showed that the similarity in this space gives rise to natural variation.",
"Prospects for completely tokenization-free processing. We have focused on whitespace tokenization and proposed a whitespace-tokenization-free method that computes embeddings of higher quality than tokenization-based methods. However, there are many properties of edited text beyond whitespace tokenization that a complex rule-based tokenizer exploits. In a small explorative experiment, we replaced all non-alphanumeric characters with whitespace and repeated experiment A-ORIGINAL for this setting. This results in an INLINEFORM0 of .593, better by .01 than the best tokenization-free method. This illustrates that there is still a lot of work to be done before we can obviate the need for tokenization."
],
[
"In the following, we will present an overview of work on character-based models for a variety of tasks from different NLP areas.",
"The history of character-based research in NLP is long and spans a broad array of tasks. Here we make an attempt to categorize the literature of character-level work into three classes based on the way they incorporate character-level information into their computational models. The three classes we identified are: tokenization-based models, bag-of-n-gram models and end-to-end models. However, there are also mixtures possible, such as tokenization-based bag-of-n-gram models or bag-of-n-gram models trained end-to-end.",
"On top of the categorization based on the underlying representation model, we sub-categorize the work within each group into six abstract types of NLP tasks (if possible) to be able to compare them more directly. These task types are the following:"
],
[
"We group character-level models that are based on tokenization as a necessary preprocessing step in the category of tokenization-based approaches. Those can be either models with tokenized text as input or models that operate only on individual tokens (such as studies on morphological inflection of words).",
"In the following paragraphs, we cover a subset of tokenization-based models that are used for representation learning, sequence-to-sequence generation, sequence labeling, language modeling, and sequence classification tasks.",
"Representation learning for character sequences. Creating word representations based on characters has attracted much attention recently. Such representations can model rare words, complex words, out-of-vocabulary words and noisy texts. In comparison to traditional word representation models that learn separate vectors for word types, character-level models are more compact as they only need vector representations for characters as well as a compositional model.",
"Various neural network architectures have been proposed for learning token representations based on characters. Examples of such architectures are averaging character embeddings, (bidirectional) recurrent neural networks (RNNs) (with or without gates) over character embeddings and convolutional neural networks (CNNs) over character embeddings. Studies on the general task of learning word representations from characters include BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 . These character-based word representations are often combined with word embeddings and integrated into a hierarchical system, such as hierarchical RNNs or CNNs or combinations of both to solve other task types. We will provide more concrete examples in the following paragraphs.",
"Sequence-to-sequence generation (machine translation). Character-based machine translation is no new topic. Using character-based methods has been a natural way to overcome challenges like rare words or out-of-vocabulary words in machine translation. Traditional machine translation models based on characters or character n-grams have been investigated by BIBREF34 , BIBREF35 , BIBREF36 . Neural machine translation with character-level and subword units has become popular recently BIBREF37 , BIBREF38 , BIBREF39 , BIBREF33 . In such neural models, using a joint attention/translation model makes joint learning of alignment and translation possible BIBREF31 .",
"Both hierarchical RNNs BIBREF31 , BIBREF38 and combinations of CNNs and RNNs have been proposed for neural machine translation BIBREF37 , BIBREF33 .",
"Sequence labeling. Examples of early efforts on sequence labeling using tokenization-based models include: bilingual character-level alignment extraction BIBREF40 ; unsupervised multilingual part-of-speech induction based on characters BIBREF41 ; part-of-speech tagging with subword/character-level information BIBREF42 , BIBREF43 , BIBREF44 ; morphological segmentation and tagging BIBREF45 , BIBREF46 ; and identification of language inclusion with character-based features BIBREF47 .",
"Recently, various hierarchical character-level neural networks have been applied to a variety of sequence labeling tasks.",
"Recurrent neural networks are used for part-of-speech tagging BIBREF48 , BIBREF49 , BIBREF50 , named entity recognition BIBREF51 , BIBREF50 , chunking BIBREF50 and morphological segmentation/inflection generation BIBREF52 , BIBREF53 , BIBREF54 , BIBREF55 , BIBREF56 , BIBREF57 , BIBREF58 , BIBREF59 . Such hierarchical RNNs are also used for dependency parsing BIBREF60 . This work has shown that morphologically rich languages benefit from character-level models in dependency parsing.",
"Convolutional neural networks are used for part-of-speech tagging BIBREF61 and named entity recognition BIBREF62 .",
"The combination of RNNs and CNNs is used, for instance, for named entity recognition.",
"Language modeling. Earlier work on sub-word language modeling has used morpheme-level features for language models BIBREF63 , BIBREF64 , BIBREF65 , BIBREF66 , BIBREF67 . In addition, hybrid word/n-gram language models for out-of-vocabulary words have been applied to speech recognition BIBREF68 , BIBREF69 , BIBREF70 , BIBREF71 . Furthermore, characters and character n-grams have been used as input to restricted boltzmann machine-based language models for machine translation BIBREF72 .",
"More recently, character-level neural language modeling has been proposed by a large body of work BIBREF73 , BIBREF74 , BIBREF75 , BIBREF48 , BIBREF76 , BIBREF66 , BIBREF72 . Although most of this work is using RNNs, there exist architectures that combine CNNs and RNNs BIBREF75 . While most of these studies combine the output of the character model with word embeddings, the authors of BIBREF75 report that this does not help them for their character-aware neural language model. They use convolution over character embeddings followed by a highway network BIBREF77 and feed its output into a long short-term memory network that predicts the next word using a softmax function.",
"Sequence classification. Examples of tokenization-based models that perform sequence classification are CNNs used for sentiment classification BIBREF78 and combinations of RNNs and CNNs used for language identification BIBREF79 ."
],
[
"Character n-grams have a long history as features for specific NLP applications, such as information retrieval. However, there is also work on representing words or larger input units, such as phrases, with character n-gram embeddings. Those embeddings can be within-token or cross-token, i.e., there is no tokenization necessary.",
"Although such models learn/use character n-gram embeddings from tokenized text or short text segments, to represent a piece of text, the occurring character n-grams are usually summed without the need for tokenization. For example, the phrase “Berlin is located in Germany” is represented with character 4-grams as follows: “Berl erli rlin lin_ in_i n_is _is_ is_l s_lo _loc loca ocat cate ated ted_ ed_i d_in _in_ in_G n_Ge _Ger Germ erma rman many any.” Note that the input has not been tokenized and there are n-grams spanning token boundaries. We also include non-embedding approaches using bag-of-n-grams within this group as they go beyond word and token representations.",
"In the following, we explore a subset of bag-of-ngram models that are used for representation learning, information retrieval, and sequence classification tasks.",
"Representation learning for character sequences. An early study in this category of character-based models is BIBREF3 . Its goal is to create corpus-based fixed-length distributed semantic representations for text. To train k-gram embeddings, the top character k-grams are extracted from a corpus along with their cooccurrence counts. Then, singular value decomposition (SVD) is used to create low dimensional k-gram embeddings given their cooccurrence matrix. To apply them to a piece of text, the k-grams of the text are extracted and their corresponding embeddings are summed. The study evaluates the k-gram embeddings in the context of word sense disambiguation.",
"A more recent study BIBREF4 trains character n-gram embeddings in an end-to-end fashion with a neural network. They are evaluated on word similarity, sentence similarity and part-of-speech tagging.",
"Training character n-gram embeddings has also been proposed for biological sequences BIBREF12 , BIBREF13 for a variety of bioinformatics tasks.",
"Information retrieval. As mentioned before, character n-gram features are widely used in the area of information retrieval BIBREF80 , BIBREF81 , BIBREF82 , BIBREF83 , BIBREF84 , BIBREF85 .",
"Sequence classification. Bag-of-n-gram models are used for language identification BIBREF86 , BIBREF87 , topic labeling BIBREF88 , authorship attribution BIBREF89 , word/text similarity BIBREF2 , BIBREF90 , BIBREF4 and word sense disambiguation BIBREF3 ."
],
[
"Similar to bag-of-n-gram models, end-to-end models are tokenization-free. Their input is a sequence of characters or bytes and they are directly optimized on a (task-specific) objective. Thus, they learn their own, task-specific representation of the input sequences. Recently, character-based end-to-end models have gained a lot of popularity due to the success of neural networks.",
"We explore the subset of these models that are used for sequence generation, sequence labeling, language modeling and sequence classification tasks.",
"Sequence-to-sequence generation. In 2011, the authors of BIBREF91 already proposed an end-to-end model for generating text. They train RNNs with multiplicative connections on the task of character-level language modeling. Afterwards, they use the model to generate text and find that the model captures linguistic structure and a large vocabulary. It produces only a few uncapitalized non-words and is able to balance parantheses and quotes even over long distances (e.g., 30 characters). A similar study by BIBREF92 uses a long short-term memory network to create character sequences.",
"Recently, character-based neural network sequence-to-sequence models have been applied to instances of generation tasks like machine translation BIBREF93 , BIBREF94 , BIBREF95 , BIBREF96 , BIBREF97 (which was previously proposed on the token-level BIBREF98 ), question answering BIBREF99 and speech recognition BIBREF100 , BIBREF101 , BIBREF102 , BIBREF103 .",
"Sequence labeling. Character and character n-gram-based features were already proposed in 2003 for named entity recognition in an end-to-end manner using a hidden markov model BIBREF104 . More recently, the authors of BIBREF105 have proposed an end-to-end neural network based model for named entity recognition and part-of-speech tagging. An end-to-end model is also suggested for unsupervised, language-independent identification of phrases or words BIBREF106 .",
"A prominent recent example of neural end-to-end sequence labeling is the paper by BIBREF107 about multilingual language processing from bytes. A window is slid over the input sequence, which is represented by its byte string. Thus, the segments in the window can begin and end mid-word or even mid-character. The authors apply the same model for different languages and evaluate it on part-of-speech tagging and named entity recognition.",
"Language modeling. The authors of BIBREF108 propose a hierarchical multiscale recurrent neural network for language modeling. The model uses different timescales to encode temporal dependencies and is able to discover hierarchical structures in a character sequence without explicit tokenization. Other studies on end-to-end language models include BIBREF94 , BIBREF109 .",
"Sequence classification. Another recent end-to-end model uses character-level inputs for document classification BIBREF110 , BIBREF111 , BIBREF112 . To capture long-term dependencies of the input, the authors combine convolutional layers with recurrent layers. The model is evaluated on sentiment analysis, ontology classification, question type classification and news categorization.",
"End-to-end models are also used for entity typing based on the character sequence of the entity's name BIBREF113 ."
],
[
"We introduced the first generic text representation model that is completely nonsymbolic, i.e., it does not require the availability of a segmentation or tokenization method that identifies words or other symbolic units in text. This is true for the training of the model as well as for applying it when computing the representation of a new text. In contrast to prior work that has assumed that the sequence-of-character information captured by character ngrams is sufficient, position embeddings also capture sequence-of-ngram information. We showed that our model performs better than prior work on entity typing and text denoising.",
"Future work.",
"The most important challenge that we need to address is how to use nonsymbolic text representation for tasks that are word-based like part-of-speech tagging. This may seem like a contradiction at first, but gillick16 have shown how character-based methods can be used for “symbolic” tasks. We are currently working on creating an analogous evaluation for our nonsymbolic text representation."
],
[
"This work was supported by DFG (SCHUE 2246/10-1) and Volkswagenstiftung. We are grateful for their comments to: the anonymous reviewers, Ehsan Asgari, Annemarie Friedrich, Helmut Schmid, Martin Schmitt and Yadollah Yaghoobzadeh."
],
[
"Nonsymbolic representation learning does not preprocess the training corpus by means of tokenization and considers many ngrams that would be ignored in tokenized approaches because they span token boundaries. As a result, the number of ngrams that occur in a corpus is an order of magnitude larger for tokenization-free approaches than for tokenization-based approaches. See Figure FIGREF33 ."
],
[
"W2V hyperparameter settings. size of word vectors: 200, max skip length between words: 5, threshold for occurrence of words: 0, hierarchical softmax: 0, number of negative examples: 5, threads: 50, training iterations: 1, min-count: 5, starting learning rate: .025, classes: 0",
"FTX hyperparameter settings. learning rate: .05, lrUpdateRate: 100, size of word vectors: 200, size of context window: 5, number of epochs: 1, minimal number of word occurrences: 5, number of negatives sampled: 5, max length of word ngram: 1, loss function: ns, number of buckets: 2,000,000, min length of char ngram: 3, max length of char ngram: 6, number of threads: 50, sampling threshold: .0001",
"We ran some experiments with more epochs, but this did not improve the results."
],
[
"We did not tune INLINEFORM0 , but results are highly sensitive to the value of this parameter. If INLINEFORM1 is too small, then beneficial conflations (collapse punctuation marks, replace all digits with one symbol) are not found. If INLINEFORM2 is too large, then precision suffers – in the extreme case all characters are collapsed into one.",
"We also did not tune INLINEFORM0 , but we do not consider results to be very sensitive to the value of INLINEFORM1 if it is reasonably large. Of course, if a larger range of character ngram lengths is chosen, i.e., a larger interval INLINEFORM2 , then at some point INLINEFORM3 will not be sufficient and possible segmentations would not be covered well enough in sampling.",
"The type of segmentation used in multiple segmentation can also be viewed as a hyperparameter. An alternative to random segmentation would be exhaustive segementation, but a naive implementation of that strategy would increase the size of the training corpus by several orders of magnitude. Another alternative is to choose one fixed size, e.g., 4 or 5 (similar to BIBREF3 ). Many of the nice disambiguation effects we see in Table TABREF8 (right) and in Table TABREF17 would not be possible with short ngrams. On the other hand, a fixed ngram size that is larger, e.g., 10, would make it difficult to get 100% coverage: there would be positions for which no position embedding can be computed."
]
],
"section_name": [
"Introduction",
"Motivation",
"Methodology",
"Ngram equivalence classes/Permutation",
"Experiments",
"Results",
"Analysis of ngram embeddings",
"Discussion",
"Related workThis section was written in September 2016 and revised in April 2017. To suggest corrections and additional references, please send mail to [email protected]",
"Tokenization-based Approaches",
"Bag-of-n-gram Models",
"End-to-end Models",
"Conclusion",
"Acknowledgments",
"Sparseness in tokenization-free approaches",
"Experimental settings",
"Other hyperparameters"
]
} | {
"answers": [
{
"annotation_id": [
"22f0550ef7af638f50ec0ac75d1557c9b257c1d4",
"3ad20616b18703df613acf108a20edf17221fa72"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"We define a nonsymbolic approach as one that is tokenization-free, i.e., no assumption is made that there are segmentation boundaries and that each segment (e.g., a word) should be represented (e.g., by a word embedding) in a way that is independent of the representations (e.g., word embeddings) of neighboring segments. Methods for training text representation models that require tokenized text include word embedding models like word2vec BIBREF1 and most group (ii) methods, i.e., character-level models like fastText skipgram BIBREF2 ."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We define a nonsymbolic approach as one that is tokenization-free, i.e., no assumption is made that there are segmentation boundaries and that each segment (e.g., a word) should be represented (e.g., by a word embedding) in a way that is independent of the representations (e.g., word embeddings) of neighboring segments."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"78ad0441054090c239a5fa575d09508c2e025f28",
"be9afd03e405d22c53243a97db0c35bd65a362fe"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Left: Evaluation results for named entity typing. Right: Neighbors of character ngrams. Rank r = 1/r = 2: nearest / second-nearest neighbor."
],
"extractive_spans": [],
"free_form_answer": "Their F1 score outperforms an existing model by 0.017 on average for the random segmentation experiment.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Left: Evaluation results for named entity typing. Right: Neighbors of character ngrams. Rank r = 1/r = 2: nearest / second-nearest neighbor."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We again use the embeddings corresponding to A-RANDOM in Table TABREF8 . We randomly selected 2,000,000 contexts of size 40 characters from Wikipedia. We then created a noise context for each of the 2,000,000 contexts by replacing one character at position i ( INLINEFORM0 , uniformly sampled) with space (probability INLINEFORM1 ) or a random character otherwise. Finally, we selected 1000 noise contexts randomly and computed their nearest neighbors among the 4,000,000 contexts (excluding the noise query). We did this in two different conditions: for a bag-of-ngram representation of the context (sum of all character ngrams) and for the concatenation of 11 position embeddings, those between 15 and 25. Our evaluation measure is mean reciprocal rank of the clean context corresponding to the noise context. This simulates a text denoising experiment: if the clean context has rank 1, then the noisy context can be corrected.",
"Table TABREF15 shows that sequence-preserving position embeddings perform better than bag-of-ngram representations."
],
"extractive_spans": [],
"free_form_answer": "Answer with content missing: (Table 4) Mean reciprocal rank of proposed model is 0.76 compared to 0.64 of bag-of-ngrams.",
"highlighted_evidence": [
"This simulates a text denoising experiment: if the clean context has rank 1, then the noisy context can be corrected.\n\nTable TABREF15 shows that sequence-preserving position embeddings perform better than bag-of-ngram representations."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"203238a4bfc0097e645baa3233f38ead390d011f",
"9609e04cd1521995cc94067ad7989019e8a728a1"
],
"answer": [
{
"evidence": [
"We define a nonsymbolic approach as one that is tokenization-free, i.e., no assumption is made that there are segmentation boundaries and that each segment (e.g., a word) should be represented (e.g., by a word embedding) in a way that is independent of the representations (e.g., word embeddings) of neighboring segments. Methods for training text representation models that require tokenized text include word embedding models like word2vec BIBREF1 and most group (ii) methods, i.e., character-level models like fastText skipgram BIBREF2 ."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We define a nonsymbolic approach as one that is tokenization-free, i.e., no assumption is made that there are segmentation boundaries and that each segment (e.g., a word) should be represented (e.g., by a word embedding) in a way that is independent of the representations (e.g., word embeddings) of neighboring segments. "
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"dea4e8da3e96c5f7ea8e72928bcaad6ff51e0aef",
"febd9e44e0071bac7689ac65c5f0d7c218a1c66e"
],
"answer": [
{
"evidence": [
"We run experiments on INLINEFORM0 , a 3 gigabyte English Wikipedia corpus, and train word2vec skipgram (W2V, BIBREF1 ) and fastText skipgram (FTX, BIBREF2 ) models on INLINEFORM1 and its derivatives. We randomly generate a permutation INLINEFORM2 on the alphabet and learn a transduction INLINEFORM3 (details below). In Table TABREF8 (left), the columns “method”, INLINEFORM4 and INLINEFORM5 indicate the method used (W2V or FTX) and whether experiments in a row were run on INLINEFORM6 , INLINEFORM7 or INLINEFORM8 . The values of “whitespace” are: (i) ORIGINAL (whitespace as in the original), (ii) SUBSTITUTE (what INLINEFORM9 outputs as whitespace is used as whitespace, i.e., INLINEFORM10 becomes the new whitespace) and (iii) RANDOM (random segmentation with parameters INLINEFORM11 , INLINEFORM12 , INLINEFORM13 ). Before random segmentation, whitespace is replaced with “@” – this character occurs rarely in INLINEFORM14 , so that the effect of conflating two characters (original “@” and whitespace) can be neglected. The random segmenter then indicates boundaries by whitespace – unambiguously since it is applied to text that contains no whitespace."
],
"extractive_spans": [
"3 gigabyte English Wikipedia corpus"
],
"free_form_answer": "",
"highlighted_evidence": [
"We run experiments on INLINEFORM0 , a 3 gigabyte English Wikipedia corpus, and train word2vec skipgram (W2V, BIBREF1 ) and fastText skipgram (FTX, BIBREF2 ) models on INLINEFORM1 and its derivatives."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Evaluation. We evaluate the three models on an entity typing task, similar to BIBREF14 , but based on an entity dataset released by xie16entitydesc2 in which each entity has been assigned one or more types from a set of 50 types. For example, the entity “Harrison Ford” has the types “actor”, “celebrity” and “award winner” among others. We extract mentions from FACC (http://lemurproject.org/clueweb12/FACC1) if an entity has a mention there or we use the Freebase name as the mention otherwise. This gives us a data set of 54,334, 6085 and 6747 mentions in train, dev and test, respectively. Each mention is annotated with the types that its entity has been assigned by xie16entitydesc2. The evaluation has a strong cross-domain aspect because of differences between FACC and Wikipedia, the training corpus for our representations. For example, of the 525 mentions in dev that have a length of at least 5 and do not contain lowercase characters, more than half have 0 or 1 occurrences in the Wikipedia corpus, including many like “JOHNNY CARSON” that are frequent in other case variants."
],
"extractive_spans": [
"entity dataset released by xie16entitydesc2"
],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluate the three models on an entity typing task, similar to BIBREF14 , but based on an entity dataset released by xie16entitydesc2 in which each entity has been assigned one or more types from a set of 50 types. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"Do they have an elementary unit of text?",
"By how much do they outpeform existing text denoising models?",
"In their nonsymbolic representation can they represent two same string differently depending on the context?",
"On which datasets do they evaluate their models?"
],
"question_id": [
"8c872236e4475d5d0969fb90d2df94589c7ab1c4",
"f6ba0a5cfd5b35219efe5e52b0a5b86ae85c5abd",
"b21f61c0f95fefdb1bdb90d51cbba4655cd59896",
"0dbb5309d2be97f6eda29d7ae220aa16cafbabb7"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1: String operations that on average do not change meaning. “@” stands for space. ‡ is the left or right boundary of the ngram.",
"Table 2: Left: Evaluation results for named entity typing. Right: Neighbors of character ngrams. Rank r = 1/r = 2: nearest / second-nearest neighbor.",
"Table 3: Nearest ngram embeddings (rank r ∈ [1, 5]) of the position embeddings for “POS”, the positions 2/3 (best), 15/16 (monthly), 23/24 (comic), 29/30 (book) and 34/35 (publications) in the Wikipedia excerpt “best-selling monthly comic book publications sold in North America”",
"Table 5: Illustration of the result in Table 4. “rep. space” = “representation space”. We want to correct the error in the corrupted “noise” context (line 2) and produce “correct” (line 1). The nearest neighbor to line 2 in position-embedding space is the correct context (line 3, r = 1). The nearest neighbor to line 2 in bag-of-ngram space is incorrect (line 4, r = 1) because the precise position of “Seahawks” in the query is not encoded. The correct context in bag-of-ngram space is instead at rank r = 6 (line 5). “similarity” is average cosine (over eleven position embeddings) for position embeddings.",
"Table 6: Cosine similarity of ngrams that cross word boundaries and disambiguate polysemous words. The tables show three disambiguating ngrams for “exchange” and “rates” that have different meanings as indicated by low cosine similarity. In phrases like “floating exchange rates” and “historic exchange rates”, disambiguating ngrams overlap. Parts of the word “exchange” are disambiguated by preceding context (ic@exchang, ing@exchan) and parts of “exchange” provide context for disambiguating “rates” (xchange@ra).",
"Figure 1: The graph shows how many different character ngrams (kmin = 3, kmax = 10) occur in the first n bytes of the English Wikipedia for symbolic (tokenization-based) vs. nonsymbolic (tokenization-free) processing. The number of ngrams is an order of magnitude larger in the nonsymbolic approach. We counted all segments, corresponding to m = ∞. For the experiments in the paper (m = 50), the number of nonsymbolic character ngrams is smaller."
],
"file": [
"5-Table1-1.png",
"6-Table2-1.png",
"8-Table3-1.png",
"9-Table5-1.png",
"9-Table6-1.png",
"14-Figure1-1.png"
]
} | [
"By how much do they outpeform existing text denoising models?"
] | [
[
"1610.00479-6-Table2-1.png",
"1610.00479-Experiments-5",
"1610.00479-Experiments-6"
]
] | [
"Answer with content missing: (Table 4) Mean reciprocal rank of proposed model is 0.76 compared to 0.64 of bag-of-ngrams."
] | 384 |
1905.01347 | Auditing ImageNet: Towards a Model-driven Framework for Annotating Demographic Attributes of Large-Scale Image Datasets | The ImageNet dataset ushered in a flood of academic and industry interest in deep learning for computer vision applications. Despite its significant impact, there has not been a comprehensive investigation into the demographic attributes of images contained within the dataset. Such a study could lead to new insights on inherent biases within ImageNet, particularly important given it is frequently used to pretrain models for a wide variety of computer vision tasks. In this work, we introduce a model-driven framework for the automatic annotation of apparent age and gender attributes in large-scale image datasets. Using this framework, we conduct the first demographic audit of the 2012 ImageNet Large Scale Visual Recognition Challenge (ILSVRC) subset of ImageNet and the "person" hierarchical category of ImageNet. We find that 41.62% of faces in ILSVRC appear as female, 1.71% appear as individuals above the age of 60, and males aged 15 to 29 account for the largest subgroup with 27.11%. We note that the presented model-driven framework is not fair for all intersectional groups, so annotations are subject to bias. We present this work as the starting point for future development of unbiased annotation models and for the study of downstream effects of imbalances in the demographics of ImageNet. Code and annotations are available at: http://bit.ly/ImageNetDemoAudit | {
"paragraphs": [
[
"ImageNet BIBREF0 , released in 2009, is a canonical dataset in computer vision. ImageNet follows the WordNet lexical database of English BIBREF1 , which groups words into synsets, each expressing a distinct concept. ImageNet contains 14,197,122 images in 21,841 synsets, collected through a comprehensive web-based search and annotated with Amazon Mechanical Turk (AMT) BIBREF0 . The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) BIBREF2 , held annually from 2010 to 2017, was the catalyst for an explosion of academic and industry interest in deep learning. A subset of 1,000 synsets were used in the ILSVRC classification task. Seminal work by Krizhevsky et al. BIBREF3 in the 2012 event cemented the deep convolutional neural network (CNN) as the preeminent model in computer vision.",
"Today, work in computer vision largely follows a standard process: a pretrained CNN is downloaded with weights initialized to those trained on the 2012 ILSVRC subset of ImageNet, the network is adjusted to fit the desired task, and transfer learning is performed, where the CNN uses the pretrained weights as a starting point for training new data on the new task. The use of pretrained CNNs is instrumental in applications as varied as instance segmentation BIBREF4 and chest radiograph diagnosis BIBREF5 .",
"By convention, computer vision practitioners have effectively abstracted away the details of ImageNet. While this has proved successful in practical applications, there is merit in taking a step back and scrutinizing common practices. In the ten years following the release of ImageNet, there has not been a comprehensive study into the composition of images in the classes it contains.",
"This lack of scrutiny into ImageNet's contents is concerning. Without a conscious effort to incorporate diversity in data collection, undesirable biases can collect and propagate. These biases can manifest in the form of patterns learned from data that are influential in the decision of a model, but are not aligned with values of society BIBREF6 . Age, gender and racial biases have been exposed in word embeddings BIBREF7 , image captioning models BIBREF8 , and commercial computer vision gender classifiers BIBREF9 . In the case of ImageNet, there is some evidence that CNNs pretrained on its data may also encode undesirable biases. Using adversarial examples as a form of model criticism, Stock and Cisse BIBREF6 discovered that prototypical examples of the synset `basketball' contain images of black persons, despite a relative balance of race in the class. They hypothesized that an under-representation of black persons in other classes may lead to a biased representation of `basketball'.",
"This paper is the first in a series of works to build a framework for the audit of the demographic attributes of ImageNet and other large image datasets. The main contributions of this work include the introduction of a model-driven demographic annotation pipeline for apparent age and gender, analysis of said annotation models and the presentation of annotations for each image in the training set of the ILSVRC 2012 subset of ImageNet (1.28M images) and the `person' hierarchical synset of ImageNet (1.18M images)."
],
[
"Before proceeding with annotation, there is merit in contextualizing this study with a look at the methodology proposed by Deng et al. in the construction of ImageNet. A close reading of their data collection and quality assurance processes demonstrates that the conscious inclusion of demographic diversity in ImageNet was lacking BIBREF0 .",
"First, candidate images for each synset were sourced from commercial image search engines, including Google, Yahoo!, Microsoft's Live Search, Picsearch and Flickr BIBREF10 . Gender BIBREF11 and racial BIBREF12 biases have been demonstrated to exist in image search results (i.e. images of occupations), demonstrating that a more curated approach at the top of the funnel may be necessary to mitigate inherent biases of search engines. Second, English search queries were translated into Chinese, Spanish, Dutch and Italian using WordNet databases and used for image retrieval. While this is a step in the right direction, Chinese was the only non-Western European language used, and there exists, for example, Universal Multilingual WordNet which includes over 200 languages for translation BIBREF13 . Third, the authors quantify image diversity by computing the average image of each synset and measuring the lossless JPG file size. They state that a diverse synset will result in a blurrier average image and smaller file, representative of diversity in appearance, position, viewpoint and background. This method, however, cannot quantify diversity with respect to demographic characteristics such as age, gender, and skin type."
],
[
"In order to provide demographic annotations at scale, there exist two feasible methods: crowdsourcing and model-driven annotations. In the case of large-scale image datasets, crowdsourcing quickly becomes prohibitively expensive; ImageNet, for example, employed 49k AMT workers during its collection BIBREF14 . Model-driven annotations use supervised learning methods to create models that can predict annotations, but this approach comes with its own meta-problem; as the goal of this work is to identify demographic representation in data, we must analyze the annotation models for their performance on intersectional groups to determine if they themselves exhibit bias."
],
[
"The FaceBoxes network BIBREF15 is employed for face detection, consisting of a lightweight CNN that incorporates novel Rapidly Digested and Multiple Scale Convolutional Layers for speed and accuracy, respectively. This model was trained on the WIDER FACE dataset BIBREF16 and achieves average precision of 95.50% on the Face Detection Data Set and Benchmark (FDDB) BIBREF17 . On a subset of 1,000 images from FDDB hand-annotated by the author for apparent age and gender, the model achieves a relative fair performance across intersectional groups, as show in Table TABREF1 ."
],
[
"The task of apparent age annotation arises as ground-truth ages of individuals in images are not possible to obtain in the domain of web-scraped datasets. In this work, we follow Merler et al. BIBREF18 and employ the Deep EXpectation (DEX) model of apparent age BIBREF19 , which is pre-trained on the IMDB-WIKI dataset of 500k faces with real ages and fine-tuned on the APPA-REAL training and validation sets of 3.6k faces with apparent ages, crowdsourced from an average of 38 votes per image BIBREF20 . As show in Table TABREF2 , the model achieves a mean average error of 5.22 years on the APPA-REAL test set, but exhibits worse performance on younger and older age groups."
],
[
"We recognize that a binary representation of gender does not adequately capture the complexities of gender or represent transgender identities. In this work, we express gender as a continuous value between 0 and 1. When thresholding at 0.5, we use the sex labels of `male' and `female' to define gender classes, as training datasets and evaluation benchmarks use this binary label system. We again follow Merler et al. BIBREF18 and employ a DEX model to annotate the gender of an individual. When tested on APPA-REAL, with enhanced annotations provided by BIBREF21 , the model achieves an accuracy of 91.00%, however its errors are not evenly distributed, as shown in Table TABREF3 . The model errs more on younger and older age groups and on those with a female gender label.",
"Given these biased results, we further evaluate the model on the Pilot Parliaments Benchmark (PPB) BIBREF9 , a face dataset developed by Buolamwini and Gebru for parity in gender and skin type. Results for intersectional groups on PPB are shown in Table TABREF4 . The model performs very poorly for darker-skinned females (Fitzpatrick skin types IV - VI), with an average accuracy of 69.00%, reflecting the disparate findings of commercial computer vision gender classifiers in Gender Shades BIBREF9 . We note that use of this model in annotating ImageNet will result in biased gender annotations, but proceed in order to establish a baseline upon which the results of a more fair gender annotation model can be compared in future work, via fine-tuning on crowdsourced gender annotations from the Diversity in Faces dataset BIBREF18 ."
],
[
"We evaluate the training set of the ILSVRC 2012 subset of ImageNet (1000 synsets) and the `person' hierarchical synset of ImageNet (2833 synsets) with the proposed methodology. Face detections that receive a confidence score of 0.9 or higher move forward to the annotation phase. Statistics for both datasets are presented in Tables TABREF7 and TABREF10 . In these preliminary annotations, we find that females comprise only 41.62% of images in ILSVRC and 31.11% in the `person' subset of ImageNet, and people over the age of 60 are almost non-existent in ILSVRC, accounting for 1.71%.",
"To get a sense of the most biased classes in terms of gender representation for each dataset, we filter synsets that contain at least 20 images in their class and received face detections for at least 15% of their images. We then calculate the percentage of males and females in each synset and rank them in descending order. Top synsets for each gender and dataset are presented in Tables TABREF8 and TABREF11 . Top ILSVRC synsets for males largely represent types of fish, sports and firearm-related items and top synsets for females largely represent types of clothing and dogs."
],
[
"Through the introduction of a preliminary pipeline for automated demographic annotations, this work hopes to provide insight into the ImageNet dataset, a tool that is commonly abstracted away by the computer vision community. In the future, we will continue this work to create fair models for automated demographic annotations, with emphasis on the gender annotation model. We aim to incorporate additional measures of diversity into the pipeline, such as Fitzpatrick skin type and other craniofacial measurements. When annotation models are evaluated as fair, we plan to continue this audit on all 14.2M images of ImageNet and other large image datasets. With accurate coverage of the demographic attributes of ImageNet, we will be able to investigate the downstream impact of under- and over-represented groups in the features learned in pretrained CNNs and how bias represented in these features may propagate in transfer learning to new applications."
]
],
"section_name": [
"Introduction",
"Diversity Considerations in ImageNet",
"Methodology",
"Face Detection",
"Apparent Age Annotation",
"Gender Annotation",
"Results",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"5b073db56320099a0734eb0679d8cb5c7d06dd08",
"b298e168287702dd8c2a319698c93cde03e6e904"
],
"answer": [
{
"evidence": [
"In order to provide demographic annotations at scale, there exist two feasible methods: crowdsourcing and model-driven annotations. In the case of large-scale image datasets, crowdsourcing quickly becomes prohibitively expensive; ImageNet, for example, employed 49k AMT workers during its collection BIBREF14 . Model-driven annotations use supervised learning methods to create models that can predict annotations, but this approach comes with its own meta-problem; as the goal of this work is to identify demographic representation in data, we must analyze the annotation models for their performance on intersectional groups to determine if they themselves exhibit bias.",
"Face Detection",
"The FaceBoxes network BIBREF15 is employed for face detection, consisting of a lightweight CNN that incorporates novel Rapidly Digested and Multiple Scale Convolutional Layers for speed and accuracy, respectively. This model was trained on the WIDER FACE dataset BIBREF16 and achieves average precision of 95.50% on the Face Detection Data Set and Benchmark (FDDB) BIBREF17 . On a subset of 1,000 images from FDDB hand-annotated by the author for apparent age and gender, the model achieves a relative fair performance across intersectional groups, as show in Table TABREF1 .",
"The task of apparent age annotation arises as ground-truth ages of individuals in images are not possible to obtain in the domain of web-scraped datasets. In this work, we follow Merler et al. BIBREF18 and employ the Deep EXpectation (DEX) model of apparent age BIBREF19 , which is pre-trained on the IMDB-WIKI dataset of 500k faces with real ages and fine-tuned on the APPA-REAL training and validation sets of 3.6k faces with apparent ages, crowdsourced from an average of 38 votes per image BIBREF20 . As show in Table TABREF2 , the model achieves a mean average error of 5.22 years on the APPA-REAL test set, but exhibits worse performance on younger and older age groups.",
"We recognize that a binary representation of gender does not adequately capture the complexities of gender or represent transgender identities. In this work, we express gender as a continuous value between 0 and 1. When thresholding at 0.5, we use the sex labels of `male' and `female' to define gender classes, as training datasets and evaluation benchmarks use this binary label system. We again follow Merler et al. BIBREF18 and employ a DEX model to annotate the gender of an individual. When tested on APPA-REAL, with enhanced annotations provided by BIBREF21 , the model achieves an accuracy of 91.00%, however its errors are not evenly distributed, as shown in Table TABREF3 . The model errs more on younger and older age groups and on those with a female gender label."
],
"extractive_spans": [],
"free_form_answer": "using model driven face detection, apparent age annotation and gender annotation",
"highlighted_evidence": [
"In order to provide demographic annotations at scale, there exist two feasible methods: crowdsourcing and model-driven annotations. ",
"Model-driven annotations use supervised learning methods to create models that can predict annotations, but this approach comes with its own meta-problem; as the goal of this work is to identify demographic representation in data, we must analyze the annotation models for their performance on intersectional groups to determine if they themselves exhibit bias.",
"Face Detection\nThe FaceBoxes network BIBREF15 is employed for face detection, consisting of a lightweight CNN that incorporates novel Rapidly Digested and Multiple Scale Convolutional Layers for speed and accuracy, respectively.",
"In this work, we follow Merler et al. BIBREF18 and employ the Deep EXpectation (DEX) model of apparent age BIBREF19 , which is pre-trained on the IMDB-WIKI dataset of 500k faces with real ages and fine-tuned on the APPA-REAL training and validation sets of 3.6k faces with apparent ages, crowdsourced from an average of 38 votes per image BIBREF20 . ",
"In this work, we express gender as a continuous value between 0 and 1. When thresholding at 0.5, we use the sex labels of `male' and `female' to define gender classes, as training datasets and evaluation benchmarks use this binary label system. We again follow Merler et al. BIBREF18 and employ a DEX model to annotate the gender of an individual."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"This paper is the first in a series of works to build a framework for the audit of the demographic attributes of ImageNet and other large image datasets. The main contributions of this work include the introduction of a model-driven demographic annotation pipeline for apparent age and gender, analysis of said annotation models and the presentation of annotations for each image in the training set of the ILSVRC 2012 subset of ImageNet (1.28M images) and the `person' hierarchical synset of ImageNet (1.18M images)."
],
"extractive_spans": [
" a model-driven demographic annotation pipeline for apparent age and gender, analysis of said annotation models and the presentation of annotations for each image in the training set of the ILSVRC 2012 subset of ImageNet"
],
"free_form_answer": "",
"highlighted_evidence": [
"This paper is the first in a series of works to build a framework for the audit of the demographic attributes of ImageNet and other large image datasets. The main contributions of this work include the introduction of a model-driven demographic annotation pipeline for apparent age and gender, analysis of said annotation models and the presentation of annotations for each image in the training set of the ILSVRC 2012 subset of ImageNet (1.28M images) and the `person' hierarchical synset of ImageNet (1.18M images)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"ba4d907d0cccddfbb107ee8d8b93c40929e2707b",
"c784be287b20f9ad890a5e516fb6476c52abfb46"
],
"answer": [
{
"evidence": [
"We recognize that a binary representation of gender does not adequately capture the complexities of gender or represent transgender identities. In this work, we express gender as a continuous value between 0 and 1. When thresholding at 0.5, we use the sex labels of `male' and `female' to define gender classes, as training datasets and evaluation benchmarks use this binary label system. We again follow Merler et al. BIBREF18 and employ a DEX model to annotate the gender of an individual. When tested on APPA-REAL, with enhanced annotations provided by BIBREF21 , the model achieves an accuracy of 91.00%, however its errors are not evenly distributed, as shown in Table TABREF3 . The model errs more on younger and older age groups and on those with a female gender label."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"In this work, we express gender as a continuous value between 0 and 1."
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"We recognize that a binary representation of gender does not adequately capture the complexities of gender or represent transgender identities. In this work, we express gender as a continuous value between 0 and 1. When thresholding at 0.5, we use the sex labels of `male' and `female' to define gender classes, as training datasets and evaluation benchmarks use this binary label system. We again follow Merler et al. BIBREF18 and employ a DEX model to annotate the gender of an individual. When tested on APPA-REAL, with enhanced annotations provided by BIBREF21 , the model achieves an accuracy of 91.00%, however its errors are not evenly distributed, as shown in Table TABREF3 . The model errs more on younger and older age groups and on those with a female gender label."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We recognize that a binary representation of gender does not adequately capture the complexities of gender or represent transgender identities. In this work, we express gender as a continuous value between 0 and 1. When thresholding at 0.5, we use the sex labels of `male' and `female' to define gender classes, as training datasets and evaluation benchmarks use this binary label system"
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"207f20f5635f22ce8918e6168b32d98689f50ba3",
"6df3b702830880127f7b705d1073f49ba2527b6f"
],
"answer": [
{
"evidence": [
"We evaluate the training set of the ILSVRC 2012 subset of ImageNet (1000 synsets) and the `person' hierarchical synset of ImageNet (2833 synsets) with the proposed methodology. Face detections that receive a confidence score of 0.9 or higher move forward to the annotation phase. Statistics for both datasets are presented in Tables TABREF7 and TABREF10 . In these preliminary annotations, we find that females comprise only 41.62% of images in ILSVRC and 31.11% in the `person' subset of ImageNet, and people over the age of 60 are almost non-existent in ILSVRC, accounting for 1.71%."
],
"extractive_spans": [
"people over the age of 60"
],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluate the training set of the ILSVRC 2012 subset of ImageNet (1000 synsets) and the `person' hierarchical synset of ImageNet (2833 synsets) with the proposed methodology. Face detections that receive a confidence score of 0.9 or higher move forward to the annotation phase. Statistics for both datasets are presented in Tables TABREF7 and TABREF10 . In these preliminary annotations, we find that females comprise only 41.62% of images in ILSVRC and 31.11% in the `person' subset of ImageNet, and people over the age of 60 are almost non-existent in ILSVRC, accounting for 1.71%."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: Table 2. Gender-biased Synsets, ILSVRC 2012 ImageNet Subset"
],
"extractive_spans": [],
"free_form_answer": "Females and males with age 75+",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2. Gender-biased Synsets, ILSVRC 2012 ImageNet Subset"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"How do they determine demographics on an image?",
"Do they assume binary gender?",
"What is the most underrepresented person group in ILSVRC?"
],
"question_id": [
"c27b885b1e38542244f52056abf288b2389b9fc6",
"1ce6c09cf886df41a3d3c52ce82f370c5a30334a",
"5429add4f166a3a66bec2ba22232821d2cbafd62"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 2. Gender-biased Synsets, ILSVRC 2012 ImageNet Subset",
"Table 3. Top-level Statistics of ImageNet ‘person’ Subset",
"Table 4. Gender-biased Synsets, ImageNet ‘person’ Subset"
],
"file": [
"3-Table2-1.png",
"3-Table3-1.png",
"3-Table4-1.png"
]
} | [
"How do they determine demographics on an image?",
"What is the most underrepresented person group in ILSVRC?"
] | [
[
"1905.01347-Gender Annotation-0",
"1905.01347-Apparent Age Annotation-0",
"1905.01347-Methodology-0",
"1905.01347-Introduction-4",
"1905.01347-Face Detection-0"
],
[
"1905.01347-Results-0",
"1905.01347-3-Table2-1.png"
]
] | [
"using model driven face detection, apparent age annotation and gender annotation",
"Females and males with age 75+"
] | 385 |
1711.04964 | Dynamic Fusion Networks for Machine Reading Comprehension | This paper presents a novel neural model - Dynamic Fusion Network (DFN), for machine reading comprehension (MRC). DFNs differ from most state-of-the-art models in their use of a dynamic multi-strategy attention process, in which passages, questions and answer candidates are jointly fused into attention vectors, along with a dynamic multi-step reasoning module for generating answers. With the use of reinforcement learning, for each input sample that consists of a question, a passage and a list of candidate answers, an instance of DFN with a sample-specific network architecture can be dynamically constructed by determining what attention strategy to apply and how many reasoning steps to take. Experiments show that DFNs achieve the best result reported on RACE, a challenging MRC dataset that contains real human reading questions in a wide variety of types. A detailed empirical analysis also demonstrates that DFNs can produce attention vectors that summarize information from questions, passages and answer candidates more effectively than other popular MRC models. | {
"paragraphs": [
[
"The goal of Machine Reading Comprehension (MRC) is to have machines read a text passage and then generate an answer (or select an answer from a list of given candidates) for any question about the passage. There has been a growing interest in the research community in exploring neural MRC models in an end-to-end fashion, thanks to the availability of large-scale datasets, such as CNN/DM BIBREF0 and SQuAD BIBREF1 .",
"Despite the variation in model structures, most state-of-the-art models perform reading comprehension in two stages. First, the symbolic representations of passages and questions are mapped into vectors in a neural space. This is commonly achieved via embedding and attention BIBREF2 , BIBREF3 or fusion BIBREF4 . Then, reasoning is performed on the vectors to generate the right answer.",
"Ideally, the best attention and reasoning strategies should adapt organically in order to answer different questions. However, most MRC models use a static attention and reasoning strategy indiscriminately, regardless of various question types. One hypothesis is because these models are optimized on those datasets whose passages and questions are domain-specific (or of a single type). For example, in CNN/DM, all the passages are news articles, and the answer to each question is an entity in the passage. In SQuAD, the passages came from Wikipedia articles and the answer to each question is a text span in the article. Such a fixed-strategy MRC model does not adapt well to other datasets. For example, the exact-match score of BiDAF BIBREF2 , one of the best models on SQuAD, drops from 81.5 to 55.8 when applied to TriviaQA BIBREF5 , whereas human performance is 82.3 and 79.7 on SQuAD and TriviaQA, respectively.",
"In real-world MRC tasks, we must deal with questions and passages of different types and complexities, which calls for models that can dynamically determine what attention and reasoning strategy to use for any input question-passage pair on the fly. In a recent paper, BIBREF6 proposed dynamic multi-step reasoning, where the number of reasoning steps is determined spontaneously (using reinforcement learning) based on the complexity of the input question and passage. With a similar intuition, in this paper we propose a novel MRC model which is dynamic not only on the number of reasoning steps it takes, but also on the way it performs attention. To the best of our knowledge, this is the first MRC model with this dual-dynamic capability.",
"The proposed model is called a Dynamic Fusion Network (DFN). In this paper, we describe the version of DFN developed on the RACE dataset BIBREF7 . In RACE, a list of candidate answers is provided for each passage-question pair. So DFN for RACE is a scoring model - the answer candidate with the highest score will be selected as the final answer.",
"Like other MRC models, DFNs also perform machine reading in two stages: attention and reasoning. DFN is unique in its use of a dynamic multi-strategy attention process in the attention stage. Here “attention” refers to the process that texts from different sources (passage, question, answers) are combined in the network. In literature, a fixed attention mechanism is usually employed in MRC models. In DFN, the attention strategy is not static; instead, the actual strategy for drawing attention among the three text sources are chosen on the fly for each sample. This lends flexibility to adapt to various question types that require different comprehension skills. The output of the attention stage is then fed into the reasoning module to generate the answer score. The reasoning module in DFN uses dynamic multi-step reasoning, where the number of steps depends on the complexity of the question-passage pair and varies from sample to sample.",
"Inspired by ReasoNet BIBREF6 and dynamic neural module networks BIBREF8 , we use deep reinforcement learning methods BIBREF9 , BIBREF10 to dynamically choose the optimal attention strategy and the optimal number of reasoning steps for a given sample. We use RL in favor of other simpler methods (like cascading, pooling or weighted averaging) mainly because we intend to learn a policy that constructs an instance of DFN of a sample-specific structure. Given an input sample consisting of a question, a passage and a list of candidate answers in RACE, an instance of DFN can be constructed via RL step by step on the fly. Such a policy is particularly appealing as it also provides insights on how the model performs on different types of questions. At each decision step, the policy maps its “state”, which represents an input sample, and DFN's partial knowledge of the right answer, to the action of assembling proper attention and reasoning modules for DFN.",
"Experiments conducted on the RACE dataset show that DFN significantly outperforms previous state-of-the-art MRC models and has achieved the best result reported on RACE. A thorough empirical analysis also demonstrates that DFN is highly effective in understanding passages of a wide variety of styles and answering questions of different complexities."
],
[
"The recent progress in MRC is largely due to the introduction of large-scale datasets. CNN/Daily Mail BIBREF0 and SQuAD BIBREF1 are two popular and widely-used datasets. More recently, other datasets using different collection methodologies have been introduced, such as MS MARCO BIBREF11 , NewsQA BIBREF12 and RACE BIBREF7 . For example, MS MARCO collects data from search engine queries and user-clicked results, thus contains a broader topic coverage than Wikipedia and news articles in SQuAD and CNN/Daily Mail. Among the large number of MRC datasets, RACE focuses primarily on developing MRC models with near-human capability. Questions in RACE come from real English exams designed specifically to test human comprehension. This makes RACE an appealing testbed for DFN; we will further illustrate this in Section \" RACE - The MRC Task\" .",
"The word “fusion” for MRC was first used by FusionNet BIBREF4 to refer to the process of updating the representation of passage (or question) using information from the question (or passage) representation. A typical way of fusion is through attention: for example, BiDAF BIBREF2 uses a bi-directional attention, where the representation of passage (or question) vectors are re-weighted by their similarities to the question (or passage) vectors. We will use “fusion” and “attention” interchangeably throughout the paper.",
"In the attention process of state-of-the-art MRC models, a pre-defined attention strategy is often applied. BIBREF13 proposed a Bi-directional Multi-Perspective Matching (BiMPM) model, which uses attention with multiple perspectives characterized by different parameters. Although multi-perspective attention might be able to handle different types of questions, all perspectives are used for all the questions. DFN is inspired by BiMPM, but our dynamic attention process is more adaptive to variations of questions.",
"Another important component of MRC systems is the answer module, which performs reasoning to generate the final prediction. The reasoning methods in existing literature can be grouped into three categories: 1) single-step reasoning BIBREF14 , BIBREF15 , BIBREF2 , BIBREF16 ; 2) multi-step reasoning with a fixed number of steps BIBREF17 , BIBREF18 , BIBREF19 ; and 3) dynamic multi-step reasoning (ReasoNet BIBREF6 ). In particular, BIBREF19 proposed handling the variations in passages and questions using Maxout units and iterative reasoning. However, this model still applies static attention and reasoning (with fixed multiple steps), where the same attention strategy is applied to all questions. DFN can be seen as an extension of ReasoNet, in the sense that the dynamic strategy is applied not only in the reasoning process but also in the attention process.",
"The idea of dynamic attention has been applied to article recommendations BIBREF20 . For MRC, Andreas et al. (2016) proposed a dynamic decision process for reading comprehension task BIBREF8 . In their dynamic neural module networks, the MRC task is divided into several predefined steps (e.g., finding, lookup, relating), and a neural network is dynamically composed via RL based on parsing information. In DFN, we also incorporate dynamic decisions, but instead of using fixed steps, we apply dynamic decisions to various attention strategies and flexible reasoning steps."
],
[
"In this section, we first give a brief introduction to the RACE dataset, and then explain the rationale behind choosing RACE as the testbed in our study."
],
[
"RACE (Reading Comprehension Dataset From Examinations) is a recently released MRC dataset consisting of 27,933 passages and 97,867 questions from English exams, targeting Chinese students aged 12-18. RACE consists of two subsets, RACE-M and RACE-H, from middle school and high school exams, respectively. RACE-M has 28,293 questions and RACE-H has 69,574. Each question is associated with 4 candidate answers, one of which is correct. The data generation process of RACE differs from most MRC datasets - instead of generating questions and answers by heuristics or crowd-sourcing, questions in RACE are specifically designed for testing human reading skills, and are created by domain experts."
],
[
"The RACE dataset has some distinctive characteristics compared to other datasets, making it an ideal testbed for developing generic MRC systems for real-world human reading tasks.",
"Variety in Comprehension Skills. RACE requires a much broader spectrum of comprehension skills than other MRC datasets. Figure 1 shows some example questions from RACE and SQuAD: most SQuAD questions lead to direct answers that can be found in the original passage, while questions in RACE require more sophisticated reading comprehension skills such as summarizing (1st question), inference (2nd question) and deduction (3rd question). For humans, various tactics and skills are required to answer different questions. Similarly, it is important for MRC systems to adapt to different question types.",
"Complexity of Answers. As shown in Figure 2 , the answers in CNN/DM dataset are entities only. In SQuAD-like datasets, answers are often constrained to spans in the passage. Different from these datasets, answer candidates in RACE are natural language sentences generated by human experts, which increases the difficulty of the task. Real-world machine reading tasks are less about span exact matching, and more about summarizing the content and extending the obtained knowledge through reasoning.",
"Multi-step reasoning. Reasoning is an important skill in human reading comprehension. It refers to the skill of making connection between sentences and summarizing information throughout the passage. Table 1 shows a comparison on the requirement of reasoning level among different datasets. The low numbers on SQuAD and CNN/DM show that reasoning skills are less critical in getting the correct answers in these datasets, whereas such skills are essential for answering RACE questions."
],
[
"In this section, we present the model details of DFN. Section \"Conclusion\" describes the overall architecture, and each component is explained in detail in subsequent subsections. Section \" Training Details\" describes the reinforcement learning methods used to train DFN."
],
[
"The overall architecture of DFN is depicted by Figure 3 . The input is a question $Q$ in length $l_q$ , a passage $P$ in length $l_p$ , and a list of $r$ answer candidates $\\mathcal {A}=\\lbrace A_1,...,A_r\\rbrace $ in length $l_{a}^1,...,l_a^r$ . The model produces scores $c_1, c_2, ..., c_r$ for each answer candidate $A_1, A_2, ..., A_r$ respectively. The final prediction module selects the answer with the highest score.",
"The architecture consists of a standard Lexicon Encoding Layer and a Context Encoding Layer, on top of which are a Dynamic Fusion Layer and a Memory Generation Layer. The Dynamic Fusion Layer applies different attention strategies to different question types, and the Memory Generation Layer encodes question-related information in the passage for answer prediction. Multi-step reasoning is conducted over the output from the Dynamic Fusion and Memory Generation layers, in the Answer Scoring Module. The final output of the model is an answer choice $C\\in \\lbrace 1,2,...,r\\rbrace $ from the Answer Prediction Module.",
"In the following subsections, we will describe the details of each component in DFN (bold letters represent trainable parameters)."
],
[
"The first layer of DFN transforms each word in the passage, question and answer candidates independently into a fixed-dimension vector. This vector is the concatenation of two parts. The first part is the pre-trained GloVe embedding BIBREF21 of each word. For each out-of-vocabulary word, we map it to an all-zero vector. The second part is the character encodings. This is carried out by mapping each character to a trainable embedding, and then feeding all characters into an LSTM BIBREF22 . The last state of this LSTM is used as the character encodings. The output of the Lexicon Encoding layer is a set of vectors for $Q,P$ and each answer candidate in $\\mathcal {A}$ , respectively: $Q^\\text{embed}=\\lbrace q^\\text{embed}_i\\rbrace _{i=1}^{l_q}, P^\\text{embed}=\\lbrace p^\\text{embed}_i\\rbrace _{i=1}^{l_p}$ , and $A^\\text{embed}_j=\\lbrace a^\\text{embed}_{i,j}\\rbrace _{i=1}^{l_a^j}, j=1,2,...,r$ ."
],
[
"The Context Encoding Layer passes $Q^{\\text{embed}}, p^{\\text{embed}}$ and $A^{\\text{embed}}$ into a bi-directional LSTM (BiLSTM) to obtain context representations. Since answer candidates $A_1,...,A_r$ are not always complete sentences, we append the question before each answer candidate and feed the concatenated sentence into BiLSTM. We use the same BiLSTM to encode the information in $P,Q$ and $\\mathcal {A}$ . The obtained context vectors are represented as: $\n&Q^\\text{c}=\\textbf {BiL}\\textbf {STM}_1(Q^\\text{embed})=\\lbrace \\overrightarrow{q_i^\\text{c}},\\overleftarrow{q_i^\\text{c}}\\rbrace _{i=1}^{l_q},\\\\\n&P^\\text{c}=\\textbf {BiL}\\textbf {STM}_1(P^\\text{embed})=\\lbrace \\overrightarrow{p_i^\\text{c}},\\overleftarrow{p_i^\\text{c}}\\rbrace _{i=1}^{l_p},\\\\\n&(Q+A)^\\text{c}_j=\\textbf {BiLSTM}_1(Q^\\text{embed}+A^\\text{embed}_j)\\\\\n&=\\lbrace \\overrightarrow{a^\\text{c}_{i,j}},\\overleftarrow{a^\\text{c}_{i,j}}\\rbrace _{i=1}^{l_p+l_a^j}, j=1,2,...,r.\n$ "
],
[
"This layer is the core of DFN. For each given question-passage pair, one of $n$ different attention strategies is selected to perform attention across the passage, question and answer candidates.",
"The dynamic fusion is conducted in two steps: in the first step, an attention strategy $G\\in \\lbrace 1,2,...,n\\rbrace $ is randomly sampled from the output of the strategy gate $f^\\text{sg}(Q^c)$ . The strategy gate takes input from the last-word representation of the question $Q^\\text{c}$ , and outputs a softmax over $\\lbrace 1,2,...n\\rbrace $ . In the second step, the $G$ -th attention strategy is activated, and computes the attention results according to $G$ -th strategy. Each strategy, denoted by Attention $_k,k=1,2,...,n$ , is essentially a function of $Q^\\text{c},P^\\text{c}$ and one answer candidate $(Q+A)^\\text{c}_j$ that performs attention in different directions. The output of each strategy is a fixed-dimension representation, as the attention result. $\nf^\\text{sg}(Q^c) \\leftarrow & \\text{softmax}(\\mathbf {W_1}(\\overrightarrow{q_{l_q}^\\text{c}};\\overleftarrow{q_{1}^\\text{c}})\\\\\nG\\sim & \\text{Category}\\left(f^\\text{sg}(Q^c)\\right),\\\\\ns_j\\leftarrow & \\textbf {Attention}_{G}(Q^\\text{c},P^\\text{c},(Q+A)^\\text{c}_j),\\\\\n& j=1,2,...,r.\n$ ",
"Attention Strategies. For experiment on RACE, we choose $n=3$ and use the following strategies:",
"Integral Attention: We treat the question and answer as a whole, and attend each word in $(Q+A)_j^\\text{c}$ to the passage $P^\\text{c}$ (Figure 4 ). This handles questions with short answers (e.g., the last question in upper box of Figure 1 ).",
"Formally, $Q^{\\text{int}}_{j},A^{\\text{int}}_{j}\\leftarrow \\text{Split}\\left((Q+A)^\\text{c}_j\\operatornamewithlimits{{\\color {blue} \\triangleright }}P^\\text{c}\\right).\n$ ",
"The operator $\\operatornamewithlimits{{\\color {blue} \\triangleright }}$ represents any one-sided attention function. For DFN, we use the single direction version of multi-perspective matching in BiMPM BIBREF13 ; For two text segments $X,X^{\\prime }\\in \\lbrace P,Q,A_j,(Q+A)_j\\rbrace $ , $X \\operatornamewithlimits{{\\color {blue} \\triangleright }}X^{\\prime }$ matches each word $w\\in X$ with respect to the whole sentence $X^{\\prime }$ , and has the same length as $X$ . We defer details of the $\\operatornamewithlimits{{\\color {blue} \\triangleright }}$ operator to Section \"Memory Generation Layer\" when we introduce our memory generation.",
"The Split $()$ function splits a vector representation in length $l_q+l_a^j$ into two vector representations in length $l_q$ and $l_a^j$ , to be consistent with other strategies.",
"Answer-only Attention: This strategy only attends each word in the answer candidate to the passage (Figure 4 ), without taking the question into consideration. This is to handle questions with full-sentence answer candidates (e.g., the first and the third questions in the upper box of Figure 1 ). $ M_a\\leftarrow & A_j^\\text{c}\\operatornamewithlimits{{\\color {blue} \\triangleright }}P^\\text{c},\\\\\nQ_j^{\\text{aso}},A_j^{\\text{aso}}\\leftarrow &Q^\\text{c},M_a.\n$ ",
"Entangled Attention: As shown in Figure 4 , each word in question and answer is attended to the passage, denoted by $M_q$ and $M_a$ . Then, we entangle the results by attending each word in $M_q$ to $M_a$ , and also $M_a$ to $M_q$ . This attention is more complicated than the other two mentioned above, and targets questions that require reasoning (e.g., the second question in the upper box of Figure 1 ). $\nM_q\\leftarrow & Q^\\text{c}\\operatornamewithlimits{{\\color {blue} \\triangleright }}P^\\text{c}\\\\\nM_a\\leftarrow & A_j^\\text{c}\\operatornamewithlimits{{\\color {blue} \\triangleright }}P^\\text{c},\\\\\nQ_j^{\\text{ent}},A_j^{\\text{ent}}\\leftarrow &M_q \\operatornamewithlimits{{\\color {blue} \\triangleright }}M_a, M_a \\operatornamewithlimits{{\\color {blue} \\triangleright }}M_q.\n$ ",
"We can incorporate a large number of strategies into the framework depending on the question types we need to deal with. In this paper, we use three example strategies to demonstrate the effectiveness of DFN.",
"Attention Aggregation. Following previous work, we aggregate the result of each attention strategy through a BiLSTM. The first and the last states of these BiLSTMs are used as the output of the attention strategies. We use different BiLSTM for different strategies, which proved to slightly improve the model performance. $\nQ_j^x,A_j^x\\leftarrow &\\textbf {BiLSTM}^x(Q_j^x), \\textbf {BiLSTM}^x(A_j^x),\\\\\n\\textbf {Attention}_k\\leftarrow & \\text{FinalState}(Q_j^x,A_j^x),\\\\\n&\\text{ for } (k,x)\\in \\lbrace \\text{(1,int),(2,aso),(3,ent)}\\rbrace .\n$ ",
"The main advantages of dynamic multi-strategy fusion are three-fold: 1) It provides adaptivity for different types of questions. This addresses the challenge in the rich variety of comprehension skills aforementioned in Section \"Distinctive Characteristics in RACE\" . The key to adaptivity is the strategy gate $G$ . Our observation is that the model performance degrades when trained using simpler methods such as max-pooling or model averaging. 2) The dynamic fusion takes all three elements (question, passage and answer candidates) into account in the attention process. This way, answer candidates are fused together with the question and the passage to get a complete understanding of the full context. 3) There is no restriction on the attention strategy used in this layer, which allows flexibility for incorporating existing attention mechanisms.",
"Although some of the attention strategies appear to be straightforward (e.g., long/short answers), it is difficult to use simple heuristic rules for strategy selection. For example, questions with a placeholder “_” might be incomplete question sentences that require integral attention; but in some questions (e.g., “we can infer from the passage that _ .”), the choices are full sentences and the answer-only attention should be applied here instead. Therefore, we turn to reinforcement learning methods (see Section \" Training Details\" ) to optimize the choice of attention strategies, which leads to a policy that give important insights on our model behavior."
],
[
"A memory is generated for the answer module in this layer. The memory $M$ has the same length as $P$ , and is the result of attending each word in $P^\\text{c}$ to the question $Q^\\text{c}$ (Figure 4 ). We use the same attention function for $M$ as that for attention strategies, and then aggregate the results. The memory is computed as $M\\leftarrow \\textbf {BiLSTM}_2(Q^\\text{c}\\operatornamewithlimits{{\\color {blue} \\triangleright }}P^\\text{c})$ , where $\\operatornamewithlimits{{\\color {blue} \\triangleright }}$ is the attention operator specified as below.",
"Our attention operator takes the same form as BiMPM BIBREF13 . For simplicity, we use $P,Q,(Q+A)_j$ to denote $P^\\text{c},Q^\\text{c}$ and $(Q+A)^\\text{c}_j$ in this section. Recall that for $X,X^{\\prime }\\in \\lbrace P,Q,A_j,(Q+A)_j\\rbrace $ , and $X \\operatornamewithlimits{{\\color {blue} \\triangleright }}X^{\\prime }$ computes the relevance of each word $w\\in X$ with respect to the whole sentence $X^{\\prime }$ . $X \\operatornamewithlimits{{\\color {blue} \\triangleright }}X^{\\prime }$ has the same length as $X^{\\prime }$ . Each operation $\\operatornamewithlimits{{\\color {blue} \\triangleright }}$ is associated with a set of trainable weights denoted by $P^\\text{c},Q^\\text{c}$0 . For $P^\\text{c},Q^\\text{c}$1 in different strategies, we use different sets of trainable weights; the only exception is for $P^\\text{c},Q^\\text{c}$2 computed both in Answer-only Attention and Entangled Attention: These two operations have the same weights since they are exactly the same. We find untying weights in different $P^\\text{c},Q^\\text{c}$3 operations can slightly improve our model performance.",
"We use a multi-perspective function to describe $\\operatornamewithlimits{{\\color {blue} \\triangleright }}$ . For any two vectors $v_1,v_2\\in \\mathbb {R}^d$ , define the multi-perspective function $g(v_1,v_2;\\mathbf {W})=\\left\\lbrace \\cos (\\mathbf {W}^{(k)} \\circ v_1, \\mathbf {W}^{(k)}\\circ v_2 )\\right\\rbrace _{k=1}^N, $ ",
"where $\\mathbf {W}\\in \\mathbb {R}^{N\\times d}$ is a trainable parameter, $N$ is a hyper-parameter (the number of perspectives), and $\\mathbf {W}^{(k)}$ denotes the $k$ -th row of $\\mathbf {W}$ . In our experiments, we set $N=10$ .",
"Now we define $X \\operatornamewithlimits{{\\color {blue} \\triangleright }}X^{\\prime }$ using $g$ and four different ways to combine vectors in text $X,X^{\\prime }$ . Denote by $x_i,x_i^{\\prime }\\in \\mathbb {R}^d$ the $i$ -th vector in $X,X^{\\prime }$ respectively. The function work concurrently for the forward and backward LSTM activations (generated by BiLSTM in the Context Encoding layer) in $X$ and $X^{\\prime }$ ; denoted by $\\overrightarrow{x}_i$ and $\\overleftarrow{x}_i$ , the forward and backward activations respectively (and similarly for $g$0 ). The output of $g$1 also has activations in two directions for further attention operation (e.g., in Entangled Attention). The two directions are concatenated before feeding into the aggregation BiLSTM.",
"Let $l_x,l^{\\prime }_x$ be the length of $X,X^{\\prime }$ respectively. $X\\operatornamewithlimits{{\\color {blue} \\triangleright }}X^{\\prime }$ outputs two groups of vectors $\\lbrace \\overrightarrow{u}_i,\\overleftarrow{u}_i \\rbrace _{i=1}^{l_x}$ by concatenating the following four parts below:",
"Full Matching: $\\overrightarrow{u}_i^{\\text{full}}=g(\\overrightarrow{x}_i,\\overrightarrow{x}_{l_x^{\\prime }},\\mathbf {W}_{o1}),$ $\\overleftarrow{u}_i^{\\text{full}}=g(\\overleftarrow{x}_i,\\overleftarrow{x}^{\\prime }_{1},\\mathbf {W}_{o2}). $ ",
"Maxpooling Matching: $\\overrightarrow{u}_i^{\\text{max}}=\\max _{j\\in \\lbrace 1,...,l_x\\rbrace }g(\\overrightarrow{x}_i,\\overrightarrow{x}_j^{\\prime },\\mathbf {W}_{o3}),$ $\\overleftarrow{u}_i^{\\text{max}}=\\max _{j\\in \\lbrace 1,...,l_x\\rbrace }g(\\overleftarrow{x}_i,\\overleftarrow{x}_j^{\\prime },\\mathbf {W}_{o4}),$ ",
"here $\\max $ means element-wise maximum.",
"Attentive Matching: for $j=1,2,...,N$ compute $\\overrightarrow{\\alpha }_{i,j}=\\cos (\\overrightarrow{x}_i,\\overrightarrow{x}_j^{\\prime }),\\overleftarrow{\\alpha }_{i,j}=\\cos (\\overleftarrow{x}_i,\\overleftarrow{x}_j^{\\prime }). $ ",
"Take weighted mean according to $\\overrightarrow{\\alpha }_{i,j},\\overleftarrow{\\alpha }_{i,j}$ : $\\overrightarrow{x}_i^{\\text{mean}}=\\frac{\\sum _{j=1}^{l_x^{\\prime }} \\overrightarrow{\\alpha }_{i,j} \\cdot \\overrightarrow{x}_j^{\\prime }}{\\sum _{j=1}^{l_x^{\\prime }} \\overrightarrow{\\alpha }_{i,j}},$ $\\overleftarrow{x}_i^{\\text{mean}}=\\frac{\\sum _{j=1}^{l_x^{\\prime }} \\overleftarrow{\\alpha }_{i,j} \\cdot \\overleftarrow{x}_j^{\\prime }}{\\sum _{j=1}^{l_x^{\\prime }} \\overleftarrow{\\alpha }_{i,j}}.$ ",
"Use multi-perspective function to obtain attentive matching: $\\overrightarrow{u}_i^{\\text{att}}=g(\\overrightarrow{x}_i,\\overrightarrow{x}_i^{\\text{mean}},\\mathbf {W}_{o5}),$ $\\overleftarrow{u}_i^{\\text{att}}=g(\\overleftarrow{x}_i,\\overleftarrow{x}_i^{\\text{mean}},\\mathbf {W}_{o6}).$ ",
"Max-Attentive Matching: The same as attentive matching, but taking the maximum over $\\overrightarrow{\\alpha }_{i,j},\\overleftarrow{\\alpha }_{i,j}, j=1,2,...,l_x^{\\prime }$ instead of using the weighted mean."
],
[
"This module performs multi-step reasoning in the neural space to generate the right answer. This unit adopts the architecture of ReasoNet BIBREF6 . We simulate multi-step reasoning with a GRU cell BIBREF23 to skim through the memory several times, changing its internal state as the skimming progresses. The initial state $s_j^{(0)}=s_j$ is generated from the Dynamic Fusion Layer for each answer candidate $j=1,2,...,r$ . We skim through the passage for at most $\\mathcal {T}_{\\max }$ times. In every step $t\\in \\lbrace 1,2,...,\\mathcal {T}_{\\max }\\rbrace $ , an attention vector $f^{(t)}_{\\text{att}}$ is generated from the previous state $s_j^{t-1}$ and the memory $M$ . To compute $f_{\\text{att}}$ , an attention score $a_{t,i}$ is computed based on each word $m_i$ in memory $j=1,2,...,r$0 and state $j=1,2,...,r$1 as $j=1,2,...,r$2 ",
"where $l_m=l_p$ is the memory length, and $\\mathbf {W_2},\\mathbf {W_3}$ are trainable weights. We set $\\lambda =10$ in our experiments. The attention vector is then computed as a weighted sum of memory vectors using attention scores, i.e., $f^{(t)}_{\\text{att}}\\leftarrow \\sum _{i=1}^{l_m} a_{i,j}^{(t)}m_i.$ Then, the GRU cell takes the attention vector $f_{\\text{att}}^{(t)}$ as input and changes its internal state. $\ns_j^{(0)}\\leftarrow s_j, \\;s_j^{(t)}\\leftarrow \\textbf {GRU}\\left(f_{\\text{att}}^{(t)},s_j^{(t-1)}\\right).\n$ ",
" To decide when to stop skimming, a termination gate (specified below) takes $s_j^{(t)}, j=1,...,r$ at step $t$ as the input, and outputs a probability $p_t$ of whether to stop reading. The number of reading steps is decided by sampling a Bernoulli variable $T_t$ with parameter $p_t$ . If $T_t$ is 1, the Answer Scoring Module stops skimming, and score $c_j\\leftarrow \\mathbf {W}_5\\text{ReLU}(\\mathbf {W}_4s_j^{(t)}) $ is generated for each answer candidate $j$ . The input to the termination gate in step $t$ is the state representation of all possible answers, $s_j^{(t)}, j=1,2,...,r$ . We do not use separate termination gates for each answer candidate. This is to restrain the size of the action space and variance in training. Since answers are mutable, the input weights for each answer candidate fed into the gate softmax are the same. $t$0 ",
"Answer Prediction. Finally, an answer prediction is drawn from the softmax distribution over the scores of each answer candidate: $C\\sim \\text{Softmax}\\left(c_1,c_2,...,c_r\\right). $ "
],
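The skimming loop with the stochastic termination gate and answer scoring can be sketched as below. The exact parameterization of the gate is not given in this excerpt; a sigmoid over a single weight vector shared across candidate states is assumed here purely for illustration.

```python
# Skimming loop with a stochastic termination gate and answer scoring (sketch, PyTorch).
import torch
import torch.nn.functional as F

def score_answers(M, s, gru, W2, W3, W4, W5, w_gate, t_max=5, lam=10.0):
    for t in range(t_max):
        scores = lam * F.cosine_similarity((M @ W2.T).unsqueeze(1),
                                           (s @ W3.T).unsqueeze(0), dim=2)
        f_att = torch.softmax(scores, dim=0).T @ M      # (r, h) attention vectors
        s = gru(f_att, s)                               # updated candidate states
        p_stop = torch.sigmoid((s @ w_gate).sum())      # assumed shared-weight gate
        if torch.bernoulli(p_stop).item() == 1 or t == t_max - 1:
            break
    c = torch.relu(s @ W4.T) @ W5                       # scalar score c_j per candidate
    return torch.softmax(c, dim=0)                      # distribution over answer candidates

r, h = 4, 100
probs = score_answers(torch.randn(50, h), torch.randn(r, h), torch.nn.GRUCell(h, h),
                      torch.randn(h, h), torch.randn(h, h),
                      torch.randn(h, h), torch.randn(h), torch.randn(h))
```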
[
"Since the strategy choice and termination steps are discrete random variables, DFN cannot be optimized by backpropagation directly. Instead, we see strategy choice $G$ , termination decision $T_t$ and final prediction $C$ as policies, and use the REINFORCE algorithm BIBREF24 to train the network. Let $T$ be the actual skimming steps taken, i.e., $T=\\min \\lbrace t:T_t=1\\rbrace $ . We define the reward $r$ to be 1 if $C$ (final answer) is correct, and 0 otherwise. Each possible value pair of $(C,G,T)$ corresponds to a possible episode, which leads to $r\\cdot n \\cdot \\mathcal {T}$ possible episodes. Let $\\pi (c,g,t;\\theta )$ be any policy parameterized by DFN parameter $T_t$0 , and $T_t$1 be the expected reward. Then: ",
"$$&\\nabla _\\theta J(\\theta )\\nonumber \\\\\n=&E_{\\pi (g,c,t;\\theta )}\\left[\\nabla _\\theta \\log \\pi (c,g,t;\\theta )(r-b)\\right]\\nonumber \\\\\n=&\\sum _{g,c,t}\\pi (g,c,t;\\theta )\\left[\\nabla _\\theta \\log \\pi (c,g,t;\\theta )(r-b)\\right].$$ (Eq. 29) ",
" where $b$ is a critic value function. Following BIBREF6 , we set $b=\\sum _{g,c,t}\\pi (g,c,t;\\theta )r$ and replace the $(r-b)$ term above by $(r/b-1)$ to achieve better performance and stability."
],
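The gradient in Eq. (29), with the baseline $b$ set to the expected reward and $(r-b)$ replaced by $(r/b-1)$, can be implemented as a surrogate loss. The sketch below assumes the joint policy probabilities over all (answer, strategy, step) episodes are available as a flat tensor, which is an illustrative simplification.

```python
# REINFORCE surrogate loss with the (r/b - 1) reweighting (sketch, PyTorch).
import torch

def reinforce_loss(log_pi, reward):
    # log_pi: (n_episodes,) log-probabilities pi(c, g, t; theta) of every
    #         possible (answer, strategy, step) episode for one example.
    # reward: (n_episodes,) reward of each episode (1 if the answer is correct).
    pi = log_pi.exp()
    b = (pi * reward).sum().detach()        # critic value: expected reward under pi
    advantage = reward / (b + 1e-8) - 1.0   # (r/b - 1) in place of (r - b)
    return -(pi * advantage).sum()          # minimizing this ascends J(theta)

logits = torch.randn(60, requires_grad=True)        # e.g. 4 answers x 3 strategies x 5 steps
log_pi = torch.log_softmax(logits, dim=0)
reward = (torch.rand(60) > 0.9).float()
loss = reinforce_loss(log_pi, reward)
loss.backward()
```

Detaching `b` treats the baseline as a constant critic, so its own gradient does not leak into the policy update, matching the role of the critic value function described above.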
[
"To evaluate the proposed DFN model, we conducted experiments on the RACE dataset. Statistics of the training/dev/test data are provided in Table 2 . In this section, we present the experimental results, with a detailed analysis on the dynamic selection of strategies and multi-step reasoning. An ablation study is also provided to demonstrate the effectiveness of dynamic fusion and reasoning in DFN."
],
[
"Most of our parameter settings follow BIBREF13 and BIBREF6 . We use ( 29 ) to update the model, and use ADAM BIBREF25 with a learning rate of 0.001 and batch size of 64 for optimization. A small dropout rate of 0.1 is applied to each layer. For word embedding, we use 300-dimension GloVe BIBREF21 embedding from the 840B Common Crawl corpus. The word embeddings are not updated during training. The character embedding has 20 dimensions and the character LSTM has 50 hidden units. All other LSTMs have a hidden dimension of 100. The maximum reasoning step $\\mathcal {T}$ is set to 5. We limit the length of passage/question/answer to a maximum of 500/100/100 for efficient computation. We also train an ensemble model of 9 DFNs using randomly initialized parameters. Training usually converges within 20 epochs. The model is implemented with Tensorflow BIBREF26 and the source code will be released upon paper acceptance."
],
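For reference, the hyperparameters stated above can be gathered in one place; the dictionary and its key names are merely an illustrative convention, not part of any released code.

```python
# Training configuration as stated in the text (sketch; key names are illustrative).
dfn_config = {
    "optimizer": "Adam",
    "learning_rate": 0.001,
    "batch_size": 64,
    "dropout": 0.1,
    "word_embedding": {"dim": 300, "source": "GloVe 840B Common Crawl", "trainable": False},
    "char_embedding_dim": 20,
    "char_lstm_hidden": 50,
    "lstm_hidden": 100,
    "max_reasoning_steps": 5,
    "max_lengths": {"passage": 500, "question": 100, "answer": 100},
    "ensemble_size": 9,
}
```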
[
"Table 3 shows a comparison between DFN and a few previously proposed models. All models were trained with the full RACE dataset, and tested on RACE-M and RACE-H, respectively. As shown in the table, on RACE-M, DFN leads to a 7.8% and 7.3% performance boost over GA and Stanford AR, respectively. On RACE-H, the outperformance is 1.5% and 2.7%. The ensemble models also gained a performance boost of 4-5% comparing to previous methods. We suspect that the lower gain on RACE-H might result from the higher level of difficulty in those questions in RACE-H, as well as ambiguity in the dataset. Human performance drops from 85.1 on RACE-M to 69.4 on RACE-H, which indicates RACE-H is very challenging even for human.",
"Figure 5 shows six randomly-selected questions from the dataset that DFN answered correctly, grouped by their attention strategies. Recall that the three attention strategies proposed for this task are: 1) Integral Attention for short answers; 2) Answer-only Attention for long answers; and 3) Entangled Attention for deeper reasoning. Question 1 and 2 in Figure 5 present two examples that used Integral Attention. In both of the questions, the question and answer candidates are partial sentences. So the system chose Integral Attention in this case. In the first question, DFN used 3 steps of reasoning, which indicates the question requires some level of reasoning (e.g., resolving coreference of “the third way”). In the second question, the combined sentence comes directly from the passage, so DFN only used 1 step of reasoning.",
"Question 3 and 4 in Figure 5 provide two instances that use answer-only attentions. As shown in these examples, Answer-only attention usually deals with long and natural language answer candidates. Such answers cannot be derived without the model reading through multiple sentences in the passage, and this requires multi-step reasoning. So in both examples, the system went through 5 steps of reasoning.",
"Question 5 and 6 in Figure 5 show two examples that used the Entangled Attention. Both questions require a certain level of reasoning. Question 5 asks for the causes of a scenario, which is not explicitly mentioned in the passage. And question 6 asks for a counting of concepts, which is implicit and has to be derived from the text as well. For both cases, the entangled attention was selected by the model. As for the reasoning steps, we find that for the majority of questions that use Entangled Attention, DFN only uses one reasoning step. This is probably because entangled attention is powerful enough to derive the answer.",
"We also examined the strategy choices with respect to certain keywords. For each word $w$ in vocabulary, we computed the distribution $\\Pr [G,T|w\\in Q]$ , i.e., the conditional distribution of strategy and step when $w$ appeared in the question. Table 4 provides some keywords and their associated dominant strategies and step choices. The results validate the assumption that DFN dynamically selects specific attention strategy based on different question types. For example, the underline “_” indicates that the question and choice should be concatenated to form a sentence. This led to Integral Attention being most favorable when “_” is present. In another example, “not” and “except” usually appear in questions like “Which of the following is not TRUE”. Such questions usually have long answer candidates that require more reasoning. So Answer-only Attention with Reasoning Step#5 became dominant."
],
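The keyword analysis above amounts to tabulating the conditional distribution Pr[G, T | w in Q] from logged predictions; a minimal sketch follows, with hypothetical record fields.

```python
# Tabulating Pr[G, T | w in question] from logged predictions (sketch).
from collections import Counter, defaultdict

def strategy_step_distribution(predictions):
    # predictions: iterable of dicts with hypothetical fields
    #   {"question_tokens": [...], "strategy": g, "steps": t}
    counts = defaultdict(Counter)
    for p in predictions:
        for w in set(p["question_tokens"]):
            counts[w][(p["strategy"], p["steps"])] += 1
    return {w: {gt: n / sum(c.values()) for gt, n in c.items()}
            for w, c in counts.items()}

preds = [{"question_tokens": ["what", "is", "not", "true"], "strategy": "answer-only", "steps": 5},
         {"question_tokens": ["the", "writer", "_"], "strategy": "integral", "steps": 1}]
dist = strategy_step_distribution(preds)
```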
[
"For ablation studies, we conducted experiments with 4 different model configurations:",
"The full DFN model with all the components aforementioned.",
"DFN without dynamic fusion (DF). We dropped the Strategy Gate $G$ , and used only one attention strategy in the Dynamic Fusion Layer.",
"DFN without multi-step reasoning (MR). Here we dropped the Answer Scoring Module, and used the output of Dynamic Fusion Layer to generate a score for each answer.",
"DFN without DF and MR.",
"To select the best strategy for each configuration, we trained 3 different models for ii) and iv), and chose the best model based on their performance on the dev set. This explains the smaller performance gap between the full model and ablation models on the dev set than that on the test set. Experimental results show that for both ii) and iv), the Answer-Only Attention gave the best performance.",
"To avoid variance in training and provide a fair comparison, 3 ensembles of each model were trained and evaluated on both dev and test sets. As shown in Table 5 , the DFN model has a 1.6% performance gain over the basic model (without DF and MR). This performance boost was contributed by both multi-step reasoning and dynamic fusion. When omitting DF or MR alone, the performance of DFN model dropped by 1.1% and 1.2%, respectively.",
"To validate the effectiveness of the DFN model, we also performed a significance test and compared the full model with each ablation model. The null hypothesis is: the full DFN model has the same performance as the ablation model. As shown in Table 5 , the combination of DF and MR leads to an improvement with a statistically significant margin in our experiments, although neither DF or MR can, individually."
],
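The text does not name the significance test used; one common choice for comparing two systems' per-question correctness is a paired bootstrap, sketched below under that assumption.

```python
# Paired bootstrap significance test over per-question correctness (sketch, NumPy).
import numpy as np

def paired_bootstrap_pvalue(correct_full, correct_ablation, n_resamples=10000, seed=0):
    # correct_full, correct_ablation: 0/1 arrays over the same test questions.
    rng = np.random.default_rng(seed)
    diff = np.asarray(correct_full, float) - np.asarray(correct_ablation, float)
    observed_gain = diff.mean()
    # Resample question indices with replacement and count how often the
    # full model fails to outperform the ablation.
    idx = rng.integers(0, len(diff), size=(n_resamples, len(diff)))
    p_value = float((diff[idx].mean(axis=1) <= 0).mean())
    return p_value, observed_gain

p_value, gain = paired_bootstrap_pvalue(np.random.binomial(1, 0.52, 1000),
                                        np.random.binomial(1, 0.50, 1000))
```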
[
"In this work, we propose a novel neural model - Dynamic Fusion Network (DFN), for MRC. For a given input sample, DFN can dynamically construct an model instance with a sample-specific network structure by picking an optimal attention strategy and an optimal number of reasoning steps on the fly. The capability allows DFN to adapt effectively to handling questions of different types. By training the policy of model construction with reinforcement learning, our DFN model can substantially outperform previous state-of-the-art MRC models on the challenging RACE dataset. Experiments show that by marrying dynamic fusion (DF) with multi-step reasoning (MR), the performance boost of DFN over baseline models is statistically significant. For future directions, we plan to incorporate more comprehensive attention strategies into the DFN model, and to apply the model to other challenging MRC tasks with more complex questions that need DF and MR jointly. Future extension also includes constructing a “composable” structure on the fly - by making the Dynamic Fusion Layer more flexible than it is now."
]
],
"section_name": [
"Introduction",
"Related Work",
" RACE - The MRC Task",
" The Dataset",
"Distinctive Characteristics in RACE",
"Dynamic Fusion Networks",
"Model Architecture",
"Lexicon Encoding Layer",
"Context Encoding Layer",
"Dynamic Fusion Layer",
"Memory Generation Layer",
"Answer Scoring Module",
" Training Details",
"Experiments",
"Parameter Setup",
"Model Performance",
"Ablation Studies",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"20cb93d5d46a050a690a36186d3e2b407eadb696",
"ec5cf556a73f497a071d8af1b7c844c02d290c4b"
],
"answer": [
{
"evidence": [
"Table 3 shows a comparison between DFN and a few previously proposed models. All models were trained with the full RACE dataset, and tested on RACE-M and RACE-H, respectively. As shown in the table, on RACE-M, DFN leads to a 7.8% and 7.3% performance boost over GA and Stanford AR, respectively. On RACE-H, the outperformance is 1.5% and 2.7%. The ensemble models also gained a performance boost of 4-5% comparing to previous methods. We suspect that the lower gain on RACE-H might result from the higher level of difficulty in those questions in RACE-H, as well as ambiguity in the dataset. Human performance drops from 85.1 on RACE-M to 69.4 on RACE-H, which indicates RACE-H is very challenging even for human."
],
"extractive_spans": [],
"free_form_answer": "7.3% on RACE-M and 1.5% on RACE-H",
"highlighted_evidence": [
"As shown in the table, on RACE-M, DFN leads to a 7.8% and 7.3% performance boost over GA and Stanford AR, respectively. On RACE-H, the outperformance is 1.5% and 2.7%."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To avoid variance in training and provide a fair comparison, 3 ensembles of each model were trained and evaluated on both dev and test sets. As shown in Table 5 , the DFN model has a 1.6% performance gain over the basic model (without DF and MR). This performance boost was contributed by both multi-step reasoning and dynamic fusion. When omitting DF or MR alone, the performance of DFN model dropped by 1.1% and 1.2%, respectively."
],
"extractive_spans": [
"1.6%"
],
"free_form_answer": "",
"highlighted_evidence": [
"To avoid variance in training and provide a fair comparison, 3 ensembles of each model were trained and evaluated on both dev and test sets. As shown in Table 5 , the DFN model has a 1.6% performance gain over the basic model (without DF and MR). "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"ca2a4695129d0180768a955fb5910d639f79aa34",
"35491e1e579f6d147f4793edce4c1a80ab2410e7"
]
}
],
"nlp_background": [
"five"
],
"paper_read": [
"no"
],
"question": [
"How much improvement is given on RACE by their introduced approach?"
],
"question_id": [
"8d3f79620592d040f9f055b4fce0f73cc45aab63"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"Machine Reading"
],
"topic_background": [
"familiar"
]
} | {
"caption": [
"Figure 1: Above: Questions from English exams. Below: Questions from SQuAD.",
"Table 1: Percentage of questions in each dataset that requires Single-sentence Inference(SI) and Multi-sentence Inference(MI), from (Lai et al., 2017).",
"Figure 2: Left: Examples from RACE. Right: Examples from CNN dataset.",
"Figure 3: Architecture of MUSIC. i) Passage, question and answer choices are mapped through word and character embeddings in the Embedding Layer. ii) The embeddings are then fed into a Bi-LSTM in the Context Layer. iii) The Gated Matching Layer makes customized attention matchings across the three representations from passage, question and answer choices. iv) The Multi-step Reasoning Unit reads through the memory for a dynamic number of steps. v) Answer prediction gives the final answer.",
"Figure 4: Examples of choosing Matching Gate and number of inference steps in different questions types. Left: The question and the answer choice are part of a full sentence, a case handled by Gate 1, with one-step inference. Middle: A complex question with natural language answer choices, a case where Gate 2 was applied and 5 inteference steps were required. Right: A question that requires deep inference, a case handled by Gate 3, with one-step inference.",
"Table 2: Accuracy% of MUSIC compared to baseline methods on RACE test sets. Results of baseline models come from (Lai et al., 2017) and unpublished (Anonymous, 2018). Note that ElimiNetEnsemble ensembles 6 equivalent models.",
"Table 3: Ablation studies of MUSIC for multistrategy matching (MM) and multi-step reasoning (MR)."
],
"file": [
"1-Figure1-1.png",
"3-Table1-1.png",
"4-Figure2-1.png",
"5-Figure3-1.png",
"7-Figure4-1.png",
"7-Table2-1.png",
"7-Table3-1.png"
]
} | [
"How much improvement is given on RACE by their introduced approach?"
] | [
[
"1711.04964-Model Performance-0",
"1711.04964-Ablation Studies-6"
]
] | [
"7.3% on RACE-M and 1.5% on RACE-H"
] | 389 |
1611.01116 | Binary Paragraph Vectors | Recently Le&Mikolov described two log-linear models, called Paragraph Vector, that can be used to learn state-of-the-art distributed representations of documents. Inspired by this work, we present Binary Paragraph Vector models: simple neural networks that learn short binary codes for fast information retrieval. We show that binary paragraph vectors outperform autoencoder-based binary codes, despite using fewer bits. We also evaluate their precision in transfer learning settings, where binary codes are inferred for documents unrelated to the training corpus. Results from these experiments indicate that binary paragraph vectors can capture semantics relevant for various domain-specific documents. Finally, we present a model that simultaneously learns short binary codes and longer, real-valued representations. This model can be used to rapidly retrieve a short list of highly relevant documents from a large document collection. | {
"paragraphs": [
[
"One of the significant challenges in contemporary information processing is the sheer volume of available data. BIBREF0 , for example, claim that the amount of digital data in the world doubles every two years. This trend underpins efforts to develop algorithms that can efficiently search for relevant information in huge datasets. One class of such algorithms, represented by, e.g., Locality Sensitive Hashing BIBREF1 , relies on hashing data into short, locality-preserving binary codes BIBREF2 . The codes can then be used to group the data into buckets, thereby enabling sublinear search for relevant information, or for fast comparison of data items. Most of the algorithms from this family are data-oblivious, i.e. can generate hashes for any type of data. Nevertheless, some methods target specific kind of input data, like text or image.",
"In this work we focus on learning binary codes for text documents. An important work in this direction has been presented by BIBREF3 . Their semantic hashing leverages autoencoders with sigmoid bottleneck layer to learn binary codes from a word-count bag-of-words (BOW) representation. Salakhutdinov & Hinton report that binary codes allow for up to 20-fold improvement in document ranking speed, compared to real-valued representation of the same dimensionality. Moreover, they demonstrate that semantic hashing codes used as an initial document filter can improve precision of TF-IDF-based retrieval. Learning binary representation from BOW, however, has its disadvantages. First, word-count representation, and in turn the learned codes, are not in itself stronger than TF-IDF. Second, BOW is an inefficient representation: even for moderate-size vocabularies BOW vectors can have thousands of dimensions. Learning fully-connected autoencoders for such high-dimensional vectors is impractical. Salakhutdinov & Hinton restricted the BOW vocabulary in their experiments to 2000 most frequent words.",
"Binary codes have also been applied to cross-modal retrieval where text is one of the modalities. Specifically, BIBREF4 incorporated tag information that often accompany text documents, while BIBREF5 employed siamese neural networks to learn single binary representation for text and image data.",
"Recently several works explored simple neural models for unsupervised learning of distributed representations of words, sentences and documents. BIBREF6 proposed log-linear models that learn distributed representations of words by predicting a central word from its context (CBOW model) or by predicting context words given the central word (Skip-gram model). The CBOW model was then extended by BIBREF7 to learn distributed representations of documents. Specifically, they proposed Paragraph Vector Distributed Memory (PV-DM) model, in which the central word is predicted given the context words and the document vector. During training, PV-DM learns the word embeddings and the parameters of the softmax that models the conditional probability distribution for the central words. During inference, word embeddings and softmax weights are fixed, but the gradients are backpropagated to the inferred document vector. In addition to PV-DM, Le & Mikolov studied also a simpler model, namely Paragraph Vector Distributed Bag of Words (PV-DBOW). This model predicts words in the document given only the document vector. It therefore disregards context surrounding the predicted word and does not learn word embeddings. Le & Mikolov demonstrated that paragraph vectors outperform BOW and bag-of-bigrams in information retrieval task, while using only few hundreds of dimensions. These models are also amendable to learning and inference over large vocabularies. Original CBOW network used hierarchical softmax to model the probability distribution for the central word. One can also use noise-contrastive estimation BIBREF8 or importance sampling BIBREF9 to approximate the gradients with respect to the softmax logits.",
"An alternative approach to learning representation of pieces of text has been recently described by BIBREF10 . Networks proposed therein, inspired by the Skip-gram model, learn to predict surrounding sentences given the center sentence. To this end, the center sentence is encoded by an encoder network and the surrounding sentences are predicted by a decoder network conditioned on the center sentence code. Once trained, these models can encode sentences without resorting to backpropagation inference. However, they learn representations at the sentence level but not at the document level.",
"In this work we present Binary Paragraph Vector models, an extensions to PV-DBOW and PV-DM that learn short binary codes for text documents. One inspiration for binary paragraph vectors comes from a recent work by BIBREF11 on learning binary codes for images. Specifically, we introduce a sigmoid layer to the paragraph vector models, and train it in a way that encourages binary activations. We demonstrate that the resultant binary paragraph vectors significantly outperform semantic hashing codes. We also evaluate binary paragraph vectors in transfer learning settings, where training and inference are carried out on unrelated text corpora. Finally, we study models that simultaneously learn short binary codes for document filtering and longer, real-valued representations for ranking. While BIBREF11 employed a supervised criterion to learn image codes, binary paragraph vectors remain unsupervised models: they learn to predict words in documents."
],
[
"The basic idea in binary paragraph vector models is to introduce a sigmoid nonlinearity before the softmax that models the conditional probability of words given the context. If we then enforce binary or near-binary activations in this nonlinearity, the probability distribution over words will be conditioned on a bit vector context, rather than real-valued representation. The inference in the model proceeds like in Paragraph Vector, except the document code is constructed from the sigmoid activations. After rounding, this code can be seen as a distributed binary representation of the document.",
"In the simplest Binary PV-DBOW model (Figure FIGREF1 ) the dimensionality of the real-valued document embeddings is equal to the length of the binary codes. Despite this low dimensional representation – a useful binary hash will typically have 128 or fewer bits – this model performed surprisingly well in our experiments.",
"Note that we cannot simply increase the embedding dimensionality in Binary PV-DBOW in order to learn better codes: binary vectors learned in this way would be too long to be useful in document hashing. The retrieval performance can, however, be improved by using binary codes for initial filtering of documents, and then using a representation with higher capacity to rank the remaining documents by their similarity to the query. BIBREF3 , for example, used semantic hashing codes for initial filtering and TF-IDF for ranking. A similar document retrieval strategy can be realized with binary paragraph vectors. Furthermore, we can extend the Binary PV-DBOW model to simultaneously learn short binary codes and higher-dimensional real-valued representations. Specifically, in the Real-Binary PV-DBOW model (Figure FIGREF2 ) we introduce a linear projection between the document embedding matrix and the sigmoid nonlinearity. During training, we learn the softmax parameters and the projection matrix. During inference, softmax weights and the projection matrix are fixed. This way, we simultaneously obtain a high-capacity representation of a document in the embedding matrix, e.g. 300-dimensional real-valued vector, and a short binary representation from the sigmoid activations.",
"One advantage of using the Real-Binary PV-DBOW model over two separate networks is that we need to store only one set of softmax parameters (and a small projection matrix) in the memory, instead of two large weight matrices. Additionally, only one model needs to be trained, rather than two distinct networks.",
"Binary document codes can also be learned by extending distributed memory models. BIBREF7 suggest that in PV-DM, a context of the central word can be constructed by either concatenating or averaging the document vector and the embeddings of the surrounding words. However, in Binary PV-DM (Figure FIGREF3 ) we always construct the context by concatenating the relevant vectors before applying the sigmoid nonlinearity. This way, the length of binary codes is not tied to the dimensionality of word embeddings.",
"Softmax layers in the models described above should be trained to predict words in documents given binary context vectors. Training should therefore encourage binary activations in the preceding sigmoid layers. This can be done in several ways. In semantic hashing autoencoders BIBREF3 added noise to the sigmoid coding layer. Error backpropagation then countered the noise, by forcing the activations to be close to 0 or 1. Another approach was used by BIBREF12 in autoencoders that learned binary codes for small images. During the forward pass, activations in the coding layer were rounded to 0 or 1. Original (i.e. not rounded) activations were used when backpropagating errors. Alternatively, one could model the document codes with stochastic binary neurons. Learning in this case can still proceed with error backpropagation, provided that a suitable gradient estimator is used alongside stochastic activations. We experimented with the methods used in semantic hashing and Krizhevsky's autoencoders, as well as with the two biased gradient estimators for stochastic binary neurons discussed by BIBREF13 . We also investigated the slope annealing trick BIBREF14 when training networks with stochastic binary activations. From our experience, binary paragraph vector models with rounded activations are easy to train and learn better codes than models with noise-based binarization or stochastic neurons. We therefore use Krizhevsky's binarization in our models."
],
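A minimal PyTorch sketch of Binary PV-DBOW (and, via the optional projection, Real-Binary PV-DBOW) is given below. The rounding-with-straight-through-gradients trick is the Krizhevsky-style binarization described above; the class name, dimensions, and the use of a full softmax instead of a sampled approximation are illustrative assumptions, not the authors' implementation.

```python
# Binary PV-DBOW sketch (PyTorch): sigmoid codes rounded in the forward pass,
# with gradients flowing through the original activations (Krizhevsky-style).
import torch
import torch.nn as nn

class BinaryPVDBOW(nn.Module):
    def __init__(self, n_docs, vocab_size, code_bits=128, real_dim=None):
        super().__init__()
        emb_dim = real_dim or code_bits
        self.doc_emb = nn.Embedding(n_docs, emb_dim)
        # Real-Binary variant: a linear projection between the embedding and the sigmoid.
        self.proj = nn.Linear(emb_dim, code_bits) if real_dim else nn.Identity()
        self.softmax_weights = nn.Linear(code_bits, vocab_size)

    def codes(self, doc_ids):
        act = torch.sigmoid(self.proj(self.doc_emb(doc_ids)))
        # Round to {0, 1} in the forward pass; backprop uses the real-valued activations.
        return act + (act.round() - act).detach()

    def forward(self, doc_ids):
        # Logits over words; in practice a sampled softmax approximation would be used.
        return self.softmax_weights(self.codes(doc_ids))

model = BinaryPVDBOW(n_docs=1000, vocab_size=50_000, code_bits=32, real_dim=300)
logits = model(torch.tensor([3, 7]))
loss = nn.functional.cross_entropy(logits, torch.tensor([11, 42]))
loss.backward()
```

Rounding only in the forward pass keeps training simple while still forcing the codes used at retrieval time to be exactly binary, which is the behaviour the paragraph above reports as easiest to train.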
[
"To assess the performance of binary paragraph vectors, we carried out experiments on three datasets: 20 Newsgroups, a cleansed version (also called v2) of Reuters Corpus Volume 1 (RCV1) and English Wikipedia. As paragraph vectors can be trained with relatively large vocabularies, we did not perform any stemming of the source text. However, we removed stop words as well as words shorter than two characters and longer than 15 characters. Results reported by BIBREF15 indicate that performance of PV-DBOW can be improved by including n-grams in the model. We therefore evaluated two variants of Binary PV-DBOW: one predicting words in documents and one predicting words and bigrams. Since 20 Newsgroups is a relatively small dataset, we used all words and bigrams from its documents. This amounts to a vocabulary with slightly over one million elements. For the RCV1 dataset we used words and bigrams with at least 10 occurrences in the text, which gives a vocabulary with approximately 800 thousands elements. In case of English Wikipedia we used words and bigrams with at least 100 occurrences, which gives a vocabulary with approximately 1.5 million elements.",
"The 20 Newsgroups dataset comes with reference train/test sets. In case of RCV1 we used half of the documents for training and the other half for evaluation. In case of English Wikipedia we held out for testing randomly selected 10% of the documents. We perform document retrieval by selecting queries from the test set and ordering other test documents according to the similarity of the inferred codes. We use Hamming distance for binary codes and cosine similarity for real-valued representations. Results are averaged over queries. We assess the performance of our models with precision-recall curves and two popular information retrieval metrics, namely mean average precision (MAP) and the normalized discounted cumulative gain at the 10th result (NDCG@10) BIBREF16 . The results depend, of course, on the chosen document relevancy measure. Relevancy measure for the 20 Newsgroups dataset is straightforward: a retrieved document is relevant to the query if they both belong to the same newsgroup. In RCV1 each document belongs to a hierarchy of topics, making the definition of relevancy less obvious. In this case we adopted the relevancy measure used by BIBREF3 . That is, the relevancy is calculated as the fraction of overlapping labels in a retrieved document and the query document. Overall, our selection of test datasets and relevancy measures for 20 Newsgroups and RCV1 follows BIBREF3 , enabling comparison with semantic hashing codes. To assess the relevancy of articles in English Wikipedia we can employ categories assigned to them. However, unlike in RCV1, Wikipedia categories can have multiple parent categories and cyclic dependencies. Therefore, for this dataset we adopted a simplified relevancy measure: two articles are relevant if they share at least one category. We also removed from the test set categories with less than 20 documents as well as documents that were left with no categories. Overall, the relevancy is measured over more than INLINEFORM0 categories, making English Wikipedia harder than the other two benchmarks.",
"We use AdaGrad BIBREF17 for training and inference in all experiments reported in this work. During training we employ dropout BIBREF18 in the embedding layer. To facilitate models with large vocabularies, we approximate the gradients with respect to the softmax logits using the method described by BIBREF9 . Binary PV-DM networks use the same number of dimensions for document codes and word embeddings.",
"Performance of 128- and 32-bit binary paragraph vector codes is reported in Table TABREF8 and in Figure FIGREF7 . For comparison we also report performance of real-valued paragraph vectors. Note that the binary codes perform very well, despite their far lower capacity: on 20 Newsgroups and RCV1 the 128-bit Binary PV-DBOW trained with bigrams approaches the performance of the real-valued paragraph vectors, while on English Wikipedia its performance is slightly lower. Furthermore, Binary PV-DBOW with bigrams outperforms semantic hashing codes: comparison of precision-recall curves from Figures FIGREF7 a and FIGREF7 b with BIBREF3 shows that 128-bit codes learned with this model outperform 128-bit semantic hashing codes on 20 Newsgroups and RCV1. Moreover, the 32-bit codes from this model outperform 128-bit semantic hashing codes on the RCV1 dataset, and on the 20 Newsgroups dataset give similar precision up to approximately 3% recall and better precision for higher recall levels. Note that the difference in this case lies not only in retrieval precision: the short 32-bit Binary PV-DBOW codes are more efficient for indexing than long 128-bit semantic hashing codes.",
"We also compared binary paragraph vectors against codes constructed by first inferring short, real-valued paragraph vectors and then using a separate hashing algorithm for binarization. When the dimensionality of the paragraph vectors is equal to the size of binary codes, the number of network parameters in this approach is similar to that of Binary PV models. We experimented with two standard hashing algorithms, namely random hyperplane projection BIBREF19 and iterative quantization BIBREF20 . Paragraph vectors in these experiments were inferred using PV-DBOW with bigrams. Results reported in Table TABREF9 show no benefit from using a separate algorithm for binarization. On the 20 Newsgroups and RCV1 datasets Binary PV-DBOW yielded higher MAP than the two baseline approaches. On English Wikipedia iterative quantization achieved MAP equal to Binary PV-DBOW, while random hyperplane projection yielded lower MAP. Some gain in precision of top hits can be observed for iterative quantization, as indicated by NDCG@10. However, precision of top hits can also be improved by querying with Real-Binary PV-DBOW model (Section SECREF15 ). It is also worth noting that end-to-end inference in Binary PV models is more convenient than inferring real-valued vectors and then using another algorithm for hashing.",
" BIBREF15 argue that PV-DBOW outperforms PV-DM on a sentiment classification task, and demonstrate that the performance of PV-DBOW can be improved by including bigrams in the vocabulary. We observed similar results with Binary PV models. That is, including bigrams in the vocabulary usually improved retrieval precision. Also, codes learned with Binary PV-DBOW provided higher retrieval precision than Binary PV-DM codes. Furthermore, to choose the context size for the Binary PV-DM models, we evaluated several networks on validation sets taken out of the training data. The best results were obtained with a minimal one-word, one-sided context window. This is the distributed memory architecture most similar to the Binary PV-DBOW model."
],
[
"In the experiments presented thus far we had at our disposal training sets with documents similar to the documents for which we inferred binary codes. One could ask a question, if it is possible to use binary paragraph vectors without collecting a domain-specific training set? For example, what if we needed to hash documents that are not associated with any available domain-specific corpus? One solution could be to train the model with a big generic text corpus, that covers a wide variety of domains. BIBREF21 evaluated this approach for real-valued paragraph vectors, with promising results. It is not obvious, however, whether short binary codes would also perform well in similar settings. To shed light on this question we trained Binary PV-DBOW with bigrams on the English Wikipedia, and then inferred binary codes for the test parts of the 20 Newsgroups and RCV1 datasets. The results are presented in Table TABREF14 and in Figure FIGREF11 . The model trained on an unrelated text corpus gives lower retrieval precision than models with domain-specific training sets, which is not surprising. However, it still performs remarkably well, indicating that the semantics it captured can be useful for different text collections. Importantly, these results were obtained without domain-specific finetuning."
],
[
"As pointed out by BIBREF3 , when working with large text collections one can use short binary codes for indexing and a representation with more capacity for ranking. Following this idea, we proposed Real-Binary PV-DBOW model (Section SECREF2 ) that can simultaneously learn short binary codes and high-dimensional real-valued representations. We begin evaluation of this model by comparing retrieval precision of real-valued and binary representations learned by it. To this end, we trained a Real-Binary PV-DBOW model with 28-bit binary codes and 300-dimensional real-valued representations on the 20 Newsgroups and RCV1 datasets. Results are reported in Figure FIGREF16 . The real-valued representations learned with this model give lower precision than PV-DBOW vectors but, importantly, improve precision over binary codes for top ranked documents. This justifies their use alongside binary codes.",
"Using short binary codes for initial filtering of documents comes with a tradeoff between the retrieval performance and the recall level. For example, one can select a small subset of similar documents by using 28–32 bit codes and retrieving documents within small Hamming distance to the query. This will improve retrieval performance, and possibly also precision, at the cost of recall. Conversely, short codes provide a less fine-grained hashing and can be used to index documents within larger Hamming distance to the query. They can therefore be used to improve recall at the cost of retrieval performance, and possibly also precision. For these reasons, we evaluated Real-Binary PV-DBOW models with different code sizes and under different limits on the Hamming distance to the query. In general, we cannot expect these models to achieve 100% recall under the test settings. Furthermore, recall will vary on query-by-query basis. We therefore decided to focus on the NDCG@10 metric in this evaluation, as it is suited for measuring model performance when a short list of relevant documents is sought, and the recall level is not known. MAP and precision-recall curves are not applicable in these settings.",
"Information retrieval results for Real-Binary PV-DBOW are summarized in Table TABREF19 . The model gives higher NDCG@10 than 32-bit Binary PV-DBOW codes (Table TABREF8 ). The difference is large when the initial filtering is restrictive, e.g. when using 28-bit codes and 1-2 bit Hamming distance limit. Real-Binary PV-DBOW can therefore be useful when one needs to quickly find a short list of relevant documents in a large text collection, and the recall level is not of primary importance. If needed, precision can be further improved by using plain Binary PV-DBOW codes for filtering and standard DBOW representation for raking (Table TABREF19 , column B). Note, however, that PV-DBOW model would then use approximately 10 times more parameters than Real-Binary PV-DBOW."
],
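The filter-then-rank strategy described above can be sketched as a two-stage retrieval function: select candidates within a Hamming radius using the short codes, then order them by cosine similarity of the higher-capacity real-valued vectors. Shapes and the radius value below are illustrative.

```python
# Filter-then-rank retrieval sketch (NumPy): binary codes for candidate selection,
# real-valued vectors for ranking the surviving candidates.
import numpy as np

def filter_then_rank(query_code, query_vec, codes, vectors, radius=2):
    hamming = (codes != query_code).sum(axis=1)
    candidates = np.flatnonzero(hamming <= radius)
    if candidates.size == 0:
        return candidates
    v = vectors[candidates]
    cosine = (v @ query_vec) / (np.linalg.norm(v, axis=1) * np.linalg.norm(query_vec) + 1e-8)
    return candidates[np.argsort(-cosine)]

codes = np.random.randint(0, 2, size=(10_000, 28))
vectors = np.random.randn(10_000, 300)
ranked = filter_then_rank(codes[0], vectors[0], codes, vectors, radius=2)
```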
[
"In this article we presented simple neural networks that learn short binary codes for text documents. Our networks extend Paragraph Vector by introducing a sigmoid nonlinearity before the softmax that predicts words in documents. Binary codes inferred with the proposed networks achieve higher retrieval precision than semantic hashing codes on two popular information retrieval benchmarks. They also retain a lot of their precision when trained on an unrelated text corpus. Finally, we presented a network that simultaneously learns short binary codes and longer, real-valued representations.",
"The best codes in our experiments were inferred with Binary PV-DBOW networks. The Binary PV-DM model did not perform so well. BIBREF15 made similar observations for Paragraph Vector models, and argue that in distributed memory model the word context takes a lot of the burden of predicting the central word from the document code. An interesting line of future research could, therefore, focus on models that account for word order, while learning good binary codes. It is also worth noting that BIBREF7 constructed paragraph vectors by combining DM and DBOW representations. This strategy may proof useful also with binary codes, when employed with hashing algorithms designed for longer codes, e.g. with multi-index hashing BIBREF22 ."
],
[
"This research is supported by National Science Centre, Poland grant no. 2013/09/B/ST6/01549 “Interactive Visual Text Analytics (IVTA): Development of novel, user-driven text mining and visualization methods for large text corpora exploration.” This research was carried out with the support of the “HPC Infrastructure for Grand Challenges of Science and Engineering” project, co-financed by the European Regional Development Fund under the Innovative Economy Operational Programme. This research was supported in part by PL-Grid Infrastructure."
],
[
"For an additional comparison with semantic hashing, we used t-distributed Stochastic Neighbor Embedding BIBREF23 to construct two-dimensional visualizations of codes learned by Binary PV-DBOW with bigrams. We used the same subsets of newsgroups and RCV1 topics that were used by BIBREF3 . Codes learned by Binary PV-DBOW (Figure FIGREF20 ) appear slightly more clustered.",
" "
]
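A sketch of the visualization setup, assuming scikit-learn's t-SNE with the Hamming metric; the array sizes are placeholders and the plotting step is omitted.

```python
# t-SNE visualization of binary codes under the Hamming metric (sketch, scikit-learn).
import numpy as np
from sklearn.manifold import TSNE

codes = np.random.randint(0, 2, size=(2000, 128)).astype(float)
embedding = TSNE(n_components=2, metric="hamming", init="random").fit_transform(codes)
# embedding can then be scatter-plotted, colored by newsgroup / RCV1 topic.
```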
],
"section_name": [
"Introduction",
"Binary paragraph vector models",
"Experiments",
"Transfer learning",
"Retrieval with Real-Binary models",
"Conclusion",
"Acknowledgments",
"Visualization of Binary PV codes"
]
} | {
"answers": [
{
"annotation_id": [
"36ab3da79c1c3e3776b3389aeaaf99bcac18c1cb",
"3e05e4269b5e07840d8567804560753ed5d3ae52"
],
"answer": [
{
"evidence": [
"In this work we present Binary Paragraph Vector models, an extensions to PV-DBOW and PV-DM that learn short binary codes for text documents. One inspiration for binary paragraph vectors comes from a recent work by BIBREF11 on learning binary codes for images. Specifically, we introduce a sigmoid layer to the paragraph vector models, and train it in a way that encourages binary activations. We demonstrate that the resultant binary paragraph vectors significantly outperform semantic hashing codes. We also evaluate binary paragraph vectors in transfer learning settings, where training and inference are carried out on unrelated text corpora. Finally, we study models that simultaneously learn short binary codes for document filtering and longer, real-valued representations for ranking. While BIBREF11 employed a supervised criterion to learn image codes, binary paragraph vectors remain unsupervised models: they learn to predict words in documents.",
"To assess the performance of binary paragraph vectors, we carried out experiments on three datasets: 20 Newsgroups, a cleansed version (also called v2) of Reuters Corpus Volume 1 (RCV1) and English Wikipedia. As paragraph vectors can be trained with relatively large vocabularies, we did not perform any stemming of the source text. However, we removed stop words as well as words shorter than two characters and longer than 15 characters. Results reported by BIBREF15 indicate that performance of PV-DBOW can be improved by including n-grams in the model. We therefore evaluated two variants of Binary PV-DBOW: one predicting words in documents and one predicting words and bigrams. Since 20 Newsgroups is a relatively small dataset, we used all words and bigrams from its documents. This amounts to a vocabulary with slightly over one million elements. For the RCV1 dataset we used words and bigrams with at least 10 occurrences in the text, which gives a vocabulary with approximately 800 thousands elements. In case of English Wikipedia we used words and bigrams with at least 100 occurrences, which gives a vocabulary with approximately 1.5 million elements."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"In this work we present Binary Paragraph Vector models, an extensions to PV-DBOW and PV-DM that learn short binary codes for text documents. ",
"To assess the performance of binary paragraph vectors, we carried out experiments on three datasets: 20 Newsgroups, a cleansed version (also called v2) of Reuters Corpus Volume 1 (RCV1) and English Wikipedia."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"The 20 Newsgroups dataset comes with reference train/test sets. In case of RCV1 we used half of the documents for training and the other half for evaluation. In case of English Wikipedia we held out for testing randomly selected 10% of the documents. We perform document retrieval by selecting queries from the test set and ordering other test documents according to the similarity of the inferred codes. We use Hamming distance for binary codes and cosine similarity for real-valued representations. Results are averaged over queries. We assess the performance of our models with precision-recall curves and two popular information retrieval metrics, namely mean average precision (MAP) and the normalized discounted cumulative gain at the 10th result (NDCG@10) BIBREF16 . The results depend, of course, on the chosen document relevancy measure. Relevancy measure for the 20 Newsgroups dataset is straightforward: a retrieved document is relevant to the query if they both belong to the same newsgroup. In RCV1 each document belongs to a hierarchy of topics, making the definition of relevancy less obvious. In this case we adopted the relevancy measure used by BIBREF3 . That is, the relevancy is calculated as the fraction of overlapping labels in a retrieved document and the query document. Overall, our selection of test datasets and relevancy measures for 20 Newsgroups and RCV1 follows BIBREF3 , enabling comparison with semantic hashing codes. To assess the relevancy of articles in English Wikipedia we can employ categories assigned to them. However, unlike in RCV1, Wikipedia categories can have multiple parent categories and cyclic dependencies. Therefore, for this dataset we adopted a simplified relevancy measure: two articles are relevant if they share at least one category. We also removed from the test set categories with less than 20 documents as well as documents that were left with no categories. Overall, the relevancy is measured over more than INLINEFORM0 categories, making English Wikipedia harder than the other two benchmarks."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Results are averaged over queries. We assess the performance of our models with precision-recall curves and two popular information retrieval metrics, namely mean average precision (MAP) and the normalized discounted cumulative gain at the 10th result (NDCG@10) BIBREF16 ."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"4c3a9f46288b3b4df0fa5a8442d7027c57aa0549",
"9ce479115dc2a9e72064bea097bb0150445d0b97"
],
"answer": [
{
"evidence": [
"To assess the performance of binary paragraph vectors, we carried out experiments on three datasets: 20 Newsgroups, a cleansed version (also called v2) of Reuters Corpus Volume 1 (RCV1) and English Wikipedia. As paragraph vectors can be trained with relatively large vocabularies, we did not perform any stemming of the source text. However, we removed stop words as well as words shorter than two characters and longer than 15 characters. Results reported by BIBREF15 indicate that performance of PV-DBOW can be improved by including n-grams in the model. We therefore evaluated two variants of Binary PV-DBOW: one predicting words in documents and one predicting words and bigrams. Since 20 Newsgroups is a relatively small dataset, we used all words and bigrams from its documents. This amounts to a vocabulary with slightly over one million elements. For the RCV1 dataset we used words and bigrams with at least 10 occurrences in the text, which gives a vocabulary with approximately 800 thousands elements. In case of English Wikipedia we used words and bigrams with at least 100 occurrences, which gives a vocabulary with approximately 1.5 million elements.",
"In the experiments presented thus far we had at our disposal training sets with documents similar to the documents for which we inferred binary codes. One could ask a question, if it is possible to use binary paragraph vectors without collecting a domain-specific training set? For example, what if we needed to hash documents that are not associated with any available domain-specific corpus? One solution could be to train the model with a big generic text corpus, that covers a wide variety of domains. BIBREF21 evaluated this approach for real-valued paragraph vectors, with promising results. It is not obvious, however, whether short binary codes would also perform well in similar settings. To shed light on this question we trained Binary PV-DBOW with bigrams on the English Wikipedia, and then inferred binary codes for the test parts of the 20 Newsgroups and RCV1 datasets. The results are presented in Table TABREF14 and in Figure FIGREF11 . The model trained on an unrelated text corpus gives lower retrieval precision than models with domain-specific training sets, which is not surprising. However, it still performs remarkably well, indicating that the semantics it captured can be useful for different text collections. Importantly, these results were obtained without domain-specific finetuning."
],
"extractive_spans": [],
"free_form_answer": "They perform information-retrieval tasks on popular benchmarks",
"highlighted_evidence": [
"To assess the performance of binary paragraph vectors, we carried out experiments on three datasets: 20 Newsgroups, a cleansed version (also called v2) of Reuters Corpus Volume 1 (RCV1) and English Wikipedia.",
"In the experiments presented thus far we had at our disposal training sets with documents similar to the documents for which we inferred binary codes. One could ask a question, if it is possible to use binary paragraph vectors without collecting a domain-specific training set?",
"o shed light on this question we trained Binary PV-DBOW with bigrams on the English Wikipedia, and then inferred binary codes for the test parts of the 20 Newsgroups and RCV1 datasets. ",
"The model trained on an unrelated text corpus gives lower retrieval precision than models with domain-specific training sets, which is not surprising. However, it still performs remarkably well, indicating that the semantics it captured can be useful for different text collections. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In the experiments presented thus far we had at our disposal training sets with documents similar to the documents for which we inferred binary codes. One could ask a question, if it is possible to use binary paragraph vectors without collecting a domain-specific training set? For example, what if we needed to hash documents that are not associated with any available domain-specific corpus? One solution could be to train the model with a big generic text corpus, that covers a wide variety of domains. BIBREF21 evaluated this approach for real-valued paragraph vectors, with promising results. It is not obvious, however, whether short binary codes would also perform well in similar settings. To shed light on this question we trained Binary PV-DBOW with bigrams on the English Wikipedia, and then inferred binary codes for the test parts of the 20 Newsgroups and RCV1 datasets. The results are presented in Table TABREF14 and in Figure FIGREF11 . The model trained on an unrelated text corpus gives lower retrieval precision than models with domain-specific training sets, which is not surprising. However, it still performs remarkably well, indicating that the semantics it captured can be useful for different text collections. Importantly, these results were obtained without domain-specific finetuning."
],
"extractive_spans": [
" trained Binary PV-DBOW with bigrams on the English Wikipedia, and then inferred binary codes for the test parts of the 20 Newsgroups and RCV1 datasets"
],
"free_form_answer": "",
"highlighted_evidence": [
"To shed light on this question we trained Binary PV-DBOW with bigrams on the English Wikipedia, and then inferred binary codes for the test parts of the 20 Newsgroups and RCV1 datasets. The results are presented in Table TABREF14 and in Figure FIGREF11 . The model trained on an unrelated text corpus gives lower retrieval precision than models with domain-specific training sets, which is not surprising. However, it still performs remarkably well, indicating that the semantics it captured can be useful for different text collections. Importantly, these results were obtained without domain-specific finetuning."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"72aff66ef07819725b922c4f3ce88466d12e4b34",
"bc010540afa8c29a110ac302a6cb55b28fe2cb7c"
],
"answer": [
{
"evidence": [
"To assess the performance of binary paragraph vectors, we carried out experiments on three datasets: 20 Newsgroups, a cleansed version (also called v2) of Reuters Corpus Volume 1 (RCV1) and English Wikipedia. As paragraph vectors can be trained with relatively large vocabularies, we did not perform any stemming of the source text. However, we removed stop words as well as words shorter than two characters and longer than 15 characters. Results reported by BIBREF15 indicate that performance of PV-DBOW can be improved by including n-grams in the model. We therefore evaluated two variants of Binary PV-DBOW: one predicting words in documents and one predicting words and bigrams. Since 20 Newsgroups is a relatively small dataset, we used all words and bigrams from its documents. This amounts to a vocabulary with slightly over one million elements. For the RCV1 dataset we used words and bigrams with at least 10 occurrences in the text, which gives a vocabulary with approximately 800 thousands elements. In case of English Wikipedia we used words and bigrams with at least 100 occurrences, which gives a vocabulary with approximately 1.5 million elements.",
"In this work we present Binary Paragraph Vector models, an extensions to PV-DBOW and PV-DM that learn short binary codes for text documents. One inspiration for binary paragraph vectors comes from a recent work by BIBREF11 on learning binary codes for images. Specifically, we introduce a sigmoid layer to the paragraph vector models, and train it in a way that encourages binary activations. We demonstrate that the resultant binary paragraph vectors significantly outperform semantic hashing codes. We also evaluate binary paragraph vectors in transfer learning settings, where training and inference are carried out on unrelated text corpora. Finally, we study models that simultaneously learn short binary codes for document filtering and longer, real-valued representations for ranking. While BIBREF11 employed a supervised criterion to learn image codes, binary paragraph vectors remain unsupervised models: they learn to predict words in documents."
],
"extractive_spans": [
"20 Newsgroups",
"Reuters Corpus Volume",
"English Wikipedia"
],
"free_form_answer": "",
"highlighted_evidence": [
"To assess the performance of binary paragraph vectors, we carried out experiments on three datasets: 20 Newsgroups, a cleansed version (also called v2) of Reuters Corpus Volume 1 (RCV1) and English Wikipedia",
"In this work we present Binary Paragraph Vector models, an extensions to PV-DBOW and PV-DM that learn short binary codes for text documents."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To assess the performance of binary paragraph vectors, we carried out experiments on three datasets: 20 Newsgroups, a cleansed version (also called v2) of Reuters Corpus Volume 1 (RCV1) and English Wikipedia. As paragraph vectors can be trained with relatively large vocabularies, we did not perform any stemming of the source text. However, we removed stop words as well as words shorter than two characters and longer than 15 characters. Results reported by BIBREF15 indicate that performance of PV-DBOW can be improved by including n-grams in the model. We therefore evaluated two variants of Binary PV-DBOW: one predicting words in documents and one predicting words and bigrams. Since 20 Newsgroups is a relatively small dataset, we used all words and bigrams from its documents. This amounts to a vocabulary with slightly over one million elements. For the RCV1 dataset we used words and bigrams with at least 10 occurrences in the text, which gives a vocabulary with approximately 800 thousands elements. In case of English Wikipedia we used words and bigrams with at least 100 occurrences, which gives a vocabulary with approximately 1.5 million elements."
],
"extractive_spans": [
" 20 Newsgroups",
"RCV1",
"English Wikipedia"
],
"free_form_answer": "",
"highlighted_evidence": [
"To assess the performance of binary paragraph vectors, we carried out experiments on three datasets: 20 Newsgroups, a cleansed version (also called v2) of Reuters Corpus Volume 1 (RCV1) and English Wikipedia. As paragraph vectors can be trained with relatively large vocabularies, we did not perform any stemming of the source text. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"215837855a4221e59c42433bd918131c8ceb524a",
"8340b29735ac925a57d49ff6bbb721faa4d529a2"
],
"answer": [
{
"evidence": [
"In this work we present Binary Paragraph Vector models, an extensions to PV-DBOW and PV-DM that learn short binary codes for text documents. One inspiration for binary paragraph vectors comes from a recent work by BIBREF11 on learning binary codes for images. Specifically, we introduce a sigmoid layer to the paragraph vector models, and train it in a way that encourages binary activations. We demonstrate that the resultant binary paragraph vectors significantly outperform semantic hashing codes. We also evaluate binary paragraph vectors in transfer learning settings, where training and inference are carried out on unrelated text corpora. Finally, we study models that simultaneously learn short binary codes for document filtering and longer, real-valued representations for ranking. While BIBREF11 employed a supervised criterion to learn image codes, binary paragraph vectors remain unsupervised models: they learn to predict words in documents.",
"Visualization of Binary PV codes",
"For an additional comparison with semantic hashing, we used t-distributed Stochastic Neighbor Embedding BIBREF23 to construct two-dimensional visualizations of codes learned by Binary PV-DBOW with bigrams. We used the same subsets of newsgroups and RCV1 topics that were used by BIBREF3 . Codes learned by Binary PV-DBOW (Figure FIGREF20 ) appear slightly more clustered.",
"FLOAT SELECTED: Figure 7: t-SNE visualization of binary paragraph vector codes; the Hamming distance was used to calculate code similarity."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"In this work we present Binary Paragraph Vector models, an extensions to PV-DBOW and PV-DM that learn short binary codes for text documents.",
"Visualization of Binary PV codes\nFor an additional comparison with semantic hashing, we used t-distributed Stochastic Neighbor Embedding BIBREF23 to construct two-dimensional visualizations of codes learned by Binary PV-DBOW with bigrams. We used the same subsets of newsgroups and RCV1 topics that were used by BIBREF3 . Codes learned by Binary PV-DBOW (Figure FIGREF20 ) appear slightly more clustered.",
"FLOAT SELECTED: Figure 7: t-SNE visualization of binary paragraph vector codes; the Hamming distance was used to calculate code similarity."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"To assess the performance of binary paragraph vectors, we carried out experiments on three datasets: 20 Newsgroups, a cleansed version (also called v2) of Reuters Corpus Volume 1 (RCV1) and English Wikipedia. As paragraph vectors can be trained with relatively large vocabularies, we did not perform any stemming of the source text. However, we removed stop words as well as words shorter than two characters and longer than 15 characters. Results reported by BIBREF15 indicate that performance of PV-DBOW can be improved by including n-grams in the model. We therefore evaluated two variants of Binary PV-DBOW: one predicting words in documents and one predicting words and bigrams. Since 20 Newsgroups is a relatively small dataset, we used all words and bigrams from its documents. This amounts to a vocabulary with slightly over one million elements. For the RCV1 dataset we used words and bigrams with at least 10 occurrences in the text, which gives a vocabulary with approximately 800 thousands elements. In case of English Wikipedia we used words and bigrams with at least 100 occurrences, which gives a vocabulary with approximately 1.5 million elements."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"To assess the performance of binary paragraph vectors, we carried out experiments on three datasets: 20 Newsgroups, a cleansed version (also called v2) of Reuters Corpus Volume 1 (RCV1) and English Wikipedia. As paragraph vectors can be trained with relatively large vocabularies, we did not perform any stemming of the source text."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"Do they evaluate binary paragraph vectors on a downstream task?",
"How do they show that binary paragraph vectors capture semantics?",
"Which training dataset do they use?",
"Do they analyze the produced binary codes?"
],
"question_id": [
"65e26b15e087bedb6e8782d91596b35e7454b16b",
"a8f189fad8b72f8b2b4d2da4ed8475d31642d9e7",
"eafea4a24d103fdecf8f347c7d84daff6ef828a3",
"e099a37db801718ab341ac9a380a146c7452fd21"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: The Binary PV-DBOW model. Modifications to the original PV-DBOW model are highlighted.",
"Figure 2: The Real-Binary PV-DBOW model. Modifications to the original PV-DBOW model are highlighted.",
"Figure 3: The Binary PV-DM model. Modifications to the original PV-DM model are highlighted.",
"Table 1: Information retrieval results. The best results with binary models are highlighted.",
"Table 2: Information retrieval results for 32-bit binary codes constructed by first inferring 32d realvalued paragraph vectors and then employing another unsupervised model or hashing algorithm for binarization. Paragraph vectors were inferred using PV-DBOW with bigrams.",
"Figure 4: Precision-recall curves for the 20 Newsgroups and RCV1 datasets. Cosine similarity was used with real-valued representations and the Hamming distance with binary codes. For comparison we also included semantic hashing results reported by Salakhutdinov & Hinton (2009, Figures 6 & 7).",
"Figure 5: Precision-recall curves for the baseline Binary PV-DBOW models and a Binary PVDBOW model trained on an unrelated text corpus. Results are reported for 128-bit codes.",
"Figure 6: Information retrieval results for binary and real-valued codes learned by the Real-Binary PV-DBOW model with bigrams. Results are reported for 28-bit binary codes and 300d real-valued codes. A 300d PV-DBOW model is included for reference.",
"Table 4: Information retrieval results for the Real-Binary PV-DBOW model. All real valued representations have 300 dimensions and are use for ranking documents according to the cosine similarity to the query. (A) Real-valued representations learned by Real-Binary PV-DBOW are used for ranking all test documents. (B) Binary codes are used for selecting documents within a given Hamming distance to the query and real-valued representations are used for ranking. (C) For comparison, variant B was repeated with binary codes inferred using plain Binary PV-DBOW and real-valued representation inferred using original PV-DBOW model.",
"Figure 7: t-SNE visualization of binary paragraph vector codes; the Hamming distance was used to calculate code similarity."
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"3-Figure3-1.png",
"5-Table1-1.png",
"5-Table2-1.png",
"6-Figure4-1.png",
"7-Figure5-1.png",
"8-Figure6-1.png",
"8-Table4-1.png",
"11-Figure7-1.png"
]
} | [
"How do they show that binary paragraph vectors capture semantics?"
] | [
[
"1611.01116-Experiments-0",
"1611.01116-Transfer learning-0"
]
] | [
"They perform information-retrieval tasks on popular benchmarks"
] | 391 |
1908.10001 | Real-World Conversational AI for Hotel Bookings | In this paper, we present a real-world conversational AI system to search for and book hotels through text messaging. Our architecture consists of a frame-based dialogue management system, which calls machine learning models for intent classification, named entity recognition, and information retrieval subtasks. Our chatbot has been deployed on a commercial scale, handling tens of thousands of hotel searches every day. We describe the various opportunities and challenges of developing a chatbot in the travel industry. | {
"paragraphs": [
[
"Task-oriented chatbots have recently been applied to many areas in e-commerce. In this paper, we describe a task-oriented chatbot system that provides hotel recommendations and deals. Users access the chatbot through third-party messaging platforms, such as Facebook Messenger (Figure FIGREF4), Amazon Alexa, and WhatsApp. The chatbot elicits information, such as travel dates and hotel preferences, through a conversation, then recommends a set of suitable hotels that the user can then book. Our system uses a dialogue manager that integrates a combination of NLP models to handle the most frequent scenarios, and defer to a human support agent for more difficult situations.",
"The travel industry is an excellent target for e-commerce chatbots for several reasons:",
"Typical online travel agencies provide a web interface (such as buttons, dropdowns, and checkboxes) to enter information and filter search results; this can be difficult to navigate. In contrast, chatbot have a much gentler learning curve, since users interact with the bot using natural language. Additionally, chatbots are lightweight as they are embedded in an instant messaging platform that handles authentication. All of these factors contribute to higher user convenience BIBREF0.",
"Many people book vacations using travel agents, so the idea of booking travel through conversation is already familiar. Thus, we emulate the role of a travel agent, who talks to the customer while performing searches on various supplier databases on his behalf.",
"Our chatbot has the advantage of a narrow focus, so that every conversation is related to booking a hotel. This constrains conversations to a limited set of situations, thus allowing us to develop specialized models to handle hotel-related queries with very high accuracy.",
"The automated component of the chatbot is also closely integrated with human support agents: when the NLP system is unable to understand a customer's intentions, customer support agents are notified and take over the conversation. The agents' feedback is then used to improve the AI, providing valuable training data (Figure FIGREF5). In this paper, we describe our conversational AI systems, datasets, and models."
],
[
"Numerous task-oriented chatbots have been developed for commercial and recreational purposes. Most commercial chatbots today use a frame-based dialogue system, which was first proposed in 1977 for a flight booking task BIBREF1. Such a system uses a finite-state automaton to direct the conversation, which fills a set of slots with user-given values before an action can be taken. Modern frame-based systems often use machine learning for the slot-filling subtask BIBREF2.",
"Natural language processing has been applied to other problems in the travel industry, for example, text mining hotel information from user reviews for a recommendation system BIBREF3, or determining the economic importance of various hotel characteristics BIBREF4. Sentiment analysis techniques have been applied to hotel reviews for classifying polarity BIBREF5 and identifying common complaints to report to hotel management BIBREF6."
],
[
"Our chatbot system tries to find a desirable hotel for the user, through an interactive dialogue. First, the bot asks a series of questions, such as the dates of travel, the destination city, and a budget range. After the necessary information has been collected, the bot performs a search and sends a list of matching hotels, sorted based on the users' preferences; if the user is satisfied with the results, he can complete the booking within the chat client. Otherwise, the user may continue talking to the bot to further narrow down his search criteria.",
"At any point in the conversation, the user may request to talk to a customer support agent by clicking an “agent” or “help” button. The bot also sends the conversation to an agent if the user says something that the bot does not understand. Thus, the bot handles the most common use cases, while humans handle a long tail of specialized and less common requests.",
"The hotel search is backed by a database of approximately 100,000 cities and 300,000 hotels, populated using data from our partners. Each database entry contains the name of the city or hotel, geographic information (e.g., address, state, country), and various metadata (e.g., review score, number of bookings)."
],
[
"Our dialog system can be described as a frame-based slot-filling system, controlled by a finite-state automaton. At each stage, the bot prompts the user to fill the next slot, but supports filling a different slot, revising a previously filled slot, or filling multiple slots at once. We use machine learning to assist with this, extracting the relevant information from natural language text (Section SECREF4). Additionally, the system allows universal commands that can be said at any point in the conversation, such as requesting a human agent or ending the conversation.",
"Figure FIGREF7 shows part of the state machine, invoked when a user starts a new hotel search. Figure FIGREF8 shows a typical conversation between a user and the bot, annotated with the corresponding state transitions and calls to our machine learning models."
],
[
"We collect labelled training data from two sources. First, data for the intent model is extracted from conversations between users and customer support agents. To save time, the model suggests a pre-written response to the user, which the agent either accepts by clicking a button, or composes a response from scratch. This action is logged, and after being checked by a professional annotator, is added to our training data.",
"Second, we employ professional annotators to create training data for each of our models, using a custom-built interface. A pool of relevant messages is selected from past user conversations; each message is annotated once and checked again by a different annotator to minimize errors. We use the PyBossa framework to manage the annotation processes."
],
[
"Our conversational AI uses machine learning for three separate, cascading tasks: intent classification, named entity recognition (NER), and information retrieval (IR). That is, the intent model is run on all messages, NER is run on only a subset of messages, and IR is run on a further subset of those. In this section, we give an overview of each task's model and evaluation metrics."
],
[
"The intent model processes each incoming user message and classifies it as one of several intents. The most common intents are thanks, cancel, stop, search, and unknown (described in Table TABREF12); these intents were chosen for automation based on volume, ease of classification, and business impact. The result of the intent model is used to determine the bot's response, what further processing is necessary (in the case of search intent), and whether to direct the conversation to a human agent (in the case of unknown intent).",
"We use a two-stage model; the first stage is a set of keyword-matching rules that cover some unambiguous words. The second stage is a neural classification model. We use ELMo BIBREF7 to generate a sequence of 1024-dimensional embeddings from the text message; these embeddings are then processed with a bi-LSTM with 100-dimensional hidden layer. The hidden states produced by the bi-LSTM are then fed into a feedforward neural network, followed by a final softmax to generate a distribution over all possible output classes. If the confidence of the best prediction is below a threshold, then the message is classified as unknown. The preprocessing and training is implemented using AllenNLP BIBREF8.",
"We evaluate our methods using per-category precision, recall, and F1 scores. These are more informative metrics than accuracy because of the class imbalance, and also because some intent classes are easier to classify than others. In particular, it is especially important to accurately classify the search intent, because more downstream models depend on this output."
],
[
"For queries identified as search intent, we perform named entity recognition (NER) to extract spans from the query representing names of hotels and cities. Recently, neural architectures have shown to be successful for NER BIBREF9, BIBREF10. Typically, they are trained on the CoNLL-2003 Shared Task BIBREF11 which features four entity types (persons, organizations, locations, and miscellaneous).",
"Our NER model instead identifies hotel and location names, for example:",
"“double room in the cosmopolitan, las vegas for Aug 11-16”,",
"“looking for a resort in Playa del carmen near the beach”.",
"We use SpaCy to train custom NER models. The model initialized with SpaCy's English NER model, then fine-tuned using our data, consisting of 21K messages labelled with hotel and location entities. Our first model treats hotels and locations as separate entities, while our second model merges them and considers both hotels and locations as a single combined entity type. All models are evaluated by their precision, recall, and F1 scores for each entity type. The results are shown in Table TABREF14.",
"The combined NER model achieves the best accuracy, significantly better than the model with separate entity types. This is expected, since it only needs to identify entities as either hotel or location, without needing to distinguish them. The model is ineffective at differentiating between hotel and location names, likely because this is not always possible using syntactic properties alone; sometimes, world knowledge is required that is not available to the model."
],
[
"The information retrieval (IR) system takes a user search query and matches it with the best location or hotel entry in our database. It is invoked when the intent model detects a search intent, and the NER model recognizes a hotel or location named entity. This is a non-trivial problem because the official name of a hotel often differs significantly from what a user typically searches. For example, a user looking for the hotel “Hyatt Regency Atlanta Downtown” might search for “hyatt hotel atlanta”.",
"We first apply NER to extract the relevant parts of the query. Then, we use ElasticSearch to quickly retrieve a list of potentially relevant matches from our large database of cities and hotels, using tf-idf weighted n-gram matching. Finally, we train a neural network to rank the ElasticSearch results for relevancy, given the user query and the official hotel name.",
"Deep learning has been applied to short text ranking, for example, using LSTMs BIBREF13, or CNN-based architectures BIBREF14, BIBREF15. We experiment with several neural architectures, which take in the user query as one input and the hotel or city name as the second input. The model is trained to classify the match as relevant or irrelevant to the query. We compare the following models:",
"Averaged GloVe + feedforward: We use 100-dimensional, trainable GloVe embeddings BIBREF16 trained on Common Crawl, and produce sentence embeddings for each of the two inputs by averaging across all tokens. The sentence embeddings are then given to a feedforward neural network to predict the label.",
"BERT + fine-tuning: We follow the procedure for BERT sentence pair classification. That is, we feed the query as sentence A and the hotel name as sentence B into BERT, separated by a [SEP] token, then take the output corresponding to the [CLS] token into a final linear layer to predict the label. We initialize the weights with the pretrained checkpoint and fine-tune all layers for 3 epochs (Figure FIGREF19).",
"The models are trained on 9K search messages, with up to 10 results from ElasticSearch and annotations for which results are valid matches. Each training row is expanded into multiple message-result pairs, which are fed as instances to the network. For the BERT model, we use the uncased BERT-base, which requires significantly less memory than BERT-large. All models are trained end-to-end and implemented using AllenNLP BIBREF8.",
"For evaluation, the model predicts a relevance score for each entry returned by ElasticSearch, which gives a ranking of the results. Then, we evaluate the top-1 and top-3 recall: the proportion of queries for which a correct result appears as the top-scoring match, or among the top three scoring matches, respectively. The majority of our dataset has exactly one correct match. We use these metrics because depending on the confidence score, the chatbot either sends the top match directly, or sends a set of three potential matches and asks the user to disambiguate.",
"We also implement a rule-based unigram matching baseline, which takes the entry with highest unigram overlap with the query string to be the top match. This model only returns the top match, so only top-1 recall is evaluated, and top-3 recall is not applicable. Both neural models outperform the baseline, but by far the best performing model is BERT with fine-tuning, which retrieves the correct match for nearly 90% of queries (Table TABREF21)."
],
[
"Each of our three models is evaluated by internal cross-validation using the metrics described above; however, the conversational AI system as a whole is validated using external metrics: agent handoff rate and booking completion rate. The agent handoff rate is the proportion of conversations that involve a customer support agent; the booking completion rate is the proportion of conversations that lead to a completed hotel booking. Both are updated on a daily basis.",
"External metrics serve as a proxy for our NLP system's performance, since users are more likely to request an agent and less likely to complete their booking when the bot fails. Thus, an improvement in these metrics after a model deployment validates that the model functions as intended in the real world. However, both metrics are noisy and are affected by factors unrelated to NLP, such as seasonality and changes in the hotel supply chain."
],
[
"In this paper, we give an overview of our conversational AI and NLP system for hotel bookings, which is currently deployed in the real world. We describe the various machine learning models that we employ, and the unique opportunities of developing an e-commerce chatbot in the travel industry. Currently, we are building models to handle new types of queries (e.g., a hotel question-answering system), and using multi-task learning to combine our separate models. Another ongoing challenge is improving the efficiency of our models in production: since deep language models are memory-intensive, it is important to share memory across different models. We leave the detailed analysis of these systems to future work.",
"Our success demonstrates that our chatbot is a viable alternative to traditional mobile and web applications for commerce. Indeed, we believe that innovations in task-oriented chatbot technology will have tremendous potential to improve consumer experience and drive business growth in new and unexplored channels."
],
[
"We thank Frank Rudzicz for his helpful suggestions to drafts of this paper. We also thank the engineers at SnapTravel for building our chatbot: the conversational AI is just one of the many components."
]
],
"section_name": [
"Introduction",
"Related work",
"Chatbot architecture",
"Chatbot architecture ::: Dialogue management",
"Chatbot architecture ::: Data labelling",
"Models",
"Models ::: Intent model",
"Models ::: Named entity recognition",
"Models ::: Information retrieval",
"Models ::: External validation",
"Conclusion",
"Acknowledgment"
]
} | {
"answers": [
{
"annotation_id": [
"45cbac6b761799f2f93a16c2234e78a905d2c44f",
"d0c3494980ecc752f5012780ee00e02c1587b891"
],
"answer": [
{
"evidence": [
"We also implement a rule-based unigram matching baseline, which takes the entry with highest unigram overlap with the query string to be the top match. This model only returns the top match, so only top-1 recall is evaluated, and top-3 recall is not applicable. Both neural models outperform the baseline, but by far the best performing model is BERT with fine-tuning, which retrieves the correct match for nearly 90% of queries (Table TABREF21)."
],
"extractive_spans": [
"rule-based unigram matching baseline"
],
"free_form_answer": "",
"highlighted_evidence": [
"We also implement a rule-based unigram matching baseline, which takes the entry with highest unigram overlap with the query string to be the top match. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We also implement a rule-based unigram matching baseline, which takes the entry with highest unigram overlap with the query string to be the top match. This model only returns the top match, so only top-1 recall is evaluated, and top-3 recall is not applicable. Both neural models outperform the baseline, but by far the best performing model is BERT with fine-tuning, which retrieves the correct match for nearly 90% of queries (Table TABREF21)."
],
"extractive_spans": [
"a rule-based unigram matching baseline, which takes the entry with highest unigram overlap with the query string to be the top match"
],
"free_form_answer": "",
"highlighted_evidence": [
"We also implement a rule-based unigram matching baseline, which takes the entry with highest unigram overlap with the query string to be the top match. This model only returns the top match, so only top-1 recall is evaluated, and top-3 recall is not applicable. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"783e2f777d8de0a6b4b4f1bc16fd2b1d7f60e4fd",
"794a1acf92ee426585e8828aed369075ce72b0b4"
],
"answer": [
{
"evidence": [
"We use SpaCy to train custom NER models. The model initialized with SpaCy's English NER model, then fine-tuned using our data, consisting of 21K messages labelled with hotel and location entities. Our first model treats hotels and locations as separate entities, while our second model merges them and considers both hotels and locations as a single combined entity type. All models are evaluated by their precision, recall, and F1 scores for each entity type. The results are shown in Table TABREF14."
],
"extractive_spans": [],
"free_form_answer": "Using SpaCy",
"highlighted_evidence": [
"We use SpaCy to train custom NER models."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We use SpaCy to train custom NER models. The model initialized with SpaCy's English NER model, then fine-tuned using our data, consisting of 21K messages labelled with hotel and location entities. Our first model treats hotels and locations as separate entities, while our second model merges them and considers both hotels and locations as a single combined entity type. All models are evaluated by their precision, recall, and F1 scores for each entity type. The results are shown in Table TABREF14."
],
"extractive_spans": [],
"free_form_answer": "Trained using SpaCy and fine-tuned with their data of hotel and location entities",
"highlighted_evidence": [
"We use SpaCy to train custom NER models. The model initialized with SpaCy's English NER model, then fine-tuned using our data, consisting of 21K messages labelled with hotel and location entities."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"d550d8377d98368fd91ef591eeb7fcb08f5d8ee8",
"f85e35bf843372f67958ef50f640f8be5e0b9a84"
],
"answer": [
{
"evidence": [
"Averaged GloVe + feedforward: We use 100-dimensional, trainable GloVe embeddings BIBREF16 trained on Common Crawl, and produce sentence embeddings for each of the two inputs by averaging across all tokens. The sentence embeddings are then given to a feedforward neural network to predict the label.",
"BERT + fine-tuning: We follow the procedure for BERT sentence pair classification. That is, we feed the query as sentence A and the hotel name as sentence B into BERT, separated by a [SEP] token, then take the output corresponding to the [CLS] token into a final linear layer to predict the label. We initialize the weights with the pretrained checkpoint and fine-tune all layers for 3 epochs (Figure FIGREF19)."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Averaged GloVe + feedforward: We use 100-dimensional, trainable GloVe embeddings BIBREF16 trained on Common Crawl, and produce sentence embeddings for each of the two inputs by averaging across all tokens.",
"BERT + fine-tuning: We follow the procedure for BERT sentence pair classification. That is, we feed the query as sentence A and the hotel name as sentence B into BERT, separated by a [SEP] token, then take the output corresponding to the [CLS] token into a final linear layer to predict the label. "
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"We use a two-stage model; the first stage is a set of keyword-matching rules that cover some unambiguous words. The second stage is a neural classification model. We use ELMo BIBREF7 to generate a sequence of 1024-dimensional embeddings from the text message; these embeddings are then processed with a bi-LSTM with 100-dimensional hidden layer. The hidden states produced by the bi-LSTM are then fed into a feedforward neural network, followed by a final softmax to generate a distribution over all possible output classes. If the confidence of the best prediction is below a threshold, then the message is classified as unknown. The preprocessing and training is implemented using AllenNLP BIBREF8."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
" We use ELMo BIBREF7 to generate a sequence of 1024-dimensional embeddings from the text message; these embeddings are then processed with a bi-LSTM with 100-dimensional hidden layer. "
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"4254ca311334b98007b8ddcb3400725349c7a967",
"52fea250e9b3bc738dec727ddea24751398dade6"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: TABLE II RESULTS OF NER MODEL",
"FLOAT SELECTED: TABLE III RESULTS OF IR MODELS"
],
"extractive_spans": [],
"free_form_answer": "For NER, combined entity model achieves the best performance (F1 0.96). For IR, BERT+fine-tuning model achieves TOP-1 Recall 0.895 and Top-3 Recall 0.961.",
"highlighted_evidence": [
"FLOAT SELECTED: TABLE II RESULTS OF NER MODEL",
"FLOAT SELECTED: TABLE III RESULTS OF IR MODELS"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: TABLE II RESULTS OF NER MODEL",
"FLOAT SELECTED: TABLE III RESULTS OF IR MODELS"
],
"extractive_spans": [],
"free_form_answer": "F1 score of 0.96 on recognizing both hotel and location entities and Top-1 recall of 0.895 with the IR BERT model",
"highlighted_evidence": [
"FLOAT SELECTED: TABLE II RESULTS OF NER MODEL",
"FLOAT SELECTED: TABLE III RESULTS OF IR MODELS"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"beafea76542ea345fad668fad35ce96fd5dd0ddf",
"cdd6989a86eca45537b1d022e69348e806bdfe6d"
],
"answer": [
{
"evidence": [
"The hotel search is backed by a database of approximately 100,000 cities and 300,000 hotels, populated using data from our partners. Each database entry contains the name of the city or hotel, geographic information (e.g., address, state, country), and various metadata (e.g., review score, number of bookings).",
"We collect labelled training data from two sources. First, data for the intent model is extracted from conversations between users and customer support agents. To save time, the model suggests a pre-written response to the user, which the agent either accepts by clicking a button, or composes a response from scratch. This action is logged, and after being checked by a professional annotator, is added to our training data.",
"Second, we employ professional annotators to create training data for each of our models, using a custom-built interface. A pool of relevant messages is selected from past user conversations; each message is annotated once and checked again by a different annotator to minimize errors. We use the PyBossa framework to manage the annotation processes."
],
"extractive_spans": [],
"free_form_answer": "From conversions between users and customer support agents through their partners, and professional annotators creating data.",
"highlighted_evidence": [
"The hotel search is backed by a database of approximately 100,000 cities and 300,000 hotels, populated using data from our partners.",
"We collect labelled training data from two sources. First, data for the intent model is extracted from conversations between users and customer support agents. ",
"Second, we employ professional annotators to create training data for each of our models, using a custom-built interface."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Our chatbot system tries to find a desirable hotel for the user, through an interactive dialogue. First, the bot asks a series of questions, such as the dates of travel, the destination city, and a budget range. After the necessary information has been collected, the bot performs a search and sends a list of matching hotels, sorted based on the users' preferences; if the user is satisfied with the results, he can complete the booking within the chat client. Otherwise, the user may continue talking to the bot to further narrow down his search criteria.",
"The hotel search is backed by a database of approximately 100,000 cities and 300,000 hotels, populated using data from our partners. Each database entry contains the name of the city or hotel, geographic information (e.g., address, state, country), and various metadata (e.g., review score, number of bookings)."
],
"extractive_spans": [],
"free_form_answer": "Information from users and information from database of approximately 100,000 cities and 300,000 hotels, populated using data from their partners.",
"highlighted_evidence": [
"First, the bot asks a series of questions, such as the dates of travel, the destination city, and a budget range. After the necessary information has been collected, the bot performs a search and sends a list of matching hotels, sorted based on the users' preferences; if the user is satisfied with the results, ",
"The hotel search is backed by a database of approximately 100,000 cities and 300,000 hotels, populated using data from our partners. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"22967d346d1bb4c24b65a2049e24f0be6e685f4d",
"8c0eb417c28a1ed79eb0b1c9472230f7c5b6f104"
],
"answer": [
{
"evidence": [
"The intent model processes each incoming user message and classifies it as one of several intents. The most common intents are thanks, cancel, stop, search, and unknown (described in Table TABREF12); these intents were chosen for automation based on volume, ease of classification, and business impact. The result of the intent model is used to determine the bot's response, what further processing is necessary (in the case of search intent), and whether to direct the conversation to a human agent (in the case of unknown intent)."
],
"extractive_spans": [
"thanks",
"cancel",
"stop",
"search",
"unknown "
],
"free_form_answer": "",
"highlighted_evidence": [
"The intent model processes each incoming user message and classifies it as one of several intents. The most common intents are thanks, cancel, stop, search, and unknown (described in Table TABREF12); these intents were chosen for automation based on volume, ease of classification, and business impact"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The intent model processes each incoming user message and classifies it as one of several intents. The most common intents are thanks, cancel, stop, search, and unknown (described in Table TABREF12); these intents were chosen for automation based on volume, ease of classification, and business impact. The result of the intent model is used to determine the bot's response, what further processing is necessary (in the case of search intent), and whether to direct the conversation to a human agent (in the case of unknown intent)."
],
"extractive_spans": [
"The most common intents are thanks, cancel, stop, search, and unknown"
],
"free_form_answer": "",
"highlighted_evidence": [
"The intent model processes each incoming user message and classifies it as one of several intents. The most common intents are thanks, cancel, stop, search, and unknown (described in Table TABREF12); these intents were chosen for automation based on volume, ease of classification, and business impact."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"two",
"two",
"two",
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no"
],
"question": [
"What is the baseline?",
"How is their NER model trained?",
"Do they use pretrained word embeddings such as BERT?",
"How well does the system perform?",
"Where does their information come from?",
"What intents do they have?"
],
"question_id": [
"c25014b7e57bb2949138d64d49f356d69838bc25",
"25a8d432bf94af1662837877bc6c284e2fc3fbe2",
"be632f0246c2e5f049d12e796812f496e083c33e",
"415b42ef6ff92553d04bd44ed0cbf6b3d6c83e51",
"9da181ac8f2600eb19364c1b1e3cdeb569811a11",
"67f1b8a9f72e62cd74ec42e9631ef763a9b098c7"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Fig. 1. Screenshot of a typical conversation with our bot in Facebook Messenger.",
"TABLE I SOME INTENT CLASSES PREDICTED BY OUR MODEL.",
"Fig. 2. The intent model determines for each incoming message, whether the bot can respond adequately. If the message cannot be recognized as one of our intent classes, then the conversation is handed to a human agent, and is added to our training data.",
"Fig. 3. Diagram showing part of the state machine, with relevant transitions; this part is invoked when a user starts a new search for a hotel.",
"Fig. 4. Example of a conversation with our bot, with corresponding state transitions and model logic. First, the user message is processed by the intent model, which classifies the message into one of several intents (described in Table I). Depending on the intent and current conversation state, other models (NER and IR) may need to be invoked. Then, a response is generated based on output of the models, and the conversation transitions to a different state.",
"TABLE II RESULTS OF NER MODEL",
"TABLE III RESULTS OF IR MODELS",
"Fig. 5. BERT model for IR. The inputs are tokens for the user query (after NER) and the official hotel name, separated by a [SEP] token. The model learns to predict a relevance score between 0 and 1 (i.e., the pointwise approach to the learning-to-rank problem). Figure adapted from [13]."
],
"file": [
"1-Figure1-1.png",
"2-TableI-1.png",
"2-Figure2-1.png",
"3-Figure3-1.png",
"3-Figure4-1.png",
"3-TableII-1.png",
"4-TableIII-1.png",
"4-Figure5-1.png"
]
} | [
"How is their NER model trained?",
"How well does the system perform?",
"Where does their information come from?"
] | [
[
"1908.10001-Models ::: Named entity recognition-4"
],
[
"1908.10001-3-TableII-1.png",
"1908.10001-4-TableIII-1.png"
],
[
"1908.10001-Chatbot architecture-2",
"1908.10001-Chatbot architecture ::: Data labelling-0",
"1908.10001-Chatbot architecture ::: Data labelling-1",
"1908.10001-Chatbot architecture-0"
]
] | [
"Trained using SpaCy and fine-tuned with their data of hotel and location entities",
"F1 score of 0.96 on recognizing both hotel and location entities and Top-1 recall of 0.895 with the IR BERT model",
"Information from users and information from database of approximately 100,000 cities and 300,000 hotels, populated using data from their partners."
] | 395 |
1710.10380 | Speeding up Context-based Sentence Representation Learning with Non-autoregressive Convolutional Decoding | Context plays an important role in human language understanding, thus it may also be useful for machines learning vector representations of language. In this paper, we explore an asymmetric encoder-decoder structure for unsupervised context-based sentence representation learning. We carefully designed experiments to show that neither an autoregressive decoder nor an RNN decoder is required. After that, we designed a model which still keeps an RNN as the encoder, while using a non-autoregressive convolutional decoder. We further combine a suite of effective designs to significantly improve model efficiency while also achieving better performance. Our model is trained on two different large unlabelled corpora, and in both cases the transferability is evaluated on a set of downstream NLP tasks. We empirically show that our model is simple and fast while producing rich sentence representations that excel in downstream tasks. | {
"paragraphs": [
[
"Learning distributed representations of sentences is an important and hard topic in both the deep learning and natural language processing communities, since it requires machines to encode a sentence with rich language content into a fixed-dimension vector filled with real numbers. Our goal is to build a distributed sentence encoder learnt in an unsupervised fashion by exploiting the structure and relationships in a large unlabelled corpus.",
"Numerous studies in human language processing have supported that rich semantics of a word or sentence can be inferred from its context BIBREF0 , BIBREF1 . The idea of learning from the co-occurrence BIBREF2 was recently successfully applied to vector representation learning for words in BIBREF3 and BIBREF4 .",
"A very recent successful application of the distributional hypothesis BIBREF0 at the sentence-level is the skip-thoughts model BIBREF5 . The skip-thoughts model learns to encode the current sentence and decode the surrounding two sentences instead of the input sentence itself, which achieves overall good performance on all tested downstream NLP tasks that cover various topics. The major issue is that the training takes too long since there are two RNN decoders to reconstruct the previous sentence and the next one independently. Intuitively, given the current sentence, inferring the previous sentence and inferring the next one should be different, which supports the usage of two independent decoders in the skip-thoughts model. However, BIBREF6 proposed the skip-thought neighbour model, which only decodes the next sentence based on the current one, and has similar performance on downstream tasks compared to that of their implementation of the skip-thoughts model.",
"In the encoder-decoder models for learning sentence representations, only the encoder will be used to map sentences to vectors after training, which implies that the quality of the generated language is not our main concern. This leads to our two-step experiment to check the necessity of applying an autoregressive model as the decoder. In other words, since the decoder's performance on language modelling is not our main concern, it is preferred to reduce the complexity of the decoder to speed up the training process. In our experiments, the first step is to check whether “teacher-forcing” is required during training if we stick to using an autoregressive model as the decoder, and the second step is to check whether an autoregressive decoder is necessary to learn a good sentence encoder. Briefly, the experimental results show that an autoregressive decoder is indeed not essential in learning a good sentence encoder; thus the two findings of our experiments lead to our final model design.",
"Our proposed model has an asymmetric encoder-decoder structure, which keeps an RNN as the encoder and has a CNN as the decoder, and the model explores using only the subsequent context information as the supervision. The asymmetry in both model architecture and training pair reduces a large amount of the training time.",
"The contribution of our work is summarised as:",
"The following sections will introduce the components in our “RNN-CNN” model, and discuss our experimental design."
],
[
"Our model is highly asymmetric in terms of both the training pairs and the model structure. Specifically, our model has an RNN as the encoder, and a CNN as the decoder. During training, the encoder takes the INLINEFORM0 -th sentence INLINEFORM1 as the input, and then produces a fixed-dimension vector INLINEFORM2 as the sentence representation; the decoder is applied to reconstruct the paired target sequence INLINEFORM3 that contains the subsequent contiguous words. The distance between the generated sequence and the target one is measured by the cross-entropy loss at each position in INLINEFORM4 . An illustration is in Figure FIGREF4 . (For simplicity, we omit the subscript INLINEFORM5 in this section.)",
"1. Encoder: The encoder is a bi-directional Gated Recurrent Unit (GRU, BIBREF7 ). Suppose that an input sentence INLINEFORM0 contains INLINEFORM1 words, which are INLINEFORM2 , and they are transformed by an embedding matrix INLINEFORM3 to word vectors. The bi-directional GRU takes one word vector at a time, and processes the input sentence in both the forward and backward directions; both sets of hidden states are concatenated to form the hidden state matrix INLINEFORM7 , where INLINEFORM8 is the dimension of the hidden states INLINEFORM9 ( INLINEFORM10 ).",
"2. Representation: We aim to provide a model with faster training speed and better transferability than existing algorithms; thus we choose to apply a parameter-free composition function, which is a concatenation of the outputs from a global mean pooling over time and a global max pooling over time, on the computed sequence of hidden states INLINEFORM0 . The composition function is represented as DISPLAYFORM0 ",
"where INLINEFORM0 is the max operation on each row of the matrix INLINEFORM1 , which outputs a vector with dimension INLINEFORM2 . Thus the representation INLINEFORM3 . 3. Decoder: The decoder is a 3-layer CNN to reconstruct the paired target sequence INLINEFORM4 , which needs to expand INLINEFORM5 , which can be considered as a sequence with only one element, to a sequence with INLINEFORM6 elements. Intuitively, the decoder could be a stack of deconvolution layers. For fast training speed, we optimised the architecture to make it possible to use fully-connected layers and convolution layers in the decoder, since generally, convolution layers run faster than deconvolution layers in modern deep learning frameworks.",
"Suppose that the target sequence INLINEFORM0 has INLINEFORM1 words, which are INLINEFORM2 , the first layer of deconvolution will expand INLINEFORM3 , into a feature map with INLINEFORM4 elements. It can be easily implemented as a concatenation of outputs from INLINEFORM5 linear transformations in parallel. Then the second and third layer are 1D-convolution layers. The output feature map is INLINEFORM6 , where INLINEFORM7 is the dimension of the word vectors.",
"Note that our decoder is not an autoregressive model and has high training efficiency. We will discuss the reason for choosing this decoder which we call a predict-all-words CNN decoder.",
"4. Objective: The training objective is to maximise the likelihood of the target sequence being generated from the decoder. Since in our model, each word is predicted independently, a softmax layer is applied after the decoder to produce a probability distribution over words in INLINEFORM0 at each position, thus the probability of generating a word INLINEFORM1 in the target sequence is defined as: DISPLAYFORM0 ",
" where, INLINEFORM0 is the vector representation of INLINEFORM1 in the embedding matrix INLINEFORM2 , and INLINEFORM3 is the dot-product between the word vector and the feature vector produced by the decoder at position INLINEFORM4 . The training objective is to minimise the sum of the negative log-likelihood over all positions in the target sequence INLINEFORM5 : DISPLAYFORM0 ",
" where INLINEFORM0 and INLINEFORM1 contain the parameters in the encoder and the decoder, respectively. The training objective INLINEFORM2 is summed over all sentences in the training corpus."
],
[
"We use an encoder-decoder model and use context for learning sentence representations in an unsupervised fashion. Since the decoder won't be used after training, and the quality of the generated sequences is not our main focus, it is important to study the design of the decoder. Generally, a fast training algorithm is preferred; thus proposing a new decoder with high training efficiency and also strong transferability is crucial for an encoder-decoder model."
],
[
"Our design of the decoder is basically a 3-layer ConvNet that predicts all words in the target sequence at once. In contrast, existing work, such as skip-thoughts BIBREF5 , and CNN-LSTM BIBREF9 , use autoregressive RNNs as the decoders. As known, an autoregressive model is good at generating sequences with high quality, such as language and speech. However, an autoregressive decoder seems to be unnecessary in an encoder-decoder model for learning sentence representations, since it won't be used after training, and it takes up a large portion of the training time to compute the output and the gradient. Therefore, we conducted experiments to test the necessity of using an autoregressive decoder in learning sentence representations, and we had two findings.",
"Finding I: It is not necessary to input the correct words into an autoregressive decoder for learning sentence representations.",
"The experimental design was inspired by BIBREF10 . The model we designed for the experiment has a bi-directional GRU as the encoder, and an autoregressive decoder, including both RNN and CNN. We started by analysing the effect of different sampling strategies of the input words on learning an auto-regressive decoder.",
"We compared three sampling strategies of input words in decoding the target sequence with an autoregressive decoder: (1) Teacher-Forcing: the decoder always gets the ground-truth words; (2) Always Sampling: at time step INLINEFORM0 , a word is sampled from the multinomial distribution predicted at time step INLINEFORM1 ; (3) Uniform Sampling: a word is uniformly sampled from the dictionary INLINEFORM2 , then fed to the decoder at every time step.",
"The results are presented in Table TABREF10 (top two subparts). As we can see, the three decoding settings do not differ significantly in terms of the performance on selected downstream tasks, with RNN or CNN as the decoder. The results show that, in terms of learning good sentence representations, the autoregressive decoder doesn't require the correct ground-truth words as the inputs.",
"Finding II: The model with an autoregressive decoder performs similarly to the model with a predict-all-words decoder.",
"With Finding I, we conducted an experiment to test whether the model needs an autoregressive decoder at all. In this experiment, the goal is to compare the performance of the predict-all-words decoders and that of the autoregressive decoders separate from the RNN/CNN distinction, thus we designed a predict-all-words CNN decoder and RNN decoder. The predict-all-words CNN decoder is described in Section SECREF2 , which is a stack of three convolutional layers, and all words are predicted once at the output layer of the decoder. The predict-all-words RNN decoder is built based on our CNN decoder. To keep the number of parameters of the two predict-all-words decoder roughly the same, we replaced the last two convolutional layers with a bidirectional GRU.",
"The results are also presented in Table TABREF10 (3rd and 4th subparts). The performance of the predict-all-words RNN decoder does not significantly differ from that of any one of the autoregressive RNN decoders, and the same situation can be also observed in CNN decoders.",
"These two findings indeed support our choice of using a predict-all-words CNN as the decoder, as it brings the model high training efficiency while maintaining strong transferability."
],
[
"Since the encoder is a bi-directional RNN in our model, we have multiple ways to select/compute on the generated hidden states to produce a sentence representation. Instead of using the last hidden state as the sentence representation as done in skip-thoughts BIBREF5 and SDAE BIBREF11 , we followed the idea proposed in BIBREF12 . They built a model for supervised training on the SNLI dataset BIBREF13 that concatenates the outputs from a global mean pooling over time and a global max pooling over time to serve as the sentence representation, and showed a performance boost on the SNLI task. BIBREF14 found that the model with global max pooling function provides stronger transferability than the model with a global mean pooling function does.",
"In our proposed RNN-CNN model, we empirically show that the mean+max pooling provides stronger transferability than the max pooling alone does, and the results are presented in the last two sections of Table TABREF10 . The concatenation of a mean-pooling and a max pooling function is actually a parameter-free composition function, and the computation load is negligible compared to all the heavy matrix multiplications in the model. Also, the non-linearity of the max pooling function augments the mean pooling function for constructing a representation that captures a more complex composition of the syntactic information."
],
[
"We choose to share the parameters in the word embedding layer of the RNN encoder and the word prediction layer of the CNN decoder. Tying was shown in both BIBREF15 and BIBREF16 , and it generally helped to learn a better language model. In our model, tying also drastically reduces the number of parameters, which could potentially prevent overfitting.",
"Furthermore, we initialise the word embeddings with pretrained word vectors, such as word2vec BIBREF3 and GloVe BIBREF4 , since it has been shown that these pretrained word vectors can serve as a good initialisation for deep learning models, and more likely lead to better results than a random initialisation."
],
[
"We studied hyperparameters in our model design based on three out of 10 downstream tasks, which are SICK-R, SICK-E BIBREF17 , and STS14 BIBREF18 . The first model we created, which is reported in Section SECREF2 , is a decent design, and the following variations didn't give us much performance change except improvements brought by increasing the dimensionality of the encoder. However, we think it is worth mentioning the effect of hyperparameters in our model design. We present the Table TABREF21 in the supplementary material and we summarise it as follows:",
"1. Decoding the next sentence performed similarly to decoding the subsequent contiguous words.",
"2. Decoding the subsequent 30 words, which was adopted from the skip-thought training code, gave reasonably good performance. More words for decoding didn't give us a significant performance gain, and took longer to train.",
"3. Adding more layers into the decoder and enlarging the dimension of the convolutional layers indeed sightly improved the performance on the three downstream tasks, but as training efficiency is one of our main concerns, it wasn't worth sacrificing training efficiency for the minor performance gain.",
"4. Increasing the dimensionality of the RNN encoder improved the model performance, and the additional training time required was less than needed for increasing the complexity in the CNN decoder. We report results from both smallest and largest models in Table TABREF16 ."
],
[
"The vocabulary for unsupervised training contains the 20k most frequent words in BookCorpus. In order to generalise the model trained with a relatively small, fixed vocabulary to the much larger set of all possible English words, we followed the vocabulary expansion method proposed in BIBREF5 , which learns a linear mapping from the pretrained word vectors to the learnt RNN word vectors. Thus, the model benefits from the generalisation ability of the pretrained word embeddings.",
"The downstream tasks for evaluation include semantic relatedness (SICK, BIBREF17 ), paraphrase detection (MSRP, BIBREF19 ), question-type classification (TREC, BIBREF20 ), and five benchmark sentiment and subjective datasets, which include movie review sentiment (MR, BIBREF21 , SST, BIBREF22 ), customer product reviews (CR, BIBREF23 ), subjectivity/objectivity classification (SUBJ, BIBREF24 ), opinion polarity (MPQA, BIBREF25 ), semantic textual similarity (STS14, BIBREF18 ), and SNLI BIBREF13 . After unsupervised training, the encoder is fixed, and applied as a representation extractor on the 10 tasks.",
"To compare the effect of different corpora, we also trained two models on Amazon Book Review dataset (without ratings) which is the largest subset of the Amazon Review dataset BIBREF26 with 142 million sentences, about twice as large as BookCorpus.",
"Both training and evaluation of our models were conducted in PyTorch, and we used SentEval provided by BIBREF14 to evaluate the transferability of our models. All the models were trained for the same number of iterations with the same batch size, and the performance was measured at the end of training for each of the models."
],
[
"Table TABREF16 presents the results on 9 evaluation tasks of our proposed RNN-CNN models, and related work. The “small RNN-CNN” refers to the model with the dimension of representation as 1200, and the “large RNN-CNN” refers to that as 4800. The results of our “large RNN-CNN” model on SNLI is presented in Table TABREF19 .",
"Our work was inspired by analysing the skip-thoughts model BIBREF5 . The skip-thoughts model successfully applied this form of learning from the context information into unsupervised representation learning for sentences, and then, BIBREF29 augmented the LSTM with proposed layer-normalisation (Skip-thought+LN), which improved the skip-thoughts model generally on downstream tasks. In contrast, BIBREF11 proposed the FastSent model which only learns source and target word embeddings and is an adaptation of Skip-gram BIBREF3 to sentence-level learning without word order information. BIBREF9 applied a CNN as the encoder, but still applied LSTMs for decoding the adjacent sentences, which is called CNN-LSTM.",
"Our RNN-CNN model falls in the same category as it is an encoder-decoder model. Instead of decoding the surrounding two sentences as in skip-thoughts, FastSent and the compositional CNN-LSTM, our model only decodes the subsequent sequence with a fixed length. Compared with the hierarchical CNN-LSTM, our model showed that, with a proper model design, the context information from the subsequent words is sufficient for learning sentence representations. Particularly, our proposed small RNN-CNN model runs roughly three times faster than our implemented skip-thoughts model on the same GPU machine during training.",
"Proposed by BIBREF30 , BYTE m-LSTM model uses a multiplicative LSTM unit BIBREF31 to learn a language model. This model is simple, providing next-byte prediction, but achieves good results likely due to the extremely large training corpus (Amazon Review data, BIBREF26 ) that is also highly related to many of the sentiment analysis downstream tasks (domain matching).",
"We experimented with the Amazon Book review dataset, the largest subset of the Amazon Review. This subset is significantly smaller than the full Amazon Review dataset but twice as large as BookCorpus. Our RNN-CNN model trained on the Amazon Book review dataset resulted in performance improvement on all single-sentence classification tasks relative to that achieved with training under BookCorpus.",
"Unordered sentences are also useful for learning representations of sentences. ParagraphVec BIBREF32 learns a fixed-dimension vector for each sentence by predicting the words within the given sentence. However, after training, the representation for a new sentence is hard to derive, since it requires optimising the sentence representation towards an objective. SDAE BIBREF11 learns the sentence representations with a denoising auto-encoder model. Our proposed RNN-CNN model trains faster than SDAE does, and also because we utilised the sentence-level continuity as a supervision which SDAE doesn't, our model largely performs better than SDAE.",
"Another transfer approach is to learn a supervised discriminative classifier by distinguishing whether the sentence pair or triple comes from the same context. BIBREF33 proposed a model that learns to classify whether the input sentence triplet contains three contiguous sentences. DiscSent BIBREF34 and DisSent BIBREF35 both utilise the annotated explicit discourse relations, which is also good for learning sentence representations. It is a very promising research direction since the proposed models are generally computational efficient and have clear intuition, yet more investigations need to be done to augment the performance.",
"Supervised training for transfer learning is also promising when a large amount of human-annotated data is accessible. BIBREF14 proposed the InferSent model, which applies a bi-directional LSTM as the sentence encoder with multiple fully-connected layers to classify whether the hypothesis sentence entails the premise sentence in SNLI BIBREF13 , and MultiNLI BIBREF36 . The trained model demonstrates a very impressive transferability on downstream tasks, including both supervised and unsupervised. Our RNN-CNN model trained on Amazon Book Review data in an unsupervised way has better results on supervised tasks than InferSent but slightly inferior results on semantic relatedness tasks. We argue that labelling a large amount of training data is time-consuming and costly, while unsupervised learning provides great performance at a fraction of the cost. It could potentially be leveraged to initialise or more generally augment the costly human labelling, and make the overall system less costly and more efficient."
],
[
"In BIBREF11 , internal consistency is measured on five single sentence classification tasks (MR, CR, SUBJ, MPQA, TREC), MSRP and STS-14, and was found to be only above the “acceptable” threshold. They empirically showed that models that worked well on supervised evaluation tasks generally didn't perform well on unsupervised ones. This implies that we should consider supervised and unsupervised evaluations separately, since each group has higher internal consistency.",
"As presented in Table TABREF16 , the encoders that only sum over pretrained word vectors perform better overall than those with RNNs on unsupervised evaluation tasks, including STS14. In recent proposed log-bilinear models, such as FastSent BIBREF11 and SiameseBOW BIBREF37 , the sentence representation is composed by summing over all word representations, and the only tunable parameters in the models are word vectors. These resulting models perform very well on unsupervised tasks. By augmenting the pretrained word vectors with a weighted averaging process, and removing the top few principal components, which mainly encode frequently-used words, as proposed in BIBREF38 and BIBREF39 , the performance on the unsupervised evaluation tasks gets even better. Prior work suggests that incorporating word-level information helps the model to perform better on cosine distance based semantic textual similarity tasks.",
"Our model predicts all words in the target sequence at once, without an autoregressive process, and ties the word embedding layer in the encoder with the prediction layer in the decoder, which explicitly uses the word vectors in the target sequence as the supervision in training. Thus, our model incorporates the word-level information by using word vectors as the targets, and it improves the model performance on STS14 compared to other RNN-based encoders.",
" BIBREF38 conducted an experiment to show that the word order information is crucial in getting better results on supervised tasks. In our model, the encoder is still an RNN, which explicitly utilises the word order information. We believe that the combination of encoding a sentence with its word order information and decoding all words in a sentence independently inherently leverages the benefits from both log-linear models and RNN-based models."
],
[
"Inspired by learning to exploit the contextual information present in adjacent sentences, we proposed an asymmetric encoder-decoder model with a suite of techniques for improving context-based unsupervised sentence representation learning. Since we believe that a simple model will be faster in training and easier to analyse, we opt to use simple techniques in our proposed model, including 1) an RNN as the encoder, and a predict-all-words CNN as the decoder, 2) learning by inferring subsequent contiguous words, 3) mean+max pooling, and 4) tying word vectors with word prediction. With thorough discussion and extensive evaluation, we justify our decision making for each component in our RNN-CNN model. In terms of the performance and the efficiency of training, we justify that our model is a fast and simple algorithm for learning generic sentence representations from unlabelled corpora. Further research will focus on how to maximise the utility of the context information, and how to design simple architectures to best make use of it."
],
[
"Table TABREF21 presents the effect of hyperparameters."
],
[
"Given that the encoder takes a sentence as input, decoding the next sentence versus decoding the next fixed length window of contiguous words is conceptually different. This is because decoding the subsequent fixed-length sequence might not reach or might go beyond the boundary of the next sentence. Since the CNN decoder in our model takes a fixed-length sequence as the target, when it comes to decoding sentences, we would need to zero-pad or chop the sentences into a fixed length. As the transferability of the models trained in both cases perform similarly on the evaluation tasks (see rows 1 and 2 in Table TABREF21 ), we focus on the simpler predict-all-words CNN decoder that learns to reconstruct the next window of contiguous words."
],
[
"We varied the length of target sequences in three cases, which are 10, 30 and 50, and measured the performance of three models on all tasks. As stated in rows 1, 3, and 4 in Table TABREF21 , decoding short target sequences results in a slightly lower Pearson score on SICK, and decoding longer target sequences lead to a longer training time. In our understanding, decoding longer target sequences leads to a harder optimisation task, and decoding shorter ones leads to a problem that not enough context information is included for every input sentence. A proper length of target sequences is able to balance these two issues. The following experiments set subsequent 30 contiguous words as the target sequence."
],
[
"The CNN encoder we built followed the idea of AdaSent BIBREF41 , and we adopted the architecture proposed in BIBREF14 . The CNN encoder has four layers of convolution, each followed by a non-linear activation function. At every layer, a vector is calculated by a global max-pooling function over time, and four vectors from four layers are concatenated to serve as the sentence representation. We tweaked the CNN encoder, including different kernel size and activation function, and we report the best results of CNN-CNN model at row 6 in Table TABREF21 .",
"Even searching over many hyperparameters and selecting the best performance on the evaluation tasks (overfitting), the CNN-CNN model performs poorly on the evaluation tasks, although the model trains much faster than any other models with RNNs (which were not similarly searched). The RNN and CNN are both non-linear systems, and they both are capable of learning complex composition functions on words in a sentence. We hypothesised that the explicit usage of the word order information will augment the transferability of the encoder, and constrain the search space of the parameters in the encoder. The results support our hypothesis.",
"The future predictor in BIBREF9 also applies a CNN as the encoder, but the decoder is still an RNN, listed at row 11 in Table TABREF21 . Compared to our designed CNN-CNN model, their CNN-LSTM model contains more parameters than our model does, but they have similar performance on the evaluation tasks, which is also worse than our RNN-CNN model."
],
[
"Clearly, we can tell from the comparison between rows 1, 9 and 12 in Table TABREF21 , increasing the dimensionality of the RNN encoder leads to better transferability of the model.",
"Compared with RNN-RNN model, even with double-sized encoder, the model with CNN decoder still runs faster than that with RNN decoder, and it slightly outperforms the model with RNN decoder on the evaluation tasks.",
"At the same dimensionality of representation with Skip-thought and Skip-thought+LN, our proposed RNN-CNN model performs better on all tasks but TREC, on which our model gets similar results as other models do.",
"Compared with the model with larger-size CNN decoder, apparently, we can see that larger encoder size helps more than larger decoder size does (rows 7,8, and 9 in Table TABREF21 ).",
"In other words, an encoder with larger size will result in a representation with higher dimensionality, and generally, it will augment the expressiveness of the vector representation, and the transferability of the model."
],
[
"Our small RNN-CNN model has a bi-directional GRU as the encoder, with 300 dimension each direction, and the large one has 1200 dimension GRU in each direction. The batch size we used for training our model is 512, and the sequence length for both encoding and decoding are 30. The initial learning rate is INLINEFORM0 , and the Adam optimiser BIBREF40 is applied to tune the parameters in our model."
],
[
"Table TABREF26 contains all supervised task-dependent models for comparison."
]
],
"section_name": [
"Introduction",
"RNN-CNN Model",
"Architecture Design",
"CNN as the decoder",
"Mean+Max Pooling",
"Tying Word Embeddings and Word Prediction Layer",
"Study of the Hyperparameters in Our Model Design",
"Experiment Settings",
"Related work and Comparison",
"Discussion",
"Conclusion",
"Supplemental Material",
"Decoding Sentences vs. Decoding Sequences",
"Length of the Target Sequence TT",
"RNN Encoder vs. CNN Encoder",
"Dimensionality",
"Experimental Details",
"Results including supervised task-dependent models"
]
} | {
"answers": [
{
"annotation_id": [
"44af3d7690e3a896e074292d95e02396c0cc40be",
"f48678483981e059daef2577621d06a629b50d81"
],
"answer": [
{
"evidence": [
"The downstream tasks for evaluation include semantic relatedness (SICK, BIBREF17 ), paraphrase detection (MSRP, BIBREF19 ), question-type classification (TREC, BIBREF20 ), and five benchmark sentiment and subjective datasets, which include movie review sentiment (MR, BIBREF21 , SST, BIBREF22 ), customer product reviews (CR, BIBREF23 ), subjectivity/objectivity classification (SUBJ, BIBREF24 ), opinion polarity (MPQA, BIBREF25 ), semantic textual similarity (STS14, BIBREF18 ), and SNLI BIBREF13 . After unsupervised training, the encoder is fixed, and applied as a representation extractor on the 10 tasks."
],
"extractive_spans": [
"semantic relatedness (SICK, BIBREF17 ), paraphrase detection (MSRP, BIBREF19 ), question-type classification (TREC, BIBREF20 ), and five benchmark sentiment and subjective datasets, which include movie review sentiment (MR, BIBREF21 , SST, BIBREF22 ), customer product reviews (CR, BIBREF23 ), subjectivity/objectivity classification (SUBJ, BIBREF24 ), opinion polarity (MPQA, BIBREF25 ), semantic textual similarity (STS14, BIBREF18 ), and SNLI BIBREF13"
],
"free_form_answer": "",
"highlighted_evidence": [
"The downstream tasks for evaluation include semantic relatedness (SICK, BIBREF17 ), paraphrase detection (MSRP, BIBREF19 ), question-type classification (TREC, BIBREF20 ), and five benchmark sentiment and subjective datasets, which include movie review sentiment (MR, BIBREF21 , SST, BIBREF22 ), customer product reviews (CR, BIBREF23 ), subjectivity/objectivity classification (SUBJ, BIBREF24 ), opinion polarity (MPQA, BIBREF25 ), semantic textual similarity (STS14, BIBREF18 ), and SNLI BIBREF13 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The downstream tasks for evaluation include semantic relatedness (SICK, BIBREF17 ), paraphrase detection (MSRP, BIBREF19 ), question-type classification (TREC, BIBREF20 ), and five benchmark sentiment and subjective datasets, which include movie review sentiment (MR, BIBREF21 , SST, BIBREF22 ), customer product reviews (CR, BIBREF23 ), subjectivity/objectivity classification (SUBJ, BIBREF24 ), opinion polarity (MPQA, BIBREF25 ), semantic textual similarity (STS14, BIBREF18 ), and SNLI BIBREF13 . After unsupervised training, the encoder is fixed, and applied as a representation extractor on the 10 tasks."
],
"extractive_spans": [
"SICK",
"MSRP",
"TREC",
"MR",
"SST",
"CR",
"SUBJ",
"MPQA",
"STS14",
"SNLI"
],
"free_form_answer": "",
"highlighted_evidence": [
"The downstream tasks for evaluation include semantic relatedness (SICK, BIBREF17 ), paraphrase detection (MSRP, BIBREF19 ), question-type classification (TREC, BIBREF20 ), and five benchmark sentiment and subjective datasets, which include movie review sentiment (MR, BIBREF21 , SST, BIBREF22 ), customer product reviews (CR, BIBREF23 ), subjectivity/objectivity classification (SUBJ, BIBREF24 ), opinion polarity (MPQA, BIBREF25 ), semantic textual similarity (STS14, BIBREF18 ), and SNLI BIBREF13 . "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"22df19f69dde438dbad338287f47421a78a6dfcc",
"6bc04ab76e5c5419fc84c2c7dcc8159fcd90b285"
],
"answer": [
{
"evidence": [
"To compare the effect of different corpora, we also trained two models on Amazon Book Review dataset (without ratings) which is the largest subset of the Amazon Review dataset BIBREF26 with 142 million sentences, about twice as large as BookCorpus."
],
"extractive_spans": [
"Amazon Review dataset BIBREF26 with 142 million sentences, about twice as large as BookCorpus"
],
"free_form_answer": "",
"highlighted_evidence": [
"To compare the effect of different corpora, we also trained two models on Amazon Book Review dataset (without ratings) which is the largest subset of the Amazon Review dataset BIBREF26 with 142 million sentences, about twice as large as BookCorpus."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"To compare the effect of different corpora, we also trained two models on Amazon Book Review dataset (without ratings) which is the largest subset of the Amazon Review dataset BIBREF26 with 142 million sentences, about twice as large as BookCorpus."
],
"extractive_spans": [],
"free_form_answer": "71000000, 142000000",
"highlighted_evidence": [
"To compare the effect of different corpora, we also trained two models on Amazon Book Review dataset (without ratings) which is the largest subset of the Amazon Review dataset BIBREF26 with 142 million sentences, about twice as large as BookCorpus."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"five",
"five"
],
"paper_read": [
"no",
"no"
],
"question": [
"Which downstream tasks are considered?",
"How long are the two unlabelled corpora?"
],
"question_id": [
"99f898eb91538cb82bc9a00892d54ae2a740961e",
"cf68906b7d96ca0c13952a6597d1f23e5184c304"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Our proposed model is composed of an RNN encoder, and a CNN decoder. During training, a batch of sentences are sent to the model, and the RNN encoder computes a vector representation for each of sentences; then the CNN decoder needs to reconstruct the paired target sequence, which contains 30 contiguous words right after the input sentence, given the vector representation. 300 is the dimension of word vectors. 2dh is the dimension of the sentence representation which varies with the RNN encoder size. (Better view in colour.)",
"Table 1: The models here all have a bi-directional GRU as the encoder (dimensionality 300 in each direction). The default way of producing the representation is a concatenation of outputs from a global mean-pooling and a global max-pooling, while “·-MaxOnly” refers to the model with only global maxpooling. Bold numbers are the best results among all presented models. We found that 1) inputting correct words to an autoregressive decoder is not necessary; 2) predict-all-words decoders work roughly the same as autoregressive decoders; 3) mean+max pooling provides stronger transferability than the max-pooling alone does. The table supports our choice of the predict-all-words CNN decoder and the way of producing vector representations from the bi-directional RNN encoder.",
"Table 2: Related Work and Comparison. As presented, our designed asymmetric RNN-CNN model has strong transferability, and is overall better than existing unsupervised models in terms of fast training speed and good performance on evaluation tasks. “†”s refer to our models, and “small/large” refers to the dimension of representation as 1200/4800. Bold numbers are the best ones among the models with same training and transferring setting, and underlined numbers are best results among all transfer learning models. The training time of each model was collected from the paper that proposed it.",
"Table 3: We implemented the same classifier as mentioned in Vendrov et al. (2015) on top of the features computed by our model. Our proposed RNN-CNN model gets similar result on SNLI as skip-thoughts, but with much less training time.",
"Table 4: Architecture Comparison. As shown in the table, our designed asymmetric RNN-CNN model (row 1,9, and 12) works better than other asymmetric models (CNN-LSTM, row 11), and models with symmetric structure (RNN-RNN, row 5 and 10). In addition, with larger encoder size, our model demonstrates stronger transferability. The default setting for our CNN decoder is that it learns to reconstruct 30 words right next to every input sentence. “CNN(10)” represents a CNN decoder with the length of outputs as 10, and “CNN(50)” represents it with the length of outputs as 50. “†” indicates that the CNN decoder learns to reconstruct next sentence. “‡” indicates the results reported in Gan et al. as future predictor. The CNN encoder in our experiment, noted as “§”, was based on AdaSent in Zhao et al. and Conneau et al.. Bold numbers are best results among models at same dimension, and underlined numbers are best results among all models. For STS14, the performance measures are Pearson’s and Spearman’s score. For MSRP, the performance measures are accuracy and F1 score.",
"Table 5: Related Work and Comparison. As presented, our designed asymmetric RNN-CNN model has strong transferability, and is overall better than existing unsupervised models in terms of fast training speed and good performance on evaluation tasks. “†”s refer to our models, and “small/large” refers to the dimension of representation as 1200/4800. Bold numbers are the best ones among the models with same training and transferring setting, and underlined numbers are best results among all transfer learning models. The training time of each model was collected from the paper that proposed it."
],
"file": [
"3-Figure1-1.png",
"4-Table1-1.png",
"6-Table2-1.png",
"7-Table3-1.png",
"12-Table4-1.png",
"13-Table5-1.png"
]
} | [
"How long are the two unlabelled corpora?"
] | [
[
"1710.10380-Experiment Settings-2"
]
] | [
"71000000, 142000000"
] | 397 |
1911.11025 | Women, politics and Twitter: Using machine learning to change the discourse | Including diverse voices in political decision-making strengthens our democratic institutions. Within the Canadian political system, there is gender inequality across all levels of elected government. Online abuse, such as hateful tweets, leveled at women engaged in politics contributes to this inequity, particularly tweets focusing on their gender. In this paper, we present ParityBOT: a Twitter bot which counters abusive tweets aimed at women in politics by sending supportive tweets about influential female leaders and facts about women in public life. ParityBOT is the first artificial intelligence-based intervention aimed at affecting online discourse for women in politics for the better. The goal of this project is to: $1$) raise awareness of issues relating to gender inequity in politics, and $2$) positively influence public discourse in politics. The main contribution of this paper is a scalable model to classify and respond to hateful tweets with quantitative and qualitative assessments. The ParityBOT abusive classification system was validated on public online harassment datasets. We conclude with analysis of the impact of ParityBOT, drawing from data gathered during interventions in both the $2019$ Alberta provincial and $2019$ Canadian federal elections. | {
"paragraphs": [
[
"Our political systems are unequal, and we suffer for it. Diversity in representation around decision-making tables is important for the health of our democratic institutions BIBREF0. One example of this inequity of representation is the gender disparity in politics: there are fewer women in politics than men, largely because women do not run for office at the same rate as men. This is because women face systemic barriers in political systems across the world BIBREF1. One of these barriers is online harassment BIBREF2, BIBREF3. Twitter is an important social media platform for politicians to share their visions and engage with their constituents. Women are disproportionately harassed on this platform because of their gender BIBREF4.",
"To raise awareness of online abuse and shift the discourse surrounding women in politics, we designed, built, and deployed ParityBOT: a Twitter bot that classifies hateful tweets directed at women in politics and then posts “positivitweets”. This paper focuses on how ParityBOT improves discourse in politics.",
"Previous work that addressed online harassment focused on collecting tweets directed at women engaged in politics and journalism and determining if they were problematic or abusive BIBREF5, BIBREF3, BIBREF6. Inspired by these projects, we go one step further and develop a tool that directly engages in the discourse on Twitter in political communities. Our hypothesis is that by seeing “positivitweets” from ParityBOT in their Twitter feeds, knowing that each tweet is an anonymous response to a hateful tweet, women in politics will feel encouraged and included in digital political communitiesBIBREF7. This will reduce the barrier to fair engagement on Twitter for women in politics. It will also help achieve gender balance in Canadian politics and improve gender equality in our society."
],
[
"In this section, we outline the technical details of ParityBot. The system consists of: 1) a Twitter listener that collects and classifies tweets directed at a known list of women candidates, and 2) a responder that sends out positivitweets when hateful tweets are detected.",
"We collect tweets from Twitter's real-time streaming API. The stream listener uses the open-source Python library Tweepy BIBREF8. The listener analyses tweets in real-time by firing an asynchronous tweet analysis and storage function for each English tweet mentioning one or more candidate usernames of interest. We limit the streaming to English as our text analysis models are trained on English language corpora. We do not track or store retweets to avoid biasing the analysis by counting the same content multiple times. Twitter data is collected and used in accordance with the acceptable terms of use BIBREF9.",
"The tweet analysis and storage function acts as follows: 1) parsing the tweet information to clean and extract the tweet text, 2) scoring the tweet using multiple text analysis models, and 3) storing the data in a database table. We clean tweet text with a variety of rules to ensure that the tweets are cleaned consistent with the expectations of the analysis models (see Appdx SECREF9).",
"The text analysis models classify a tweet by using, as features, the outputs from Perspective API from Jigsaw BIBREF10, HateSonar BIBREF11, and VADER sentiment models BIBREF12. Perspective API uses machine learning models to score the perceived impact a tweet might have BIBREF10. The outputs from these models (i.e. 17 from Perspective, 3 from HateSonar, and 4 from VADER) are combined into a single feature vector for each tweet (see Appdx SECREF10). No user features are included in the tweet analysis models. While these features may improve classification accuracy they can also lead to potential bias BIBREF13.",
"We measure the relative correlation of each feature with the hateful or not hateful labels. We found that Perspective API's TOXICITY probability was the most consistently predictive feature for classifying hateful tweets. Fig. FIGREF5 shows the relative frequencies of hateful and non-hateful tweets over TOXICITY scores. During both elections, we opted to use a single Perspective API feature to trigger sending positivitweets. Using the single TOXICITY feature is almost as predictive as using all features and a more complex model SECREF14. It was also simpler to implement and process tweets at scale. The TOXICITY feature is the only output from the Perspective API with transparent evaluation details summarized in a Model Card BIBREF14, BIBREF15."
],
[
"Deploying ParityBOT during the Alberta 2019 election required volunteers to use online resources to create a database of all the candidates running in the Alberta provincial election. Volunteers recorded each candidate's self-identifying gender and Twitter handle in this database. For the 2019 federal Canadian election, we scraped a Wikipedia page that lists candidates BIBREF16. We used the Python library gender-guesser BIBREF17 to predict the gender of each candidate based on their first names. As much as possible, we manually validated these predictions with corroborating evidence found in candidates' biographies on their party's websites and in their online presence.",
"ParityBOT sent positivitweets composed by volunteers. These tweets expressed encouragement, stated facts about women in politics, and aimed to inspire and uplift the community. Volunteers submitted many of these positivitweets through an online form. Volunteers were not screened and anyone could access the positivitweet submission form. However, we mitigate the impact of trolls submitting hateful content, submitter bias, and ill-equipped submitters by reviewing, copy editing, and fact checking each tweet. Asking for community contribution in this way served to maximize limited copywriting resources and engage the community in the project."
],
[
"We evaluated the social impact of our system by interviewing individuals involved in government ($n=5$). We designed a discussion guide based on user experience research interview standards to speak with politicians in relevant jurisdictions BIBREF18. Participants had varying levels of prior awareness of the ParityBOT project. Our participants included 3 women candidates, each from a different major political party in the 2019 Alberta provincial election, and 2 men candidates at different levels of government representing Alberta areas. The full discussion guide for qualitative assessment is included in Appdx SECREF27. All participants provided informed consent to their anonymous feedback being included in this paper."
],
[
"We deployed ParityBOT during two elections: 1) the 2019 Alberta provincial election, and 2) the 2019 Canadian federal election. For each tweet we collected, we calculated the probability that the tweet was hateful or abusive. If the probability was higher than our response decision threshold, a positivitweet was posted. Comprehensive quantitative results are listed in Appendix SECREF6.",
"During the Alberta election, we initially set the decision threshold to a TOXICITY score above $0.5$ to capture the majority of hateful tweets, but we were sending too many tweets given the number of positivitweets we had in our library and the Twitter API daily limit BIBREF9. Thus, after the first 24 hours that ParityBOT was live, we increased the decision threshold to $0.8$, representing a significant inflection point for hatefulness in the training data (Fig. FIGREF5). We further increased the decision threshold to $0.9$ for the Canadian federal election given the increase in the number and rate of tweets processed. For the Alberta provincial election, the model classified 1468 tweets of the total 12726 as hateful, and posted only 973 positivitweets. This means that we did not send out a positivitweet for every classified hateful tweet, and reflects our decision rate-limit of ParityBOT. Similar results were found for the 2019 Canadian election."
],
[
"We wrote guidelines and values for this to guide the ongoing development of the ParityBOT project. These values help us make decision and maintain focus on the goal of this project.",
"While there is potential to misclassify tweets, the repercussions of doing so are limited. With ParityBOT, false negatives, hateful tweets classified as non-hateful, are not necessarily bad, since the bot is tweeting a positive message. False positives, non-hateful tweets classified as hateful, may result in tweeting too frequently, but this is mitigated by our choice of decision threshold.",
"In developing ParityBOT, we discussed the risks of using bots on social media and in politics. First, we included the word “bot” in the project title and Twitter handle to be clear that the Twitter account was tweeting automatically. We avoided automating any direct “at (@) mention” of Twitter users, only identifying individuals' Twitter handles manually when they had requested credit for their submitted positivitweet. We also acknowledge that we are limited in achieving certainty in assigning a gender to each candidate."
],
[
"In our qualitative research, we discovered that ParityBOT played a role in changing the discourse. One participant said, “it did send a message in this election that there were people watching” (P2). We consistently heard that negative online comments are a fact of public life, even to the point where it's a signal of growing influence. “When you're being effective, a good advocate, making good points, people are connecting with what you're saying. The downside is, it comes with a whole lot more negativity [...] I can always tell when a tweet has been effective because I notice I'm followed by trolls” (P1).",
"We heard politicians say that the way they have coped with online abuse is to ignore it. One participant explained, “I've tried to not read it because it's not fun to read horrible things about yourself” (P4). Others dismiss the idea that social media is a useful space for constructive discourse: “Because of the diminishing trust in social media, I'm stopping going there for more of my intelligent discourse. I prefer to participate in group chats with people I know and trust and listen to podcasts” (P3)."
],
[
"We would like to run ParityBOT in more jurisdictions to expand the potential impact and feedback possibilities. In future iterations, the system might better match positive tweets to the specific type of negative tweet the bot is responding to. Qualitative analysis helps to support the interventions we explore in this paper. To that end, we plan to survey more women candidates to better understand how a tool like this impacts them. Additionally, we look forward to talking to more women interested in politics to better understand whether a tool like this would impact their decision to run for office. We would like to expand our hateful tweet classification validation study to include larger, more recent abusive tweet datasets BIBREF19, BIBREF20. We are also exploring plans to extend ParityBOT to invite dialogue: for example, asking people to actively engage with ParityBOT and analyse reply and comment tweet text using natural language-based discourse analysis methods.",
"During the 2019 Alberta provincial and 2019 Canadian federal elections, ParityBOT highlighted that hate speech is prevalent and difficult to combat on our social media platforms as they currently exist, and it is impacting democratic health and gender equality in our communities BIBREF21. We strategically designed ParityBOT to inject hope and positivity into politics, to encourage more diverse candidates to participate. By using machine learning technology to address these systemic issues, we can help change the discourse an link progress in science to progress in humanity."
],
[
"We use regular expression rules to clean tweets: convert the text to lowercase, remove URLs, strip newlines, replace whitespace with a single space, and replace mentions with the text tag `MENTION'. While these rules may bias the classifiers, they allow for consistency and generalization between training, validation, and testing datasets."
],
[
"Each tweet is processed by three models: Perspective API from Jigsaw BIBREF10, HateSonar BIBREF11, and VADER sentiment models BIBREF12. Each of these models outputs a score between $[0,1]$ which correlates the text of the tweet with the specific measure of the feature. The outputs from these models (i.e. 17 from Perspective, 3 from HateSonar, and 4 from VADER) are combined into a single feature vector for each tweet. Below we list the outputs for each text featurization model:",
"nolistsep",
"[noitemsep]",
"Perspective API: 'IDENTITY_ATTACK', 'INCOHERENT', 'TOXICITY_FAST', 'THREAT', 'INSULT', 'LIKELY_TO_REJECT', 'TOXICITY', 'PROFANITY', 'SEXUALLY_EXPLICIT', 'ATTACK_ON_AUTHOR', 'SPAM', 'ATTACK_ON_COMMENTER', 'OBSCENE', 'SEVERE_TOXICITY', 'INFLAMMATORY'",
"HateSonar: 'sonar_hate_speech', 'sonar_offensive_language', 'sonar_neither'",
"VADER: 'vader_neg', 'vader_neu', 'vader_pos', 'vader_compound'"
],
[
"For validation, we found the most relevant features and set an abusive prediction threshold by using a dataset of 20194 cleaned, unique tweets identified as either hateful and not hateful from previous research BIBREF22. Each entry in our featurized dataset is composed of 24 features and a class label of hateful or not hateful. The dataset is shuffled and randomly split into training (80%) and testing (20%) sets matching the class balance ($25.4\\%$ hateful) of the full dataset. We use Adaptive Synthetic (ADASYN) sampling to resample and balance class proportions in the dataset BIBREF23.",
"With the balanced training dataset, we found the best performing classifier to be a gradient boosted decision tree BIBREF24 by sweeping over a set of possible models and hyperparameters using TPOT BIBREF25. For this sweep, we used 10-fold cross validation on the training data. We randomly partition this training data 10 times, fit a model on a training fraction, and validate on the held-out set.",
"We performed an ablation experiment to test the relative impact of the features derived from the various text classification models."
],
[
"This table includes quantitative results from the deployment of ParityBOT in the Alberta 2019 provincial and Canadian 2019 federal elections."
],
[
"Overview Interviews will be completed in three rounds with three different target participant segments.",
"Research Objectives nolistsep",
"[noitemsep]",
"Understand if and how the ParityBOT has impacted women in politics",
"Obtain feedback from Twitter users who've interacted with the bot",
"Explore potential opportunities to build on the existing idea and platform",
"Gain feedback and initial impressions from people who haven't interacted with the Bot, but are potential audience",
"Target Participants",
"nolistsep",
"[noitemsep]",
"Round 1: Women in politics who are familiar with the Bot",
"Round 2: Women who've interacted with the Bot (maybe those we don't know)",
"Round 3: Some women who may be running in the federal election who haven't heard of the ParityBOT, but might benefit from following it",
"All participants: Must be involved in politics in Canada and must be engaged on Twitter - i.e. have an account and follow political accounts and/or issues",
"Recruiting",
"nolistsep",
"[noitemsep]",
"Round 1: [Author] recruit from personal network via text",
"Round 2: Find people who've interacted with the bot on Twitter who we don't know, send them a DM, and ask if we can get their feedback over a 15- to 30-minute phone call",
"Round 3: Use contacts in Canadian politics to recruit participants who have no prior awareness of ParityBOT",
"Method 15- to 30-minute interviews via telephone",
"Output Summary of findings in the form of a word document that can be put into the paper"
],
[
"Introduction",
"[Author]: Hey! Thanks for doing this. This shouldn't take longer than 20 minutes. [Author] is a UX researcher and is working with us. They'll take it from here and explain our process, get your consent and conduct the interview. I'll be taking notes. Over to [Author]!",
"[Author]: Hi, my name is [Author], I'm working with [Author] and [Author] to get feedback on the ParityBOT; the Twitter Bot they created during the last provincial election.",
"With your permission, we'd like to record our conversation. The recording will only be used to help us capture notes from the session and figure out how to improve the project, and it won't be seen by anyone except the people working on this project. We may use some quotes in an academic paper, You'll be anonymous and we won't identify you personally by name.",
"If you have any concerns at time, we can stop the interview and the recording. Do we have your permission to do this? (Wait for verbal “yes”).",
"Round 1 (Women in Politics familiar with ParityBOT)",
"Background and Warm Up",
"nolistsep",
"[noitemsep]",
"When you were thinking about running for politics what were your major considerations? For example, barriers, concerns?",
"We know that online harassment is an issue for women in politics - have you experienced this in your career? How do you deal with harassment? What are your coping strategies?",
"What advice would you give to women in politics experiencing online harassment?",
"Introduction to PartyBOT Thanks very much, now, more specifically about the ParityBOT:",
"nolistsep",
"[noitemsep]",
"What do you know about the ParityBOT?",
"What do you think it's purpose is?",
"Did you encounter it? Tell me about how you first encountered it? Did it provide any value to you during your campaign? How? Do you think this is a useful tool? Why or why not? Did it mitigate the barrier of online harassment during your time as a politician?",
"Is there anything you don't like about the Bot?",
"Next Steps If you could build on this idea of mitigating online harassment for women in politics, what ideas or suggestions would you have?",
"Conclusion Any other thoughts or opinions about the ParityBOT you'd like to share before we end our call?",
"Thank you very much for your time! If you have any questions, or further comments, feel free to text or email [Author]."
]
],
"section_name": [
"Introduction",
"Methods ::: Technical Details for ParityBot",
"Methods ::: Collecting Twitter handles, predicting candidate gender, curating “positivitweets”",
"Methods ::: Qualitative Assessment",
"Results and Outcomes",
"Results and Outcomes ::: Values and Limitations",
"Results and Outcomes ::: User experience research results",
"Future Work and Conclusions",
"Tweet Cleaning and Feature Details ::: Tweet Cleaning Methods",
"Tweet Cleaning and Feature Details ::: Tweet Featurization Details",
"Tweet Cleaning and Feature Details ::: Validation and Ablation Experiments",
"Quantitative analysis of elections",
"ParityBOT Research Plan and Discussion Guide",
"ParityBOT Research Plan and Discussion Guide ::: Discussion Guide"
]
} | {
"answers": [
{
"annotation_id": [
"538c485c136354df717bf8213dcec2e7fb527c64",
"95a27af0ea105cb48e92f8da869a654178bab9f8"
],
"answer": [
{
"evidence": [
"We collect tweets from Twitter's real-time streaming API. The stream listener uses the open-source Python library Tweepy BIBREF8. The listener analyses tweets in real-time by firing an asynchronous tweet analysis and storage function for each English tweet mentioning one or more candidate usernames of interest. We limit the streaming to English as our text analysis models are trained on English language corpora. We do not track or store retweets to avoid biasing the analysis by counting the same content multiple times. Twitter data is collected and used in accordance with the acceptable terms of use BIBREF9."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We limit the streaming to English as our text analysis models are trained on English language corpora."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"We collect tweets from Twitter's real-time streaming API. The stream listener uses the open-source Python library Tweepy BIBREF8. The listener analyses tweets in real-time by firing an asynchronous tweet analysis and storage function for each English tweet mentioning one or more candidate usernames of interest. We limit the streaming to English as our text analysis models are trained on English language corpora. We do not track or store retweets to avoid biasing the analysis by counting the same content multiple times. Twitter data is collected and used in accordance with the acceptable terms of use BIBREF9."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
" We limit the streaming to English as our text analysis models are trained on English language corpora. "
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"484fb230d8e5d133ae4eb712d4317464de7ac93d",
"90d58e0afc3086ade3a398f44e84362576c6411e"
],
"answer": [
{
"evidence": [
"We evaluated the social impact of our system by interviewing individuals involved in government ($n=5$). We designed a discussion guide based on user experience research interview standards to speak with politicians in relevant jurisdictions BIBREF18. Participants had varying levels of prior awareness of the ParityBOT project. Our participants included 3 women candidates, each from a different major political party in the 2019 Alberta provincial election, and 2 men candidates at different levels of government representing Alberta areas. The full discussion guide for qualitative assessment is included in Appdx SECREF27. All participants provided informed consent to their anonymous feedback being included in this paper."
],
"extractive_spans": [
" interviewing individuals involved in government ($n=5$)"
],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluated the social impact of our system by interviewing individuals involved in government ($n=5$). We designed a discussion guide based on user experience research interview standards to speak with politicians in relevant jurisdictions BIBREF18."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We evaluated the social impact of our system by interviewing individuals involved in government ($n=5$). We designed a discussion guide based on user experience research interview standards to speak with politicians in relevant jurisdictions BIBREF18. Participants had varying levels of prior awareness of the ParityBOT project. Our participants included 3 women candidates, each from a different major political party in the 2019 Alberta provincial election, and 2 men candidates at different levels of government representing Alberta areas. The full discussion guide for qualitative assessment is included in Appdx SECREF27. All participants provided informed consent to their anonymous feedback being included in this paper."
],
"extractive_spans": [
"by interviewing individuals involved in government"
],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluated the social impact of our system by interviewing individuals involved in government ($n=5$). "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"2eea5cd61ffce39ad103c8da1ef85e06e51c961d",
"a10685a753a7ec3d20442f96c6b3152c9f809dd2"
],
"answer": [
{
"evidence": [
"For validation, we found the most relevant features and set an abusive prediction threshold by using a dataset of 20194 cleaned, unique tweets identified as either hateful and not hateful from previous research BIBREF22. Each entry in our featurized dataset is composed of 24 features and a class label of hateful or not hateful. The dataset is shuffled and randomly split into training (80%) and testing (20%) sets matching the class balance ($25.4\\%$ hateful) of the full dataset. We use Adaptive Synthetic (ADASYN) sampling to resample and balance class proportions in the dataset BIBREF23."
],
"extractive_spans": [
"20194 cleaned, unique tweets identified as either hateful and not hateful from previous research BIBREF22"
],
"free_form_answer": "",
"highlighted_evidence": [
"For validation, we found the most relevant features and set an abusive prediction threshold by using a dataset of 20194 cleaned, unique tweets identified as either hateful and not hateful from previous research BIBREF22."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"For validation, we found the most relevant features and set an abusive prediction threshold by using a dataset of 20194 cleaned, unique tweets identified as either hateful and not hateful from previous research BIBREF22. Each entry in our featurized dataset is composed of 24 features and a class label of hateful or not hateful. The dataset is shuffled and randomly split into training (80%) and testing (20%) sets matching the class balance ($25.4\\%$ hateful) of the full dataset. We use Adaptive Synthetic (ADASYN) sampling to resample and balance class proportions in the dataset BIBREF23."
],
"extractive_spans": [
" unique tweets identified as either hateful and not hateful from previous research BIBREF22"
],
"free_form_answer": "",
"highlighted_evidence": [
"For validation, we found the most relevant features and set an abusive prediction threshold by using a dataset of 20194 cleaned, unique tweets identified as either hateful and not hateful from previous research BIBREF22."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"231b85ed930bdf392a39481ed5f0de6d2227875c",
"a85d2d56858fb9723d70669995dc035a36fb0118"
],
"answer": [
{
"evidence": [
"ParityBOT sent positivitweets composed by volunteers. These tweets expressed encouragement, stated facts about women in politics, and aimed to inspire and uplift the community. Volunteers submitted many of these positivitweets through an online form. Volunteers were not screened and anyone could access the positivitweet submission form. However, we mitigate the impact of trolls submitting hateful content, submitter bias, and ill-equipped submitters by reviewing, copy editing, and fact checking each tweet. Asking for community contribution in this way served to maximize limited copywriting resources and engage the community in the project."
],
"extractive_spans": [],
"free_form_answer": "Manualy (volunteers composed them)",
"highlighted_evidence": [
"ParityBOT sent positivitweets composed by volunteers."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"ParityBOT sent positivitweets composed by volunteers. These tweets expressed encouragement, stated facts about women in politics, and aimed to inspire and uplift the community. Volunteers submitted many of these positivitweets through an online form. Volunteers were not screened and anyone could access the positivitweet submission form. However, we mitigate the impact of trolls submitting hateful content, submitter bias, and ill-equipped submitters by reviewing, copy editing, and fact checking each tweet. Asking for community contribution in this way served to maximize limited copywriting resources and engage the community in the project."
],
"extractive_spans": [
"Volunteers submitted many of these positivitweets through an online form"
],
"free_form_answer": "",
"highlighted_evidence": [
"These tweets expressed encouragement, stated facts about women in politics, and aimed to inspire and uplift the community. Volunteers submitted many of these positivitweets through an online form. Volunteers were not screened and anyone could access the positivitweet submission form. However, we mitigate the impact of trolls submitting hateful content, submitter bias, and ill-equipped submitters by reviewing, copy editing, and fact checking each tweet."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"4c6ebb5d9d2040a577dcdcf8c8cb2911570aedd7",
"9c137ff9a26e13e6736598dccd152c783bd674d6"
],
"answer": [
{
"evidence": [
"The text analysis models classify a tweet by using, as features, the outputs from Perspective API from Jigsaw BIBREF10, HateSonar BIBREF11, and VADER sentiment models BIBREF12. Perspective API uses machine learning models to score the perceived impact a tweet might have BIBREF10. The outputs from these models (i.e. 17 from Perspective, 3 from HateSonar, and 4 from VADER) are combined into a single feature vector for each tweet (see Appdx SECREF10). No user features are included in the tweet analysis models. While these features may improve classification accuracy they can also lead to potential bias BIBREF13."
],
"extractive_spans": [
"The text analysis models classify a tweet by using, as features, the outputs from Perspective API from Jigsaw BIBREF10, HateSonar BIBREF11, and VADER sentiment models BIBREF12"
],
"free_form_answer": "",
"highlighted_evidence": [
"The text analysis models classify a tweet by using, as features, the outputs from Perspective API from Jigsaw BIBREF10, HateSonar BIBREF11, and VADER sentiment models BIBREF12. Perspective API uses machine learning models to score the perceived impact a tweet might have BIBREF10. The outputs from these models (i.e. 17 from Perspective, 3 from HateSonar, and 4 from VADER) are combined into a single feature vector for each tweet (see Appdx SECREF10)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The text analysis models classify a tweet by using, as features, the outputs from Perspective API from Jigsaw BIBREF10, HateSonar BIBREF11, and VADER sentiment models BIBREF12. Perspective API uses machine learning models to score the perceived impact a tweet might have BIBREF10. The outputs from these models (i.e. 17 from Perspective, 3 from HateSonar, and 4 from VADER) are combined into a single feature vector for each tweet (see Appdx SECREF10). No user features are included in the tweet analysis models. While these features may improve classification accuracy they can also lead to potential bias BIBREF13."
],
"extractive_spans": [
"classify a tweet by using, as features, the outputs from Perspective API from Jigsaw BIBREF10, HateSonar BIBREF11, and VADER sentiment models BIBREF12"
],
"free_form_answer": "",
"highlighted_evidence": [
"The text analysis models classify a tweet by using, as features, the outputs from Perspective API from Jigsaw BIBREF10, HateSonar BIBREF11, and VADER sentiment models BIBREF12. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"Do the authors report only on English data?",
"How is the impact of ParityBOT analyzed?",
"What public online harassment datasets was the system validated on?",
"Where do the supportive tweets about women come from? Are they automatically or manually generated?",
"How are the hateful tweets aimed at women detected/classified?"
],
"question_id": [
"3e5162e6399c7d03ecc7007efd21d06c04cf2843",
"bd255aadf099854541d06997f83a0e478f526120",
"a9ff35f77615b3a4e7fd7b3a53d0b288a46f06ce",
"69a46a227269c3aac9bf9d7c3d698c787642f806",
"ebe6b8ec141172f7fea66f0a896b3124276d4884"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"twitter",
"twitter",
"twitter",
"twitter",
"twitter"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: Visualizing the training data distribution. Relative frequency of hateful versus not hateful tweets for varying levels of the Perspective API [17] TOXICITY score. Normalized histograms are plotted underneath kernel density estimation (KDE) plots.",
"Figure 2: 10-fold cross validation ablation experiment showing the relative impacts of including various feature sets (PERSPECTIVE [17], VADER [11], HATESONAR [8], from left to right) in the feature vectors on performance. Performance is measured using binary classification area under the receiver operated characteristic curve (ROC AUC) and averaged over the 10-folds. These methods are compared with the best performing classifier from the validation study, a gradient-boosted decision tree (textscLGBM [13], left) and a stratified RANDOM classifier (right)."
],
"file": [
"3-Figure1-1.png",
"7-Figure2-1.png"
]
} | [
"Where do the supportive tweets about women come from? Are they automatically or manually generated?"
] | [
[
"1911.11025-Methods ::: Collecting Twitter handles, predicting candidate gender, curating “positivitweets”-1"
]
] | [
"Manualy (volunteers composed them)"
] | 398 |
1708.07252 | A Study on Neural Network Language Modeling | An exhaustive study on neural network language modeling (NNLM) is performed in this paper. Different architectures of basic neural network language models are described and examined. A number of different improvements over basic neural network language models, including importance sampling, word classes, caching and bidirectional recurrent neural network (BiRNN), are studied separately, and the advantages and disadvantages of every technique are evaluated. Then, the limits of neural network language modeling are explored from the aspects of model architecture and knowledge representation. Part of the statistical information from a word sequence will be lost when it is processed word by word in a certain order, and the mechanism of training neural networks by updating weight matrices and vectors imposes severe restrictions on any significant enhancement of NNLM. For knowledge representation, the knowledge represented by neural network language models is the approximate probabilistic distribution of word sequences from a certain training data set rather than the knowledge of a language itself or the information conveyed by word sequences in a natural language. Finally, some directions for improving neural network language modeling further are discussed. | {
"paragraphs": [
[
"Generally, a well-designed language model makes a critical difference in various natural language processing (NLP) tasks, like speech recognition BIBREF0 , BIBREF1 , machine translation BIBREF2 , BIBREF3 , semantic extraction BIBREF4 , BIBREF5 and etc. Language modeling (LM), therefore, has been the research focus in NLP field all the time, and a large number of sound research results have been published in the past decades. N-gram based LM BIBREF6 , a non-parametric approach, is used to be state of the art, but now a parametric method - neural network language modeling (NNLM) is considered to show better performance and more potential over other LM techniques, and becomes the most commonly used LM technique in multiple NLP tasks.",
"Although some previous attempts BIBREF7 , BIBREF8 , BIBREF9 had been made to introduce artificial neural network (ANN) into LM, NNLM began to attract researches' attentions only after BIBREF10 and did not show prominent advantages over other techniques of LM until recurrent neural network (RNN) was investigated for NNLM BIBREF11 , BIBREF12 . After more than a decade's research, numerous improvements, marginal or critical, over basic NNLM have been proposed. However, the existing experimental results of these techniques are not comparable because they were obtained under different experimental setups and, sometimes, these techniques were evaluated combined with other different techniques. Another significant problem is that most researchers focus on achieving a state of the art language model, but the limits of NNLM are rarely studied. In a few works BIBREF13 on exploring the limits of NNLM, only some practical issues, like computational complexity, corpus, vocabulary size, and etc., were dealt with, and no attention was spared on the effectiveness of modeling a natural language using NNLM.",
"Since this study focuses on NNLM itself and does not aim at raising a state of the art language model, the techniques of combining neural network language models with other kind of language models, like N-gram based language models, maximum entropy (ME) language models and etc., will not be included. The rest of this paper is organized as follows: In next section, the basic neural network language models - feed-forward neural network language model (FNNLM), recurrent neural network language model (RNNLM) and long-short term memory (LSTM) RNNLM, will be introduced, including the training and evaluation of these models. In the third section, the details of some important NNLM techniques, including importance sampling, word classes, caching and bidirectional recurrent neural network (BiRNN), will be described, and experiments will be performed on them to examine their advantages and disadvantages separately. The limits of NNLM, mainly about the aspects of model architecture and knowledge representation, will be explored in the fourth section. A further work section will also be given to represent some further researches on NNLM. In last section, a conclusion about the findings in this paper will be made."
],
[
"The goal of statistical language models is to estimate the probability of a word sequence INLINEFORM0 in a natural language, and the probability can be represented by the production of the conditional probability of every word given all the previous ones: INLINEFORM1 ",
"where, INLINEFORM0 . This chain rule is established on the assumption that words in a word sequence only statistically depend on their previous context and forms the foundation of all statistical language modeling. NNLM is a kind of statistical language modeling, so it is also termed as neural probabilistic language modeling or neural statistical language modeling. According to the architecture of used ANN, neural network language models can be classified as: FNNLM, RNNLM and LSTM-RNNLM."
],
[
"As mentioned above, the objective of FNNLM is to evaluate the conditional probability INLINEFORM0 , but feed-forward neural network (FNN) lacks of an effective way to represent history context. Hence, the idea of n-gram based LM is adopted in FNNLM that words in a word sequence more statistically depend on the words closer to them, and only the INLINEFORM1 direct predecessor words are considered when evaluating the conditional probability, this is: INLINEFORM2 ",
"The architecture of the original FNNLM proposed by BIBREF10 is showed in Figure FIGREF2 , and INLINEFORM0 , INLINEFORM1 are the start and end marks of a word sequence respectively. In this model, a vocabulary is pre-built from a training data set, and every word in this vocabulary is assigned with a unique index. To evaluate the conditional probability of word INLINEFORM2 , its INLINEFORM3 direct previous words INLINEFORM4 are projected linearly into feature vectors using a shared matrix INLINEFORM5 according to their index in the vocabulary, where INLINEFORM6 is the size of the vocabulary and INLINEFORM7 is the feature vectors' dimension. In fact, every row of projection matrix INLINEFORM8 is a feature vector of a word in the vocabulary. The input INLINEFORM9 of FNN is formed by concatenating the feature vectors of words INLINEFORM10 , where INLINEFORM11 is the size of FNN's input layer. FNN can be generally represented as: INLINEFORM12 ",
"Where, INLINEFORM0 , INLINEFORM1 are weight matrixes, INLINEFORM2 is the size of hidden layer, INLINEFORM3 is the size of output layer, weight matrix INLINEFORM4 is for the direct connections between input layer and output layer, INLINEFORM5 and INLINEFORM6 are vectors for bias terms in hidden layer and output layer respectively, INLINEFORM7 is output vector, and INLINEFORM8 is activation function.",
"The INLINEFORM0 -th element of output vector INLINEFORM1 is the unnormalized conditional probability of the word with index INLINEFORM2 in the vocabulary. In order to guarantee all the conditional probabilities of words positive and summing to one, a softmax layer is always adopted following the output layer of FNN: INLINEFORM3 ",
"where INLINEFORM0 is the INLINEFORM1 -th element of output vector INLINEFORM2 , and INLINEFORM3 is the INLINEFORM4 -th word in the vocabulary.",
"Training of neural network language models is usually achieved by maximizing the penalized log-likelihood of the training data: INLINEFORM0 ",
"where, INLINEFORM0 is the set of model's parameters to be trained, INLINEFORM1 is a regularization term.",
"The recommended learning algorithm for neural network language models is stochastic gradient descent (SGD) method using backpropagation (BP) algorithm. A common choice for the loss function is the cross entroy loss which equals to negative log-likelihood here. The parameters are usually updated as: INLINEFORM0 ",
"where, INLINEFORM0 is learning rate and INLINEFORM1 is regularization parameter.",
"The performance of neural network language models is usually measured using perplexity (PPL) which can be defined as: INLINEFORM0 ",
"Perplexity can be defined as the exponential of the average number of bits required to encode the test data using a language model and lower perplexity indicates that the language model is closer to the true model which generates the test data."
],
[
"The idea of applying RNN in LM was proposed much earlier BIBREF10 , BIBREF14 , but the first serious attempt to build a RNNLM was made by BIBREF11 , BIBREF12 . RNNs are fundamentally different from feed-forward architectures in the sense that they operate on not only an input space but also an internal state space, and the state space enables the representation of sequentially extended dependencies. Therefore, arbitrary length of word sequence can be dealt with using RNNLM, and all previous context can be taken into account when predicting next word. As showed in Figure FIGREF5 , the representation of words in RNNLM is the same as that of FNNLM, but the input of RNN at every step is the feature vector of a direct previous word instead of the concatenation of the INLINEFORM0 previous words' feature vectors and all other previous words are taken into account by the internal state of previous step. At step INLINEFORM1 , RNN can be described as: DISPLAYFORM0 ",
"where, weight matrix INLINEFORM0 , and the input layer's size of RNN INLINEFORM1 . The outputs of RNN are also unnormalized probabilities and should be regularized using a softmax layer.",
"Because of the involvement of previous internal state at every step, back-propagation through time (BPTT) algorithm BIBREF15 is preferred for better performance when training RNNLMs. If data set is treated as a single long word sequence, truncated BPTT should be used and back-propagating error gradient through 5 steps is enough, at least for small corpus BIBREF16 . In this paper, neural network language models will all be trained on data set sentence by sentence, and the error gradient will be back-propagated trough every whole sentence without any truncation."
],
[
"Although RNNLM can take all predecessor words into account when predicting next word in a word sequence, but it is quite difficult to be trained over long term dependencies because of the vanishing or exploring problem BIBREF17 . LSTM-RNN was designed aiming at solving this problem, and better performance can be expected by replacing RNN with LSTM-RNN. LSTM-RNNLM was first proposed by BIBREF18 , and the whole architecture is almost the same as RNNLM except the part of neural network. LSTM-RNN was proposed by BIBREF17 and was refined and popularized in following works BIBREF19 , BIBREF20 . The general architecture of LSTM-RNN is: DISPLAYFORM0 ",
"Where, INLINEFORM0 , INLINEFORM1 , INLINEFORM2 are input gate, forget gate and output gate, respectively. INLINEFORM3 is the internal memory of unit. INLINEFORM4 , INLINEFORM5 , INLINEFORM6 , INLINEFORM7 , INLINEFORM8 , INLINEFORM9 , INLINEFORM10 , INLINEFORM11 , INLINEFORM12 , INLINEFORM13 , INLINEFORM14 , INLINEFORM15 are all weight matrixes. INLINEFORM16 , INLINEFORM17 , INLINEFORM18 , INLINEFORM19 , and INLINEFORM20 are vectors for bias terms. INLINEFORM21 is the activation function in hidden layer and INLINEFORM22 is the activation function for gates."
],
[
"Comparisons among neural network language models with different architectures have already been made on both small and large corpus BIBREF16 , BIBREF21 . The results show that, generally, RNNLMs outperform FNNLMs and the best performance is achieved using LSTM-NNLMs. However, the neural network language models used in these comparisons are optimized using various techniques, and even combined with other kind of language models, let alone the different experimental setups and implementation details, which make the comparison results fail to illustrate the fundamental discrepancy in the performance of neural network language models with different architecture and cannot be taken as baseline for the studies in this paper.",
"Comparative experiments on neural network language models with different architecture were repeated here. The models in these experiments were all implemented plainly, and only a class-based speed-up technique was used which will be introduced later. Experiments were performed on the Brown Corpus, and the experimental setup for Brown corpus is the same as that in BIBREF10 , the first 800000 words (ca01 INLINEFORM0 cj54) were used for training, the following 200000 words (cj55 INLINEFORM1 cm06) for validation and the rest (cn01 INLINEFORM2 cr09) for test.",
"The experiment results are showed in Table TABREF9 which suggest that, on a small corpus likes the Brown Corpus, RNNLM and LSTM-RNN did not show a remarkable advantage over FNNLM, instead a bit higher perplexity was achieved by LSTM-RNNLM. Maybe more data is needed to train RNNLM and LSTM-RNNLM because longer dependencies are taken into account by RNNLM and LSTM-RNNLM when predicting next word. LSTM-RNNLM with bias terms or direct connections was also evaluated here. When the direct connections between input layer and output layer of LSTM-RNN are enabled, a slightly higher perplexity but shorter training time were obtained. An explanation given for this phenomenon by BIBREF10 is that direct connections provide a bit more capacity and faster learning of the \"linear\" part of mapping from inputs to outputs but impose a negative effect on generalization. For bias terms, no significant improvement on performance was gained by adding bias terms which was also observed on RNNLM by BIBREF16 . In the rest of this paper, all studies will be performed on LSTM-RNNLM with neither direct connections nor bias terms, and the result of this model in Table TABREF9 will be used as the baseline for the rest studies."
],
[
"Inspired by the contrastive divergence model BIBREF22 , BIBREF23 proposed a sampling-based method to speed up the training of neural network language models. In order to apply this method, the outputs of neural network should be normalized in following way instead of using a softmax function: INLINEFORM0 ",
"then, neural network language models can be treated as a special case of energy-based probability models.",
"The main idea of sampling based method is to approximate the average of log-likelihood gradient with respect to the parameters INLINEFORM0 by samples rather than computing the gradient explicitly. The log-likelihood gradient for the parameters set INLINEFORM1 can be generally represented as the sum of two parts: positive reinforcement for target word INLINEFORM2 and negative reinforcement for all word INLINEFORM3 , weighted by INLINEFORM4 : INLINEFORM5 ",
"Three sampling approximation algorithms were presented by BIBREF23 : Monte-Carlo Algorithm, Independent Metropolis-Hastings Algorithm and Importance Sampling Algorithm. However, only importance sampling worked well with neural network language model. In fact, 19-fold speed-up was achieved during training while no degradation of the perplexities was observed on both training and test data BIBREF23 .",
"Importance sampling is a Monte-Carlo scheme using an existing proposal distribution, and its estimator can be represented as: INLINEFORM0 ",
"where, INLINEFORM0 is an existing proposal distribution, INLINEFORM1 is the number of samples from INLINEFORM2 , INLINEFORM3 is the set of samples from INLINEFORM4 . Appling importance sampling to the average log-likelihood gradient of negative samples and the denominator of INLINEFORM5 , then the overall estimator for example INLINEFORM6 using INLINEFORM7 samples from distribution INLINEFORM8 is: INLINEFORM9 ",
"In order to avoid divergence, the sample size INLINEFORM0 should be increased as training processes which is measured by the effective sample size of importance sampling: INLINEFORM1 ",
"At every iteration, sampling is done block by block with a constant size until the effective sample size INLINEFORM0 becomes greater than a minimum value, and a full back-propagation will be performed when the sampling size INLINEFORM1 is greater than a certain threshold.",
"The introduction of importance sampling is just posted here for completeness and no further studies will be performed on it. Because a quick statistical language model which is well trained, like n-gram based language model, is needed to implement importance sampling. In addition, it cannot be applied into RNNLM or LSTM-RNNLM directly and other simpler and more efficient speed-up techniques have been proposed now."
],
[
"Before the idea of word classes was introduced to NNLM, it had been used in LM extensively for improving perplexities or increasing speed BIBREF24 , BIBREF25 . With word classes, every word in vocabulary is assigned to a unique class, and the conditional probability of a word given its history can be decomposed into the probability of the word's class given its history and the probability of the word given its class and history, this is: INLINEFORM0 ",
"where INLINEFORM0 is the class of word INLINEFORM1 . The architecture of class based LSTM-RNNLM is illustrated in Figure FIGREF12 , and INLINEFORM2 , INLINEFORM3 are the lower and upper index of words in a class respectively.",
" BIBREF26 extended word classes to a hierarchical binary clustering of words and built a hierarchical neural network language model. In hierarchical neural network language model, instead of assigning every word in vocabulary with a unique class, a hierarchical binary tree of words is built according to the word similarity information extracted from WordNet BIBREF27 , and every word in vocabulary is assigned with a bit vector INLINEFORM0 , INLINEFORM1 . When INLINEFORM2 are given, INLINEFORM3 indicates that word INLINEFORM4 belongs to the sub-group 0 of current node and INLINEFORM5 indicates it belongs to the other one. The conditional probability of every word is represented as: INLINEFORM6 ",
"Theoretically, an exponential speed-up, on the order of INLINEFORM0 , can be achieved with this hierarchical architecture. In BIBREF26 , impressive speed-up during both training and test, which were less than the theoretical one, were obtained but an obvious increase in PPL was also observed. One possible explanation for this phenomenon is that the introduction of hierarchical architecture or word classes impose negative influence on the word classification by neural network language models. As is well known, a distribution representation for words, which can be used to represent the similarities between words, is formed by neural network language models during training. When words are clustered into classes, the similarities between words from different classes cannot be recognized directly. For a hierarchical clustering of words, words are clustered more finely which might lead to worse performance, i.e., higher perplexity, and deeper the hierarchical architecture is, worse the performance would be.",
"To explore this point further, hierarchical LSTM-NNLMs with different number of hierarchical layers were built. In these hierarchical LSTM-NNLMs, words were clustered randomly and uniformly instead of according to any word similarity information. The results of experiment on these models are showed in Table TABREF13 which strongly support the above hypothesis. When words are clustered into hierarchical word classes, the speed of both training and test increase, but the effect of speed-up decreases and the performance declines dramatically as the number of hierarchical layers increases. Lower perplexity can be expected if some similarity information of words is used when clustering words into classes. However, because of the ambiguity of words, the degradation of performance is unavoidable by assigning every word with a unique class or path. On the other hand, the similarities among words recognized by neural network is hard to defined, but it is sure that they are not confined to linguistical ones.",
"There is a simpler way to speed up neural network language models using word classes which was proposed by BIBREF12 . Words in vocabulary are arranged in descent order according to their frequencies in training data set, and are assigned to classes one by one using following rule: INLINEFORM0 ",
"where, INLINEFORM0 is the target number of word classes, INLINEFORM1 is the frequency of the INLINEFORM2 -th word in vocabulary, the sum of all words' frequencies INLINEFORM3 . If the above rule is satisfied, the INLINEFORM4 -th word in vocabulary will be assigned to INLINEFORM5 -th class. In this way, the word classes are not uniform, and the first classes hold less words with high frequency and the last ones contain more low-frequency words. This strategy was further optimized by BIBREF16 using following criterion: INLINEFORM6 ",
"where, the sum of all words' sqrt frequencies INLINEFORM0 .",
"The experiment results (Table TABREF13 ) indicate that higher perplexity and a little more training time were obtained when the words in vocabulary were classified according to their frequencies than classified randomly and uniformly. When words are clustered into word classed using their frequency, words with high frequency, which contribute more to final perplexity, are clustered into very small word classes, and this leads to higher perplexity. On the other hand, word classes consist of words with low frequency are much bigger which causes more training time. However, as the experiment results show, both perplexity and training time were improved when words were classified according to their sqrt frequency, because word classes were more uniform when built in this way. All other models in this paper were speeded up using word classes, and words were clustered according to their sqrt frequencies."
],
[
"Like word classes, caching is also a common used optimization technique in LM. The cache language models are based on the assumption that the word in recent history are more likely to appear again. In cache language model, the conditional probability of a word is calculated by interpolating the output of standard language model and the probability evaluated by caching, like: INLINEFORM0 ",
"where, INLINEFORM0 is the output of standard language model, INLINEFORM1 is the probability evaluated using caching, and INLINEFORM2 is a constant, INLINEFORM3 .",
" BIBREF28 combined FNNLM with cache model to enhance the performance of FNNLM in speech recognition, and the cache model was formed based on the previous context as following: INLINEFORM0 ",
"where, INLINEFORM0 means Kronecker delta, INLINEFORM1 is the cache length, i.e., the number of previous words taken as cache, INLINEFORM2 is a coefficient depends on INLINEFORM3 which is the distance between previous word and target word. A cache model with forgetting can be obtained by lowering INLINEFORM4 linearly or exponentially respecting to INLINEFORM5 . A class cache model was also proposed by BIBREF28 for the case in which words are clustered into word classes. In class cache model, the probability of target word given the last recent word classes is determined. However, both word based cache model and class one can be defined as a kind of unigram language model built from previous context, and this caching technique is an approach to combine neural network language model with a unigram model.",
"Another type of caching has been proposed as a speed-up technique for RNNLMs BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 . The main idea of this approach is to store the outputs and states of language models for future prediction given the same contextual history. In BIBREF32 , four caches were proposed, and they were all achieved by hash lookup tables to store key and value pairs: probability INLINEFORM0 and word sequence INLINEFORM1 ; history INLINEFORM2 and its corresponding hidden state vector; history INLINEFORM3 and the denominator of the softmax function for classes; history INLINEFORM4 , class index INLINEFORM5 and the denominator of the softmax function for words. In BIBREF32 , around 50-fold speed-up was reported with this caching technique in speech recognition but, unfortunately, it only works for prediction and cannot be applied during training.",
"Inspired by the first caching technique, if the previous context can be taken into account through the internal states of RNN, the perplexity is expected to decrease. In this paper, all language models are trained sentence by sentence, and the initial states of RNN are initialized using a constant vector. This caching technique can be implemented by simply initializing the initial states using the last states of direct previous sentence in the same article. However, the experiment result (Table TABREF15 ) shows this caching technique did not work as excepted and the perplexity even increased slightly. Maybe, the Brown Corpus is too small and more data is needed to evaluated this caching technique, as more context is taken into account with this caching technique."
],
[
"In BIBREF33 , significant improvement on neural machine translation (NMT) for an English to French translation task was achieved by reversing the order of input word sequence, and the possible explanation given for this phenomenon was that smaller \"minimal time lag\" was obtained in this way. In my opinion, another possible explanation is that a word in word sequence may more statistically depend on the following context than previous one. After all, a number of words are determined by its following words instead of previous ones in some natural languages. Take the articles in English as examples, indefinite article \"an\" is used when the first syllable of next word is a vowel while \"a\" is preposed before words starting with consonant. What's more, if a noun is qualified by an attributive clause, definite article \"the\" should be used before the noun. These examples illustrate that words in a word sequence depends on their following words sometimes. To verify this hypothesis further, an experiment is performed here in which the word order of every input sentence is reversed, and the probability of word sequence INLINEFORM0 is evaluated as following: INLINEFORM1 ",
"However, the experiment result (Table TABREF17 ) shows that almost the same perplexity was achieved by reversing the order of words. This indicates that the same amount statistical information, but not exactly the same statistical information, for a word in a word sequence can be obtained from its following context as from its previous context, at least for English.",
"As a word in word sequence statistically depends on its both previous and following context, it is better to predict a word using context from its both side. Bidirectional recurrent neural network (BiRNN) BIBREF34 was designed to process data in both directions with two separate hidden layers, so better performance can be expected by using BiRNN. BiRNN was introduced to speech recognition by BIBREF35 , and then was evaluated in other NLP tasks, like NMT BIBREF36 , BIBREF3 . In these studies, BiRNN showed more excellent performance than unidirectional RNN. Nevertheless, BiRNN cannot be evaluated in LM directly as unidirectional RNN, because statistical language modeling is based on the chain rule which assumes that word in a word sequence only statistically depends on one side context. BiRNN can be applied in NLP tasks, like speech recognition and machine translation, because the input word sequences in these tasks are treated as a whole and usually encoded as a single vector. The architecture for encoding input word sequences using BiRNN is showed in Figure FIGREF18 . The facts that better performance can be achieved using BiRNN in speech recognition or machine translation indicate that a word in a word sequence is statistically determined by the words of its both side, and it is not a suitable way to deal with word sequence in a natural language word by word in an order."
],
[
"NNLM is state of the art, and has been introduced as a promising approach to various NLP tasks. Numerous researchers from different areas of NLP attempt to improve NNLM, expecting to get better performance in their areas, like lower perplexity on test data, less word error rate (WER) in speech recognition, higher Bilingual Evaluation Understudy (BLEU) score in machine translation and etc. However, few of them spares attention on the limits of NNLM. Without a thorough understanding of NNLM's limits, the applicable scope of NNLM and directions for improving NNLM in different NLP tasks cannot be defined clearly. In this section, the limits of NNLM will be studied from two aspects: model architecture and knowledge representation."
],
[
"In most language models including neural network language models, words are predicated one by one according to their previous context or following one which is believed to simulate the way human deal with natural languages, and, according to common sense, human actually speak or write word by word in a certain order. However, the intrinsic mechanism in human mind of processing natural languages cannot like this way. As mentioned above, it is not always true that words in a word sequence only depend on their previous or following context. In fact, before human speaking or writing, they know what they want to express and map their ideas into word sequence, and the word sequence is already cached in memory when human speaking or writing. In most case, the cached word sequence may be not a complete sentence but at least most part of it. On the other hand, for reading or listening, it is better to know both side context of a word when predicting the meaning of the word or define the grammar properties of the word. Therefore, it is not a good strategy to deal with word sequences in a natural language word by word in a certain order which has also been questioned by the success application of BiRNN in some NLP tasks.",
"Another limit of NNLM caused by model architecture is original from the monotonous architecture of ANN. In ANN, models are trained by updating weight matrixes and vectors which distribute among all nodes. Training will become much more difficult or even unfeasible when increasing the size of model or the variety of connections among nodes, but it is a much efficient way to enhance the performance of ANN. As is well known, ANN is designed by imitating biological neural system, but biological neural system does not share the same limit with ANN. In fact, the strong power of biological neural system is original from the enormous number of neurons and various connections among neurons, including gathering, scattering, lateral and recurrent connections BIBREF37 . In biological neural system, the features of signals are detected by different receptors, and encoded by low-level central neural system (CNS) which is changeless. The encoded signals are integrated by high-level CNS. Inspired by this, an improvement scheme for the architecture of ANN is proposed, as illustrated in Figure FIGREF19 . The features of signal are extracted according to the knowledge in certain field, and every feature is encoded using changeless neural network with careful designed structure. Then, the encoded features are integrated using a trainable neural network which may share the same architecture as existing ones. Because the model for encoding does not need to be trained, the size of this model can be much huge and the structure can be very complexity. If all the parameters of encoding model are designed using binary, it is possible to implement this model using hardware and higher efficiency can be expected."
],
[
"The word \"learn\" appears frequently with NNLM, but what neural network language models learn from training data set is rarely analyzed carefully. The common statement about the knowledge learned by neural network language models is the probabilistic distribution of word sequences in a natural language. Strictly speaking, it is the probabilistic distribution of word sequences from a certain training data set in a natural language, rather than the general one. Hence, the neural network language model trained on data set from a certain field will perform well on data set from the same field, and neural network language model trained on a general data set may show worse performance when tested on data set from a special field. In order to verify this, one million words reviews on electronics and books were extracted from Amazon reviews BIBREF38 , BIBREF39 respectively as data sets from different fields, and 800000 words for training, 100000 words for validation, and the rest for test. In this experiment, two models were trained on training data from electronics reviews and books reviews respectively, and the other one was trained on both. Then, all three models were tested on the two test data sets.",
"The lowest perplexity on each test data set was gained by the model trained on corresponding training data set, instead of the model trained on both training data set (Table TABREF23 ). The results show that the knowledge represented by a neural network language model is the probabilistic distribution of word sequences from training data set which varies from field to field. Except for the probabilistic distribution of word sequences, the feature vectors of words in vocabulary are also formed by neural network during training. Because of the classification function of neural network, the similarities between words can be observed using these feature vectors. However, the similarities between words are evaluated in a multiple dimensional space by feature vectors and it is hard to know which features of words are taken into account when these vectors are formed, which means words cannot be grouped according to any single feature by the feature vectors. In summary, the knowledge represented by neural network language model is the probabilistic distribution of word sequences from certain training data set and feature vectors for words in vocabulary formed in multiple dimensional space. Neither the knowledge of language itself, like grammar, nor the knowledge conveyed by a language can be gained from neural network language models. Therefore, NNLM can be a good choice for NLP tasks in some special fields where language understanding is not necessary. Language understanding cannot be achieved just with the probabilistic distribution of word sequences in a natural language, and new kind of knowledge representation should be raised for language understanding.",
"Since the training of neural network language model is really expensive, it is important for a well-trained neural network language model to keep learning during test or be improved on other training data set separately. However, the neural network language models built so far do not show this capacity. Lower perplexity can be obtained when the parameters of a trained neural network language model are tuned dynamically during test, as showed in Table TABREF21 , but this does not mean neural network language model can learn dynamically during test. ANN is just a numerical approximation method in nature, and it approximate the target function, the probabilistic distribution of word sequences for LM, by tuning parameters when trained on data set. The learned knowledge is saved as weight matrixes and vectors. When a trained neural network language model is expected to adaptive to new data set, it should be retrained on both previous training data set and new one. This is another limit of NNLM because of knowledge representation, i.e., neural network language models cannot learn dynamically from new data set."
],
[
"Various architectures of neural network language models are described and a number of improvement techniques are evaluated in this paper, but there are still something more should be included, like gate recurrent unit (GRU) RNNLM, dropout strategy for addressing overfitting, character level neural network language model and ect. In addition, the experiments in this paper are all performed on Brown Corpus which is a small corpus, and different results may be obtained when the size of corpus becomes larger. Therefore, all the experiments in this paper should be repeated on a much larger corpus.",
"Several limits of NNLM has been explored, and, in order to achieve language understanding, these limits must be overcome. I have not come up with a complete solution yet but some ideas which will be explored further next. First, the architecture showed in Figure FIGREF19 can be used as a general improvement scheme for ANN, and I will try to figure out the structure of changeless neural network for encoder. What's more, word sequences are commonly taken as signals for LM, and it is easy to take linguistical properties of words or sentences as the features of signals. However, it maybe not a proper way to deal with natural languages. Natural languages are not natural but man-made, and linguistical knowledge are also created by human long after natural language appeared. Liguistical knowledge only covers the \"right\" word sequences in a natural language, but it is common to deal with \"wrong\" ones in real world. In nature, every natural language is a mechanism of linking voices or signs with objects, both concrete and abstract. Therefore, the proper way to deal with natural languages is to find the relations between special voices or signs and objects, and the features of voices or signs can be defined easier than a natural language itself. Every voice or sign can be encoded as a unique code, vector or matrix, according to its features, and the similarities among voices or signs are indeed can be recognized from their codes. It is really difficult to model the relation between voices or signs and objects at once, and this work should be split into several steps. The first step is to covert voice or sign into characters, i.e., speech recognition or image recognition, but it is achieved using the architecture described in Figure FIGREF19 ."
],
[
"In this paper, different architectures of neural network language models were described, and the results of comparative experiment suggest RNNLM and LSTM-RNNLM do not show any advantages over FNNLM on small corpus. The improvements over these models, including importance sampling, word classes, caching and BiRNN, were also introduced and evaluated separately, and some interesting findings were proposed which can help us have a better understanding of NNLM.",
"Another significant contribution in this paper is the exploration on the limits of NNLM from the aspects of model architecture and knowledge representation. Although state of the art performance has been achieved using NNLM in various NLP tasks, the power of NNLM has been exaggerated all the time. The main idea of NNLM is to approximate the probabilistic distribution of word sequences in a natural language using ANN. NNLM can be successfully applied in some NLP tasks where the goal is to map input sequences into output sequences, like speech recognition, machine translation, tagging and ect. However, language understanding is another story. For language understanding, word sequences must be linked with any concrete or abstract objects in real world which cannot be achieved just with this probabilistic distribution.",
"All nodes of neural network in a neural network language model have parameters needed to be tunning during training, so the training of the model will become very difficult or even impossible if the model's size is too large. However, an efficient way to enhance the performance of a neural network language model is to increase the size of model. One possible way to address this problem is to implement special functions, like encoding, using changeless neural network with special struture. Not only the size of the changeless neural network can be very large, but also the structure can be very complexity. The performance of NNLM, both perplexity and training time, is expected to be improved dramatically in this way."
]
],
"section_name": [
"Introduction",
"Basic Neural Network Language Models",
"Feed-forward Neural Network Language Model, FNNLM",
"Recurrent Neural Network Language Model, RNNLM",
"Long Short Term Memory RNNLM, LSTM-RNNLM",
"Comparison of Neural Network Language Models",
"Importance Sampling",
"Word Classes",
"Caching",
"Bidirectional Recurrent Neural Network",
"Limits of Neural Network Language Modeling",
"Model Architecture",
"Knowledge Representation",
"Future Work",
"Conclusion"
]
} | {
"answers": [
{
"annotation_id": [
"a6b5b6abbdfcb39e1e2daa0e4f525e12b5551ef4",
"faa6db19dcaab95ad5b9d01c0e855c3332582984"
],
"answer": [
{
"evidence": [
"In BIBREF33 , significant improvement on neural machine translation (NMT) for an English to French translation task was achieved by reversing the order of input word sequence, and the possible explanation given for this phenomenon was that smaller \"minimal time lag\" was obtained in this way. In my opinion, another possible explanation is that a word in word sequence may more statistically depend on the following context than previous one. After all, a number of words are determined by its following words instead of previous ones in some natural languages. Take the articles in English as examples, indefinite article \"an\" is used when the first syllable of next word is a vowel while \"a\" is preposed before words starting with consonant. What's more, if a noun is qualified by an attributive clause, definite article \"the\" should be used before the noun. These examples illustrate that words in a word sequence depends on their following words sometimes. To verify this hypothesis further, an experiment is performed here in which the word order of every input sentence is reversed, and the probability of word sequence INLINEFORM0 is evaluated as following: INLINEFORM1"
],
"extractive_spans": [
"English",
"French"
],
"free_form_answer": "",
"highlighted_evidence": [
"In BIBREF33 , significant improvement on neural machine translation (NMT) for an English to French translation task was achieved by reversing the order of input word sequence, and the possible explanation given for this phenomenon was that smaller \"minimal time lag\" was obtained in this way. In my opinion, another possible explanation is that a word in word sequence may more statistically depend on the following context than previous one."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"23e91d6b8814f3488dfd400c3e9c43ef864c1446",
"7d299421ae4b5a1bf4303408c90eef9f671eae22"
],
"answer": [
{
"evidence": [
"Like word classes, caching is also a common used optimization technique in LM. The cache language models are based on the assumption that the word in recent history are more likely to appear again. In cache language model, the conditional probability of a word is calculated by interpolating the output of standard language model and the probability evaluated by caching, like: INLINEFORM0",
"where, INLINEFORM0 is the output of standard language model, INLINEFORM1 is the probability evaluated using caching, and INLINEFORM2 is a constant, INLINEFORM3 ."
],
"extractive_spans": [
"The cache language models are based on the assumption that the word in recent history are more likely to appear again",
"conditional probability of a word is calculated by interpolating the output of standard language model and the probability evaluated by caching"
],
"free_form_answer": "",
"highlighted_evidence": [
"Like word classes, caching is also a common used optimization technique in LM. The cache language models are based on the assumption that the word in recent history are more likely to appear again. In cache language model, the conditional probability of a word is calculated by interpolating the output of standard language model and the probability evaluated by caching, like: INLINEFORM0\n\nwhere, INLINEFORM0 is the output of standard language model, INLINEFORM1 is the probability evaluated using caching, and INLINEFORM2 is a constant, INLINEFORM3 ."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Another type of caching has been proposed as a speed-up technique for RNNLMs BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 . The main idea of this approach is to store the outputs and states of language models for future prediction given the same contextual history. In BIBREF32 , four caches were proposed, and they were all achieved by hash lookup tables to store key and value pairs: probability INLINEFORM0 and word sequence INLINEFORM1 ; history INLINEFORM2 and its corresponding hidden state vector; history INLINEFORM3 and the denominator of the softmax function for classes; history INLINEFORM4 , class index INLINEFORM5 and the denominator of the softmax function for words. In BIBREF32 , around 50-fold speed-up was reported with this caching technique in speech recognition but, unfortunately, it only works for prediction and cannot be applied during training."
],
"extractive_spans": [
"store the outputs and states of language models for future prediction given the same contextual history"
],
"free_form_answer": "",
"highlighted_evidence": [
"Another type of caching has been proposed as a speed-up technique for RNNLMs BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 . The main idea of this approach is to store the outputs and states of language models for future prediction given the same contextual history. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"413d7d853162b253ae3e172c1352e1691833aa35",
"f3dff017edfb6ba5daa98449ca09e2432c61f896"
],
"answer": [
{
"evidence": [
"Since this study focuses on NNLM itself and does not aim at raising a state of the art language model, the techniques of combining neural network language models with other kind of language models, like N-gram based language models, maximum entropy (ME) language models and etc., will not be included. The rest of this paper is organized as follows: In next section, the basic neural network language models - feed-forward neural network language model (FNNLM), recurrent neural network language model (RNNLM) and long-short term memory (LSTM) RNNLM, will be introduced, including the training and evaluation of these models. In the third section, the details of some important NNLM techniques, including importance sampling, word classes, caching and bidirectional recurrent neural network (BiRNN), will be described, and experiments will be performed on them to examine their advantages and disadvantages separately. The limits of NNLM, mainly about the aspects of model architecture and knowledge representation, will be explored in the fourth section. A further work section will also be given to represent some further researches on NNLM. In last section, a conclusion about the findings in this paper will be made."
],
"extractive_spans": [
"FNNLM",
"RNNLM",
"BiRNN",
"LSTM"
],
"free_form_answer": "",
"highlighted_evidence": [
" The rest of this paper is organized as follows: In next section, the basic neural network language models - feed-forward neural network language model (FNNLM), recurrent neural network language model (RNNLM) and long-short term memory (LSTM) RNNLM, will be introduced, including the training and evaluation of these models. In the third section, the details of some important NNLM techniques, including importance sampling, word classes, caching and bidirectional recurrent neural network (BiRNN), will be described, and experiments will be performed on them to examine their advantages and disadvantages separately. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The experiment results are showed in Table TABREF9 which suggest that, on a small corpus likes the Brown Corpus, RNNLM and LSTM-RNN did not show a remarkable advantage over FNNLM, instead a bit higher perplexity was achieved by LSTM-RNNLM. Maybe more data is needed to train RNNLM and LSTM-RNNLM because longer dependencies are taken into account by RNNLM and LSTM-RNNLM when predicting next word. LSTM-RNNLM with bias terms or direct connections was also evaluated here. When the direct connections between input layer and output layer of LSTM-RNN are enabled, a slightly higher perplexity but shorter training time were obtained. An explanation given for this phenomenon by BIBREF10 is that direct connections provide a bit more capacity and faster learning of the \"linear\" part of mapping from inputs to outputs but impose a negative effect on generalization. For bias terms, no significant improvement on performance was gained by adding bias terms which was also observed on RNNLM by BIBREF16 . In the rest of this paper, all studies will be performed on LSTM-RNNLM with neither direct connections nor bias terms, and the result of this model in Table TABREF9 will be used as the baseline for the rest studies."
],
"extractive_spans": [
"RNNLM",
"LSTM-RNN",
"FNNLM"
],
"free_form_answer": "",
"highlighted_evidence": [
"The experiment results are showed in Table TABREF9 which suggest that, on a small corpus likes the Brown Corpus, RNNLM and LSTM-RNN did not show a remarkable advantage over FNNLM, instead a bit higher perplexity was achieved by LSTM-RNNLM."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"2aaa7436b78e26cac4328f32b342cee2f1316e1a",
"e0c6629a9962718b7862bdde634a69fb2833d8a9"
],
"answer": [
{
"evidence": [
"Several limits of NNLM has been explored, and, in order to achieve language understanding, these limits must be overcome. I have not come up with a complete solution yet but some ideas which will be explored further next. First, the architecture showed in Figure FIGREF19 can be used as a general improvement scheme for ANN, and I will try to figure out the structure of changeless neural network for encoder. What's more, word sequences are commonly taken as signals for LM, and it is easy to take linguistical properties of words or sentences as the features of signals. However, it maybe not a proper way to deal with natural languages. Natural languages are not natural but man-made, and linguistical knowledge are also created by human long after natural language appeared. Liguistical knowledge only covers the \"right\" word sequences in a natural language, but it is common to deal with \"wrong\" ones in real world. In nature, every natural language is a mechanism of linking voices or signs with objects, both concrete and abstract. Therefore, the proper way to deal with natural languages is to find the relations between special voices or signs and objects, and the features of voices or signs can be defined easier than a natural language itself. Every voice or sign can be encoded as a unique code, vector or matrix, according to its features, and the similarities among voices or signs are indeed can be recognized from their codes. It is really difficult to model the relation between voices or signs and objects at once, and this work should be split into several steps. The first step is to covert voice or sign into characters, i.e., speech recognition or image recognition, but it is achieved using the architecture described in Figure FIGREF19 ."
],
"extractive_spans": [],
"free_form_answer": "Improved architecture for ANN, use of linguistical properties of words or sentences as features.",
"highlighted_evidence": [
"First, the architecture showed in Figure FIGREF19 can be used as a general improvement scheme for ANN, and I will try to figure out the structure of changeless neural network for encoder. What's more, word sequences are commonly taken as signals for LM, and it is easy to take linguistical properties of words or sentences as the features of signals."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Various architectures of neural network language models are described and a number of improvement techniques are evaluated in this paper, but there are still something more should be included, like gate recurrent unit (GRU) RNNLM, dropout strategy for addressing overfitting, character level neural network language model and ect. In addition, the experiments in this paper are all performed on Brown Corpus which is a small corpus, and different results may be obtained when the size of corpus becomes larger. Therefore, all the experiments in this paper should be repeated on a much larger corpus."
],
"extractive_spans": [
"gate recurrent unit (GRU) RNNLM, dropout strategy for addressing overfitting, character level neural network language model and ect."
],
"free_form_answer": "",
"highlighted_evidence": [
"Various architectures of neural network language models are described and a number of improvement techniques are evaluated in this paper, but there are still something more should be included, like gate recurrent unit (GRU) RNNLM, dropout strategy for addressing overfitting, character level neural network language model and ect."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"What languages are used for the experiments?",
"What is the caching mechanism?",
"What language model architectures are examined?",
"What directions are suggested to improve language models?"
],
"question_id": [
"22375aac4cbafd252436b756bdf492a05f97eed8",
"d2f91303cec132750a416192f67c8ac1d3cf6fc0",
"9f065e787a0d40bb4550be1e0d64796925459005",
"e6f5444b7c08d79d4349e35d5298a63bb30e7004"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1: Feed-forward neural network language model",
"Figure 2: Recurrent neural network language model",
"Table 1: Camparative results of different neural network language models",
"Figure 3: Architecture of class based LSTM-RNNLM",
"Table 2: Results for class-based models",
"Table 3: Results of Cached LSTM-RNNLM",
"Figure 4: Encode word sequence using bidirectional recurrent neural network",
"Table 4: Reverse the order of word sequence",
"Figure 5: A possible scheme for the architecture of ANN",
"Table 5: Evaluating NNLM on data sets from different fields",
"Table 6: Examine the learning ability of neural network"
],
"file": [
"3-Figure1-1.png",
"5-Figure2-1.png",
"6-Table1-1.png",
"8-Figure3-1.png",
"9-Table2-1.png",
"11-Table3-1.png",
"12-Figure4-1.png",
"12-Table4-1.png",
"13-Figure5-1.png",
"14-Table5-1.png",
"16-Table6-1.png"
]
} | [
"What directions are suggested to improve language models?"
] | [
[
"1708.07252-Future Work-0",
"1708.07252-Future Work-1"
]
] | [
"Improved architecture for ANN, use of linguistical properties of words or sentences as features."
] | 400 |
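
The full text above speeds up the softmax with frequency-based word classes, assigning words sorted by descending frequency to classes so that each class covers a roughly equal share of the total square-root-frequency mass; the exact assignment rule and the perplexity formula are elided in the extracted text (INLINEFORM placeholders). The sketch below is a plausible reconstruction under that reading, not the paper's exact equations, together with the standard perplexity definition used to evaluate the models.

```python
# Hypothetical reconstruction of sqrt-frequency word-class assignment (the paper's exact
# rule is elided as INLINEFORM in the extracted text) plus the standard perplexity formula.
import math
from collections import Counter

def assign_word_classes(freqs, num_classes):
    """Partition words (sorted by descending frequency) into num_classes classes so that
    each class covers roughly 1/num_classes of the total sqrt-frequency mass."""
    words = sorted(freqs, key=freqs.get, reverse=True)
    total = sum(math.sqrt(freqs[w]) for w in words)
    classes, cumulative = {}, 0.0
    for w in words:
        cumulative += math.sqrt(freqs[w])
        # Class index grows as the cumulative sqrt-frequency mass crosses equal-sized bins.
        classes[w] = min(int(cumulative / total * num_classes), num_classes - 1)
    return classes

def perplexity(log_probs):
    """Perplexity = exp of the negative average log-probability assigned to each word."""
    return math.exp(-sum(log_probs) / len(log_probs))

# Toy usage with made-up counts and probabilities.
freqs = Counter({"the": 1000, "of": 700, "language": 50, "model": 40, "perplexity": 5})
print(assign_word_classes(freqs, num_classes=2))
print(perplexity([math.log(0.1), math.log(0.05), math.log(0.2)]))
```
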
1808.07733 | Revisiting the Importance of Encoding Logic Rules in Sentiment Classification | We analyze the performance of different sentiment classification models on syntactically complex inputs like A-but-B sentences. The first contribution of this analysis addresses reproducible research: to meaningfully compare different models, their accuracies must be averaged over far more random seeds than what has traditionally been reported. With proper averaging in place, we notice that the distillation model described in arXiv:1603.06318v4 [cs.LG], which incorporates explicit logic rules for sentiment classification, is ineffective. In contrast, using contextualized ELMo embeddings (arXiv:1802.05365v2 [cs.CL]) instead of logic rules yields significantly better performance. Additionally, we provide analysis and visualizations that demonstrate ELMo's ability to implicitly learn logic rules. Finally, a crowdsourced analysis reveals how ELMo outperforms baseline models even on sentences with ambiguous sentiment labels. | {
"paragraphs": [
[
"In this paper, we explore the effectiveness of methods designed to improve sentiment classification (positive vs. negative) of sentences that contain complex syntactic structures. While simple bag-of-words or lexicon-based methods BIBREF1 , BIBREF2 , BIBREF3 achieve good performance on this task, they are unequipped to deal with syntactic structures that affect sentiment, such as contrastive conjunctions (i.e., sentences of the form “A-but-B”) or negations. Neural models that explicitly encode word order BIBREF4 , syntax BIBREF5 , BIBREF6 and semantic features BIBREF7 have been proposed with the aim of improving performance on these more complicated sentences. Recently, hu2016harnessing incorporate logical rules into a neural model and show that these rules increase the model's accuracy on sentences containing contrastive conjunctions, while PetersELMo2018 demonstrate increased overall accuracy on sentiment analysis by initializing a model with representations from a language model trained on millions of sentences.",
"In this work, we carry out an in-depth study of the effectiveness of the techniques in hu2016harnessing and PetersELMo2018 for sentiment classification of complex sentences. Part of our contribution is to identify an important gap in the methodology used in hu2016harnessing for performance measurement, which is addressed by averaging the experiments over several executions. With the averaging in place, we obtain three key findings: (1) the improvements in hu2016harnessing can almost entirely be attributed to just one of their two proposed mechanisms and are also less pronounced than previously reported; (2) contextualized word embeddings BIBREF0 incorporate the “A-but-B” rules more effectively without explicitly programming for them; and (3) an analysis using crowdsourcing reveals a bigger picture where the errors in the automated systems have a striking correlation with the inherent sentiment-ambiguity in the data."
],
[
"Here we briefly review background from hu2016harnessing to provide a foundation for our reanalysis in the next section. We focus on a logic rule for sentences containing an “A-but-B” structure (the only rule for which hu2016harnessing provide experimental results). Intuitively, the logic rule for such sentences is that the sentiment associated with the whole sentence should be the same as the sentiment associated with phrase “B”.",
"More formally, let $p_\\theta (y|x)$ denote the probability assigned to the label $y\\in \\lbrace +,-\\rbrace $ for an input $x$ by the baseline model using parameters $\\theta $ . A logic rule is (softly) encoded as a variable $r_\\theta (x,y)\\in [0,1]$ indicating how well labeling $x$ with $y$ satisfies the rule. For the case of A-but-B sentences, $r_\\theta (x,y)=p_\\theta (y|B)$ if $x$ has the structure A-but-B (and 1 otherwise). Next, we discuss the two techniques from hu2016harnessing for incorporating rules into models: projection, which directly alters a trained model, and distillation, which progressively adjusts the loss function during training."
],
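To make the soft A-but-B rule above concrete, here is a minimal Python sketch, not the authors' implementation. It computes r(x, y) = p(y|B) for an A-but-B sentence and folds it into a rule-constrained distribution; the closed form q(y|x) proportional to p(y|x) * exp(-C * (1 - r(x, y))), the constant C, and the toy classifier are our assumptions for illustration.

```python
import numpy as np

def rule_value(predict_proba, sentence, label):
    """Soft A-but-B rule: r(x, y) = p(y | B) if x has the form "A but B", else 1."""
    tokens = sentence.lower().split()
    if "but" in tokens:
        b_clause = " ".join(tokens[tokens.index("but") + 1:])
        return predict_proba(b_clause)[label]
    return 1.0

def project(predict_proba, sentence, C=6.0):
    """Hedged sketch of projecting p(y|x) toward the rule-regularized space:
    q(y|x) is proportional to p(y|x) * exp(-C * (1 - r(x, y))).
    The exact closed form used by hu2016harnessing may differ; C is illustrative."""
    p = np.asarray(predict_proba(sentence), dtype=float)   # [p(neg|x), p(pos|x)]
    r = np.array([rule_value(predict_proba, sentence, y) for y in range(len(p))])
    q = p * np.exp(-C * (1.0 - r))
    return q / q.sum()

def toy_proba(text):
    """Toy stand-in classifier based on word counts (illustration only)."""
    pos = sum(w in {"good", "great", "revelatory"} for w in text.split())
    neg = sum(w in {"bad", "flat", "boring"} for w in text.split())
    logits = np.array([neg, pos], dtype=float)
    e = np.exp(logits - logits.max())
    return e / e.sum()

print(project(toy_proba, "the plot was flat but the acting was great"))
```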
[
"In this section we reanalyze the effectiveness of the techniques of hu2016harnessing and find that most of the performance gain is due to projection and not knowledge distillation. The discrepancy with the original analysis can be attributed to the relatively small dataset and the resulting variance across random initializations. We start by analyzing the baseline CNN by kim2014convolutional to point out the need for an averaged analysis."
],
[
"We run the baseline CNN by kim2014convolutional across 100 random seeds, training on sentence-level labels. We observe a large amount of variation from run-to-run, which is unsurprising given the small dataset size. The inset density plot in [fig:variation]Figure fig:variation shows the range of accuracies (83.47 to 87.20) along with 25, 50 and 75 percentiles. The figure also shows how the variance persists even after the average converges: the accuracies of 100 models trained for 20 epochs each are plotted in gray, and their average is shown in red.",
"We conclude that, to be reproducible, only averaged accuracies should be reported in this task and dataset. This mirrors the conclusion from a detailed analysis by reimers2017reporting in the context of named entity recognition."
],
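The reporting practice argued for above, averaging accuracies over many random seeds and showing percentiles, can be sketched in a few lines. The accuracy values below are synthetic stand-ins, not numbers from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for the test accuracies (%) of 100 independently seeded runs.
accuracies = rng.normal(loc=85.3, scale=0.7, size=100)

mean = accuracies.mean()
p25, p50, p75 = np.percentile(accuracies, [25, 50, 75])
# 95% confidence interval of the mean under a normal approximation.
ci95 = 1.96 * accuracies.std(ddof=1) / np.sqrt(len(accuracies))

print(f"mean = {mean:.2f} +/- {ci95:.2f}; quartiles = {p25:.2f} / {p50:.2f} / {p75:.2f}")
```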
[
"We carry out an averaged analysis of the publicly available implementation of hu2016harnessing. Our analysis reveals that the reported performance of their two mechanisms (projection and distillation) is in fact affected by the high variability across random seeds. Our more robust averaged analysis yields a somewhat different conclusion of their effectiveness.",
"In [fig:hu-performance]Figure fig:hu-performance, the first two columns show the reported accuracies in hu2016harnessing for models trained with and without distillation (corresponding to using values $\\pi =1$ and $\\pi =0.95^t$ in the $t^\\text{th}$ epoch, respectively). The two rows show the results for models with and without a final projection into the rule-regularized space. We keep our hyper-parameters identical to hu2016harnessing.",
"The baseline system (no-project, no-distill) is identical to the system of kim2014convolutional. All the systems are trained on the phrase-level SST2 dataset with early stopping on the development set. The number inside each arrow indicates the improvement in accuracy by adding either the projection or the distillation component to the training algorithm. Note that the reported figures suggest that while both components help in improving accuracy, the distillation component is much more helpful than the projection component.",
"The next two columns, which show the results of repeating the above analysis after averaging over 100 random seeds, contradict this claim. The averaged figures show lower overall accuracy increases, and, more importantly, they attribute these improvements almost entirely to the projection component rather than the distillation component. To confirm this result, we repeat our averaged analysis restricted to only “A-but-B” sentences targeted by the rule (shown in the last two columns). We again observe that the effect of projection is pronounced, while distillation offers little or no advantage in comparison."
],
[
"Traditional context-independent word embeddings like word2vec BIBREF8 or GloVe BIBREF9 are fixed vectors for every word in the vocabulary. In contrast, contextualized embeddings are dynamic representations, dependent on the current context of the word. We hypothesize that contextualized word embeddings might inherently capture these logic rules due to increasing the effective context size for the CNN layer in kim2014convolutional. Following the recent success of ELMo BIBREF0 in sentiment analysis, we utilize the TensorFlow Hub implementation of ELMo and feed these contextualized embeddings into our CNN model. We fine-tune the ELMo LSTM weights along with the CNN weights on the downstream CNN task. As in [sec:hu]Section sec:hu, we check performance with and without the final projection into the rule-regularized space.",
"We present our results in [tab:elmo]Table tab:elmo.",
"Switching to ELMo word embeddings improves performance by 2.9 percentage points on an average, corresponding to about 53 test sentences. Of these, about 32 sentences (60% of the improvement) correspond to A-but-B and negation style sentences, which is substantial when considering that only 24.5% of test sentences include these discourse relations ([tab:sst2]Table tab:sst2). As further evidence that ELMo helps on these specific constructions, the non-ELMo baseline model (no-project, no-distill) gets 255 sentences wrong in the test corpus on average, only 89 (34.8%) of which are A-but-B style or negations."
],
[
"We conduct a crowdsourced analysis that reveals that SST2 data has significant levels of ambiguity even for human labelers. We discover that ELMo's performance improvements over the baseline are robust across varying levels of ambiguity, whereas the advantage of hu2016harnessing is reversed in sentences of low ambiguity (restricting to A-but-B style sentences).",
"Our crowdsourced experiment was conducted on Figure Eight. Nine workers scored the sentiment of each A-but-B and negation sentence in the test SST2 split as 0 (negative), 0.5 (neutral) or 1 (positive). (SST originally had three crowdworkers choose a sentiment rating from 1 to 25 for every phrase.) More details regarding the crowd experiment's parameters have been provided in [appendix:appcrowd]Appendix appendix:appcrowd.",
"We average the scores across all users for each sentence. Sentences with a score in the range $(x, 1]$ are marked as positive (where $x\\in [0.5,1)$ ), sentences in $[0, 1-x)$ marked as negative, and sentences in $[1-x, x]$ are marked as neutral. For instance, “flat , but with a revelatory performance by michelle williams” (score=0.56) is neutral when $x=0.6$ . We present statistics of our dataset in [tab:crowdall]Table tab:crowdall. Inter-annotator agreement was computed using Fleiss' Kappa ( $\\kappa $ ). As expected, inter-annotator agreement is higher for higher thresholds (less ambiguous sentences). According to landis1977measurement, $\\kappa \\in (0.2, 0.4]$ corresponds to “fair agreement”, whereas $\\kappa \\in (0.4, 0.6]$ corresponds to “moderate agreement”.",
"We next compute the accuracy of our model for each threshold by removing the corresponding neutral sentences. Higher thresholds correspond to sets of less ambiguous sentences. [tab:crowdall]Table tab:crowdall shows that ELMo's performance gains in [tab:elmo]Table tab:elmo extends across all thresholds. In [fig:crowd]Figure fig:crowd we compare all the models on the A-but-B sentences in this set. Across all thresholds, we notice trends similar to previous sections: 1) ELMo performs the best among all models on A-but-B style sentences, and projection results in only a slight improvement; 2) models in hu2016harnessing (with and without distillation) benefit considerably from projection; but 3) distillation offers little improvement (with or without projection). Also, as the ambiguity threshold increases, we see decreasing gains from projection on all models. In fact, beyond the 0.85 threshold, projection degrades the average performance, indicating that projection is useful for more ambiguous sentences."
],
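A short sketch of the thresholding rule used on the crowdsourced scores: averaged worker scores in (x, 1] map to positive, [0, 1-x) to negative, and [1-x, x] to neutral, for a threshold x in [0.5, 1). The nine worker scores below are invented to reproduce the quoted average of roughly 0.56.

```python
def label_from_scores(scores, x=0.6):
    """Map averaged crowd scores (0 = negative, 0.5 = neutral, 1 = positive) to a label."""
    s = sum(scores) / len(scores)
    if s > x:
        return "positive"
    if s < 1.0 - x:
        return "negative"
    return "neutral"

# Nine hypothetical worker scores averaging to about 0.56; at x = 0.6 the sentence
# "flat, but with a revelatory performance by michelle williams" is labeled neutral.
print(label_from_scores([1, 1, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0], x=0.6))
```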
[
"We present an analysis comparing techniques for incorporating logic rules into sentiment classification systems. Our analysis included a meta-study highlighting the issue of stochasticity in performance across runs and the inherent ambiguity in the sentiment classification task itself, which was tackled using an averaged analysis and a crowdsourced experiment identifying ambiguous sentences. We present evidence that a recently proposed contextualized word embedding model (ELMo) BIBREF0 implicitly learns logic rules for sentiment classification of complex sentences like A-but-B sentences. Future work includes a fine-grained quantitative study of ELMo word vectors for logically complex sentences along the lines of peters2018dissecting."
],
[
"Crowd workers residing in five English-speaking countries (United States, United Kingdom, New Zealand, Australia and Canada) were hired. Each crowd worker had a Level 2 or higher rating on Figure Eight, which corresponds to a “group of more experienced, higher accuracy contributors”. Each contributor had to pass a test questionnaire to be eligible to take part in the experiment. Test questions were also hidden throughout the task and untrusted contributions were removed from the final dataset. For greater quality control, an upper limit of 75 judgments per contributor was enforced.",
"Crowd workers were paid a total of $1 for 50 judgments. An internal unpaid workforce (including the first and second author of the paper) of 7 contributors was used to speed up data collection."
]
],
"section_name": [
"Introduction",
"Logic Rules in Sentiment Classification",
"A Reanalysis",
"Importance of Averaging",
"Performance of hu2016harnessing",
"Contextualized Word Embeddings",
"Crowdsourced Experiments",
"Conclusion",
"Crowdsourcing Details"
]
} | {
"answers": [
{
"annotation_id": [
"d964ee252b07c12511f632517c13ea10086ed016",
"e9121c250e2cedee1a7acc8731eb2aa905bef292"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Average performance (across 100 seeds) of ELMo on the SST2 task. We show performance on A-but-B sentences (“but”), negations (“neg”)."
],
"extractive_spans": [],
"free_form_answer": "1).But 2).Eng 3). A-But-B",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Average performance (across 100 seeds) of ELMo on the SST2 task. We show performance on A-but-B sentences (“but”), negations (“neg”)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Switching to ELMo word embeddings improves performance by 2.9 percentage points on an average, corresponding to about 53 test sentences. Of these, about 32 sentences (60% of the improvement) correspond to A-but-B and negation style sentences, which is substantial when considering that only 24.5% of test sentences include these discourse relations ([tab:sst2]Table tab:sst2). As further evidence that ELMo helps on these specific constructions, the non-ELMo baseline model (no-project, no-distill) gets 255 sentences wrong in the test corpus on average, only 89 (34.8%) of which are A-but-B style or negations."
],
"extractive_spans": [
"A-but-B and negation"
],
"free_form_answer": "",
"highlighted_evidence": [
"Switching to ELMo word embeddings improves performance by 2.9 percentage points on an average, corresponding to about 53 test sentences. Of these, about 32 sentences (60% of the improvement) correspond to A-but-B and negation style sentences, which is substantial when considering that only 24.5% of test sentences include these discourse relations "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"db4f6f1ac73349bcebd4f6bf06de67906f18db9b",
"02c984a519661853b8ece0a27ce665d854551f7c"
]
},
{
"annotation_id": [
"245ba797a892cd244032900b2cc154b6c02ab254",
"6a9a9cbda9e502802308c80d416bd7c43d5c765e"
],
"answer": [
{
"evidence": [
"Switching to ELMo word embeddings improves performance by 2.9 percentage points on an average, corresponding to about 53 test sentences. Of these, about 32 sentences (60% of the improvement) correspond to A-but-B and negation style sentences, which is substantial when considering that only 24.5% of test sentences include these discourse relations ([tab:sst2]Table tab:sst2). As further evidence that ELMo helps on these specific constructions, the non-ELMo baseline model (no-project, no-distill) gets 255 sentences wrong in the test corpus on average, only 89 (34.8%) of which are A-but-B style or negations."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Switching to ELMo word embeddings improves performance by 2.9 percentage points on an average, corresponding to about 53 test sentences. Of these, about 32 sentences (60% of the improvement) correspond to A-but-B and negation style sentences, which is substantial when considering that only 24.5% of test sentences include these discourse relations ([tab:sst2]Table tab:sst2). As further evidence that ELMo helps on these specific constructions, the non-ELMo baseline model (no-project, no-distill) gets 255 sentences wrong in the test corpus on average, only 89 (34.8%) of which are A-but-B style or negations."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"We present an analysis comparing techniques for incorporating logic rules into sentiment classification systems. Our analysis included a meta-study highlighting the issue of stochasticity in performance across runs and the inherent ambiguity in the sentiment classification task itself, which was tackled using an averaged analysis and a crowdsourced experiment identifying ambiguous sentences. We present evidence that a recently proposed contextualized word embedding model (ELMo) BIBREF0 implicitly learns logic rules for sentiment classification of complex sentences like A-but-B sentences. Future work includes a fine-grained quantitative study of ELMo word vectors for logically complex sentences along the lines of peters2018dissecting."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
" We present evidence that a recently proposed contextualized word embedding model (ELMo) BIBREF0 implicitly learns logic rules for sentiment classification of complex sentences like A-but-B sentences."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"db4f6f1ac73349bcebd4f6bf06de67906f18db9b",
"45b212ff3348e2473d3e5504ca1200bcf85fcbf5"
]
}
],
"nlp_background": [
"infinity",
"infinity"
],
"paper_read": [
"no",
"no"
],
"question": [
"What logic rules can be learned using ELMo?",
"Does Elmo learn all possible logic rules?"
],
"question_id": [
"59f41306383dd6e201bded0f1c7c959ec4f61c5a",
"b3432f52af0b95929e6723dd1f01ce029d90a268"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
""
],
"topic_background": [
"research",
"research"
]
} | {
"caption": [
"Figure 1: Variation in models trained on SST-2 (sentenceonly). Accuracies of 100 randomly initialized models are plotted against the number of epochs of training (in gray), along with their average accuracies (in red, with 95% confidence interval error bars). The inset density plot shows the distribution of accuracies when trained with early stopping.",
"Table 1: Statistics of SST2 dataset. Here “Discourse” includes both A-but-B and negation sentences. The mean length of sentences is in terms of the word count.",
"Figure 2: Comparison of the accuracy improvements reported in Hu et al. (2016) and those obtained by averaging over 100 random seeds. The last two columns show the (averaged) accuracy improvements for A-but-B style sentences. All models use the publicly available implementation of Hu et al. (2016) trained on phrase-level SST2 data.",
"Table 2: Average performance (across 100 seeds) of ELMo on the SST2 task. We show performance on A-but-B sentences (“but”), negations (“neg”).",
"Figure 3: Heat map showing the cosine similarity between pairs of word vectors within a single sentence. The left figure has fine-tuned word2vec embeddings. The middle figure contains the original ELMo embeddings without any fine-tuning. The right figure contains fine-tuned ELMo embeddings. For better visualization, the cosine similarity between identical words has been set equal to the minimum value in the heat map.",
"Table 3: Number of sentences in the crowdsourced study (447 sentences) which got marked as neutral and which got the opposite of their labels in the SST2 dataset, using various thresholds. Inter-annotator agreement is computed using Fleiss’ Kappa. Average accuracies of the baseline and ELMo (over 100 seeds) on non-neutral sentences are also shown.",
"Figure 4: Average performance on the A-but-B part of the crowd-sourced dataset (210 sentences, 100 seeds)). For each threshold, only non-neutral sentences are used for evaluation."
],
"file": [
"2-Figure1-1.png",
"2-Table1-1.png",
"3-Figure2-1.png",
"4-Table2-1.png",
"5-Figure3-1.png",
"5-Table3-1.png",
"5-Figure4-1.png"
]
} | [
"What logic rules can be learned using ELMo?"
] | [
[
"1808.07733-4-Table2-1.png",
"1808.07733-Contextualized Word Embeddings-2"
]
] | [
"1).But 2).Eng 3). A-But-B"
] | 401 |
1912.04979 | Advances in Online Audio-Visual Meeting Transcription | This paper describes a system that generates speaker-annotated transcripts of meetings by using a microphone array and a 360-degree camera. The hallmark of the system is its ability to handle overlapped speech, which has been an unsolved problem in realistic settings for over a decade. We show that this problem can be addressed by using a continuous speech separation approach. In addition, we describe an online audio-visual speaker diarization method that leverages face tracking and identification, sound source localization, speaker identification, and, if available, prior speaker information for robustness to various real world challenges. All components are integrated in a meeting transcription framework called SRD, which stands for "separate, recognize, and diarize". Experimental results using recordings of natural meetings involving up to 11 attendees are reported. The continuous speech separation improves a word error rate (WER) by 16.1% compared with a highly tuned beamformer. When a complete list of meeting attendees is available, the discrepancy between WER and speaker-attributed WER is only 1.0%, indicating accurate word-to-speaker association. This increases marginally to 1.6% when 50% of the attendees are unknown to the system. | {
"paragraphs": [
[
"",
"The goal of meeting transcription is to have machines generate speaker-annotated transcripts of natural meetings based on their audio and optionally video recordings. Meeting transcription and analytics would be a key to enhancing productivity as well as improving accessibility in the workplace. It can also be used for conversation transcription in other domains such as healthcare BIBREF0. Research in this space was promoted in the 2000s by NIST Rich Transcription Evaluation series and public release of relevant corpora BIBREF1, BIBREF2, BIBREF3. While systems developed in the early days yielded high error rates, advances have been made in individual component technology fields, including conversational speech recognition BIBREF4, BIBREF5, far-field speech processing BIBREF6, BIBREF7, BIBREF8, and speaker identification and diarization BIBREF9, BIBREF10, BIBREF11. When cameras are used in addition to microphones to capture the meeting conversations, speaker identification quality could be further improved thanks to the computer vision technology. These trends motivated us to build an end-to-end audio-visual meeting transcription system to identify and address unsolved challenges. This report describes our learning, with focuses on overall architecture design, overlapped speech recognition, and audio-visual speaker diarization.",
"When designing meeting transcription systems, different constraints must be taken into account depending on targeted scenarios. In some cases, microphone arrays are used as an input device. If the names of expected meeting attendees are known beforehand, the transcription system should be able to provide each utterance with the true identity (e.g., “Alice” or “Bob”) instead of a randomly generated label like “Speaker1”. It is often required to show the transcription in near real time, which makes the task more challenging.",
"This work assumes the following scenario. We consider a scheduled meeting setting, where an organizer arranges a meeting in advance and sends invitations to attendees. The transcription system has access to the invitees' names. However, actual attendees may not completely match those invited to the meeting. The users are supposed to enroll themselves in the system beforehand so that their utterances in the meeting can be associated with their names. The meeting is recorded with an audio-visual device equipped with a seven-element circular microphone array and a fisheye camera. Transcriptions must be shown with a latency of up to a few seconds.",
"This paper investigates three key challenges.",
"",
"Speech overlaps: Recognizing overlapped speech has been one of the main challenges in meeting transcription with limited tangible progress. Numerous multi-channel speech separation methods were proposed based on independent component analysis or spatial clustering BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17. However, there was little successful effort to apply these methods to natural meetings. Neural network-based single-channel separation methods using techniques like permutation invariant training (PIT) BIBREF18 or deep clustering (DC) BIBREF19 are known to be vulnerable to various types of acoustic distortion, including reverberation and background noise BIBREF20. In addition, these methods were tested almost exclusively on small-scale segmented synthetic data and have not been applied to continuous conversational speech audio. Although the recently held CHiME-5 challenge helped the community make a step forward to a realistic setting, it still allowed the use of ground-truth speaker segments BIBREF21, BIBREF22.",
"We address this long-standing problem with a continuous speech separation (CSS) approach, which we proposed in our latest conference papers BIBREF23, BIBREF24. It is based on an observation that the maximum number of simultaneously active speakers is usually limited even in a large meeting. According to BIBREF25, two or fewer speakers are active for more than 98% of the meeting time. Thus, given continuous multi-channel audio observation, we generate a fixed number, say $N$, of time-synchronous signals. Each utterance is separated from overlapping voices and background noise. Then, the separated utterance is spawned from one of the $N$ output channels. For periods where the number of active speakers is fewer than $N$, the extra channels generate zeros. We show how continuous speech separation can fit in with an overall meeting transcription architecture to generate speaker-annotated transcripts.",
"Note that our speech separation system does not make use of a camera signal. While much progress has been made in audio-visual speech separation, the challenge of dealing with all kinds of image variations remains unsolved BIBREF26, BIBREF27, BIBREF28.",
"",
"Extensible framework: It is desirable that a single transcription system be able to support various application settings for both maintenance and scalability purposes. While this report focuses on the audio-visual setting, our broader work covers an audio-only setting as well as the scenario where no prior knowledge of meeting attendees is available. A modular and versatile architecture is desired to encompass these different settings.",
"To this end, we propose a framework called SRD, which stands for “separate, recognize, and diarize”, where CSS, speech recognition, and speaker diarization takes place in tandem. Performing CSS at the beginning allows the other modules to operate on overlap-free signals. Diarization is carried out after speech recognition because its implementation can vary significantly depending on the application settings. By choosing an appropriate diarization module for each setting, multiple use cases can be supported without changing the rest of the system. This architecture also allows transcriptions to be displayed in real time without speaker information. Speaker identities for each utterance may be shown after a couple of seconds.",
"",
"Audio-visual speaker diarization: Speaker diarization, a process of segmenting input audio and assigning speaker labels to the individual segments, can benefit from a camera signal. The phenomenal improvements that have been made to face detection and identification algorithms by convolutional neural networks (CNNs) BIBREF29, BIBREF30, BIBREF31 make the camera signal very appealing for speaker diarization. While much prior work assumes the batch processing scenario where the entire meeting recording can be processed multiple times, several studies deal with online processing BIBREF32, BIBREF33, BIBREF34, BIBREF35. However, no previous studies comprehensively address the challenges that one might encounter in real meetings. BIBREF32, BIBREF33 do not cope with speech overlaps. While the methods proposed in BIBREF34, BIBREF35 address the overlap issue, they rely solely on spatial cues and thus are not applicable when multiple speakers sit side by side.",
"Our diarization method handles overlapping utterances as well as co-located speakers by utilizing the time-frequency (TF) masks generated by CSS in speaker identification and sound source localization (SSL). In addition, several enhancements are made to face identification to improve robustness to image variations caused by face occlusions, extreme head pose, lighting conditions, and so on."
],
[
"",
"Our audio-visual diarization approach leverages spatial information and thus requires the audio and video angles to align. Because existing meeting corpora do not meet this requirement, we collected audio-visual English meeting recordings at Microsoft Speech and Language Group with an experimental recording device.",
"Our device has a cone shape and is approximately 30 centimeters high, slightly higher than a typical laptop. At the top of the device is a fisheye camera, providing a 360-degree field of view. Around the middle of the device, there is a horizontal seven-channel circular microphone array. The first microphone is placed at the center of the array board while the other microphones are arranged along the perimeter with an equal angle spacing. The board is about 10 cm wide.",
"The meetings were recorded in various conference rooms. The recording device was placed at a random position on a table in each room. We had meeting attendees sign up for the data collection program and go through audio and video enrollment steps. For each attendee, we obtained approximately a voice recording of 20 to 30 seconds and 10 or fewer close-up photos from different angles. A total of 26 meetings were recorded for the evaluation purpose. Each meeting had a different number of attendees, ranging from 2 to 11. The total number of unique participants were 62. No constraint was imposed on seating arrangements.",
"Two test sets were created: a gold standard test set and an extended test set. They were manually transcribed in different ways. The gold standard test set consisted of seven meetings and was 4.0 hours long in total. Those meetings were recorded both with the device described above and headset microphones. Professional transcribers were asked to provide initial transcriptions by using the headset and far-field audio recordings as well as the video. Then, automatic segmentation was performed with forced alignment. Finally, the segment boundaries and transcriptions were reviewed and corrected. Significant effort was made to fine-tune timestamps of the segmentation boundaries. While being very accurate, this transcription process requires headset recordings and therefore is not scalable. The extended test set contained 19 meetings totaling 6.4 hours. It covered a wider variety of conditions. These additional meetings were recorded only with the audio-visual device, i.e., the participants were not tethered to headsets. In addition to the audio-visual recordings, the transcribers were provided with outputs of our prototype system to bootstrap the transcription process."
],
[
"",
"Figure FIGREF1 shows a processing flow of the SRD framework for generating speaker-annotated transcripts. First, multi-input multi-output dereverberation is performed in real time BIBREF36. This is followed by CSS, which generates $N$ distinct signals (the diagram shows the case of $N$ being 2). Each signal has little overlapped speech, which allows for the use of conventional speech recognition and speaker diarization modules. After CSS, speech recognition is performed using each separated signal. This generates a sequence of speech events, where each event consists of a sequence of time-marked recognized words. The generated speech events are fed to a speaker diarization module to label each recognized word with the corresponding speaker identity. The speaker labels may be taken from a meeting invitee list or automatically generated by the system, like \"Speaker1\". Finally, the speaker-annotated transcriptions from the $N$ streams are merged.",
"",
"Comparison with other architectures: Most prior work in multi-microphone-based meeting transcription performs acoustic beamforming to generate a single enhanced audio signal, which is then processed with speaker diarization and speech recognition BIBREF37. This scheme fails in transcription in overlapped regions which typically make up more than 10% of the speech period. It is also noteworthy that beamforming and speaker diarization tend to suffer if speakers exchange turns quickly one after another even when their utterances do not overlap.",
"The system presented in BIBREF33 uses speaker-attributed beamforming, which generates a separate signal for each speaker. The speaker-attributed signals are processed with speech recognition to generate transcriptions for each speaker. This requires accurate speaker diarization to be performed in real time before beamforming, which is challenging in natural meetings.",
"By contrast, by performing CSS at the beginning, the SRD approach can handle overlaps of up to $N$ speakers without special overlap handling in speech recognition or speaker diarization. We also found that performing diarization after speech recognition resulted in more accurate transcriptions than the conventional way of performing diarization before speech recognition. One reason is that the “post-SR” diarization can take advantage of the improved speech activity detection capability offered by the speech recognition module. Also, the speaker change positions can be restricted to word boundaries. The same observation was reported in BIBREF9."
],
[
"",
"The objective of CSS is to render an input multi-channel signal containing overlaps into multiple overlap-free signals. Conceptually, CSS monitors the input audio stream; when overlapping utterances are found, it isolates these utterances and distributes them to different output channels. Non-overlapped utterances can be output from one of the channels. We want to achieve this in a streaming fashion without explicitly performing segmentation or overlap detection.",
"We perform CSS by using a speech separation network trained with PIT as we first proposed in BIBREF23. Figure FIGREF2 shows our proposed CSS processing flow for the case of $N=2$. First, single- and multi-channel features are extracted for each short time frame from an input seven-channel signal. The short time magnitude spectral coefficients of the center microphone and the inter-channel phase differences (IPDs) with reference to the center microphone are used as the single- and multi-channel features, respectively. The features are mean-normalized with a sliding window of four seconds and then fed to a speech separation network, which yields $N$ different speech masks as well as a noise mask for each TF bin. A bidirectional long short time memory (BLSTM) network is employed to leverage long term acoustic dependency. Finally, for each $n \\in \\lbrace 0, \\cdots , N-1\\rbrace $, the $n$th separated speech signal is generated by enhancing the speech component articulated by the $n$th speech TF masks while suppressing those represented by the other masks. To generate the TF masks in a streaming fashion with the bidirectional model, this is repeated every 0.8 seconds by using a 2.4-second segment. It should be noted that the speech separation network may change the order of the $N$ speech outputs when processing different data segments. In order to align the output order of the current segment with that of the previous segment, the best order is estimated by examining all possible permutations. The degree of “goodness” of each permutation is measured as the mean squared error between the masked magnitude spectrograms calculated over the frames shared by the two adjacent segments.",
"Given the $N+1$ TF masks ($N$ for speech, one for noise), we generate each of the $N$ output signals with mask-based minimum variance distortionless response (MVDR) beamforming BIBREF23. The MVDR filter for each output channel is updated periodically, every 0.8 seconds in our implementation. We follow the MVDR formula of equation (24) of BIBREF39. This scheme requires the spatial covariance matrices (SCMs) of the target and interference signals, where the interference signal means the sum of all non-target speakers and the background noise. To estimate these statistics, we continuously estimate the target SCMs for all the output channels as well as the noise SCM, with a refresh rate of 0.8 seconds. The noise SCM is computed by using a long window of 10 seconds, considering the fact that the background noise tends to be stationary in conference rooms. On the other hand, the target SCMs are computed with a relatively short window of 2.4 seconds. The interference SCM for the $n$th output channel is then obtained by adding up the noise SCM and all the target SCMs except that of the $n$th channel.",
"",
"Separation model details: Our speech separation model is comprised of a three-layer 1024-unit BLSTM. The input features are transformed by a 1024-unit projection layer with ReLU nonlinearity before being fed to the BLSTM. On top of the last BLSTM layer, there is a three-headed fully connected sigmoid layer assuming $N$to be 2, where each head produces TF masks for either speech or noise.",
"The model is trained on 567 hours of artificially generated noisy and reverberant speech mixtures. Source speech signals are taken from WSJ SI-284 and LibriSpeech. Each training sample is created as follows. First, the number of speakers (1 or 2) is randomly chosen. For the two-speaker case, the start and end times of each utterance is randomly determined so that we have a balanced combination of the four mixing configurations described in BIBREF40. The source signals are reverberated with the image method BIBREF41, mixed together in the two-speaker case, and corrupted by additive noise. The multi-channel additive noise signals are simulated by assuming a spherically isotropic noise field. Long training samples are clipped to 10 seconds. The model is trained to minimize the PIT-MSE between the source magnitude spectra and the masked versions of the observed magnitude spectra. As noted in BIBREF23, PIT is applied only to the two speech masks."
],
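The segment-stitching step described above, re-ordering the N speech masks of each new 2.4-second window so that they line up with the previous window, can be sketched as follows. All permutations are scored by the mean squared error between masked magnitude spectrograms over the shared frames; array shapes, names, and the toy driver are our assumptions.

```python
import itertools
import numpy as np

def align_segment(prev_masked_mag, new_masks, new_mag, n_shared):
    """Re-order the new segment's N speech masks to match the previous segment.

    prev_masked_mag: (N, n_shared, F) masked magnitudes of the previous segment,
                     restricted to the frames shared with the new segment.
    new_masks:       (N, T_new, F) speech masks of the new segment.
    new_mag:         (T_new, F) magnitude spectrogram of the new segment.
    """
    n_out = new_masks.shape[0]
    new_masked = new_masks[:, :n_shared, :] * new_mag[None, :n_shared, :]
    best_perm, best_mse = None, np.inf
    for perm in itertools.permutations(range(n_out)):
        mse = np.mean((prev_masked_mag - new_masked[list(perm)]) ** 2)
        if mse < best_mse:
            best_perm, best_mse = perm, mse
    return new_masks[list(best_perm)]

# Toy driver with random data: N = 2 outputs, 10 shared frames, 257 frequency bins.
rng = np.random.default_rng(0)
prev = rng.random((2, 10, 257))
masks, mag = rng.random((2, 30, 257)), rng.random((30, 257))
aligned_masks = align_segment(prev, masks, mag, n_shared=10)
```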
[
"",
"Following the SRD framework, each CSS output signal is processed with speech recognition and then speaker diarization. The input to speaker diarization is a speech event, a sequence of recognized words between silent periods in addition to the audio and video signals of the corresponding time segment. The speaker diarization module attributes each word to the person who is supposed to have spoken that word. Note that speaker diarization often refers to a process of assigning anonymous (or relative BIBREF42) speaker labels BIBREF43. Here, we use this term in a broader way: we use true identities, i.e., real names, when they are invited through the conferencing system.",
"Speaker diarization is often performed in two steps: segmentation and speaker attribution. The segmentation step decomposes the received speech event into speaker-homogeneous subsegments. Preliminary experiments showed that our system was not very sensitive to the choice of a segmentation method. This is because, even when two persons speak one after the other, their signals are likely to be assigned to different CSS output channels BIBREF40. In other words, CSS undertakes the segmentation to some extent. Therefore, in this paper, we simply use a hidden Markov model-based method that is similar to the one proposed in BIBREF32.",
"The speaker attribution step finds the most probable speaker ID for a given segment by using the audio and video signals. This is formalized as",
"$A$ and $V$ are the audio and video signals, respectively. $M$ is the set of the TF masks of the current CSS channel within the input segment. The speaker ID inventory, $\\mathcal {H}$, consists of the invited speaker names (e.g., `Alice' or `Bob') and anonymous `guest' IDs produced by the vision module (e.g., `Speaker1' or `Speaker2'). In what follows, we propose a model for combining face tracking, face identification, speaker identification, SSL, and the TF masks generated by the preceding CSS module to calculate the speaker ID posterior probability of equation (DISPLAY_FORM5). The integration of these complementary cues would make speaker attribution robust to real world challenges, including speech overlaps, speaker co-location, and the presence of guest speakers.",
"First, by treating the face position trajectory of the speaking person as a latent variable, the speaker ID posterior probability can be represented as",
"where $\\mathcal {R}$ includes all face position trajectories detected by the face tracking module within the input period. We call a face position trajectory a tracklet. The joint posterior probability on the right hand side (RHS) can be factorized as",
"The RHS first term, or the tracklet-conditioned speaker ID posterior, can be further decomposed as",
"The RHS first term, calculating the speaker ID posterior given the video signal and the tracklet calls for a face identification model because the video signal and the tracklet combine to specify a single speaker's face. On the other hand, the likelihood term on the RHS can be calculated as",
"where we have assumed the spatial and magnitude features of the audio, represented as $A_s$ and $A_m$, respectively, to be independent of each other. The RHS first term, $p(A_s | h; M)$, is a spatial speaker model, measuring the likelihood of speaker $h$ being active given spatial features $A_s$. We make no assumption on the speaker positions. Hence, $p(A_s | h; M)$ is constant and can be ignored. The RHS second term, $p(A_m | h; M)$, is a generative model for speaker identification.",
"Returning to (DISPLAY_FORM8), the RHS second term, describing the probability of the speaking person's face being $r$ (recall that each tracklet captures a single person's face), may be factorized as",
"The first term is the likelihood of tracklet $r$ generating a sound with spatial features $A_s$ and therefore related to SSL. The second term is the probability with which the tracklet $r$ is active given the audio magnitude features and the video. Calculating this requires lip sync to be performed for each tracklet, which is hard in our application due to low resolution resulting from speaker-to-camera distances and compression artifacts. Thus, we ignore this term.",
"Putting the above equations together, the speaker-tracklet joint posterior needed in (DISPLAY_FORM7) can be obtained as",
"where the ingredients of the RHS relate to face identification, speaker identification, and SSL, respectively, in the order of appearance. The rest of this section describes our implementations of these models."
],
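The display equations referenced in this section (DISPLAY_FORM5 through DISPLAY_FORM8) were dropped during extraction. From the surrounding definitions, the chain of factorizations appears to reduce to the form below; this is our hedged reconstruction of the notation, not the authors' exact equations.

```latex
% Hedged reconstruction of the speaker-attribution model from the surrounding text.
\hat{h} = \operatorname*{arg\,max}_{h \in \mathcal{H}} P(h \mid A, V; M),
\qquad
P(h \mid A, V; M) = \sum_{r \in \mathcal{R}} P(h, r \mid A, V; M),
\qquad
P(h, r \mid A, V; M) \propto
\underbrace{P(h \mid r, V)}_{\text{face identification}}\,
\underbrace{p(A_m \mid h; M)}_{\text{speaker identification}}\,
\underbrace{p(A_s \mid r; M)}_{\text{SSL}} .
```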
[
"",
"The SSL generative model, $p(A_s | r; M)$, is defined by using a complex angular central Gaussian model (CACGM) BIBREF45. The SSL generative model can be written as follows:",
"where $\\omega $ is a discrete-valued latent variable representing the sound direction. It should be noted that the strongest sound direction may be mismatched with the face direction to a varying degree due to sound reflections on tables, diffraction on obstacles, face orientation variability, and so on. $P(\\omega | r)$ is introduced to represent this mismatch and modeled as a uniform distribution with a width of 25 degrees centered at the face position for $r$. The likelihood term, $p(A_s | \\omega ; M)$, is modeled with the CACGM and the log likelihood reduces to the following form BIBREF24: $ \\log p(A_s | \\omega ;M) = -\\sum _{t,f} m_{t,f} \\log (1 - || \\mathbf {z}_{t,f}^H \\mathbf {h}_{f,\\omega } ||^2 / (1 + \\epsilon ) ), $ where $\\mathbf {z}_{t,f}$ is a magnitude-normalized multi-channel observation vector constituting $A_s$, $m_{t,f}$ a TF mask, $\\mathbf {h}_{f, \\omega }$ a steering vector corresponding to sound direction $\\omega $, and $\\epsilon $ a small flooring constant."
],
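The masked SSL log-likelihood given above translates almost directly into code. The sketch below evaluates it on a grid of candidate directions and then marginalizes over the uniform 25-degree window P(omega | r) around a tracklet's face direction; the array shapes, the direction grid, and the unit-norm steering vectors are assumptions on our side.

```python
import numpy as np

def ssl_log_likelihood(Z, masks, steering, eps=1e-3):
    """log p(A_s | omega; M) = -sum_{t,f} m_{t,f} log(1 - |z_{t,f}^H h_{f,omega}|^2 / (1 + eps)).

    Z:        (T, F, C) magnitude-normalized multi-channel observation vectors.
    masks:    (T, F) TF masks of the current CSS output channel.
    steering: (D, F, C) unit-norm steering vectors for D candidate directions.
    Returns a length-D vector of masked log-likelihoods.
    """
    inner = np.einsum("tfc,dfc->dtf", Z.conj(), steering)   # z^H h for every direction
    coh = np.abs(inner) ** 2
    return -np.sum(masks[None] * np.log(1.0 - coh / (1.0 + eps)), axis=(1, 2))

def tracklet_ssl_score(Z, masks, steering, directions_deg, face_deg, width_deg=25.0):
    """log p(A_s | r; M): marginalize over a uniform window of `width_deg` degrees
    centered on the tracklet's face direction (assumes the grid covers the window)."""
    loglik = ssl_log_likelihood(Z, masks, steering)
    diff = np.abs((np.asarray(directions_deg) - face_deg + 180.0) % 360.0 - 180.0)
    window = (diff <= width_deg / 2.0).astype(float)
    window /= window.sum()
    m = loglik.max()
    return m + np.log(np.sum(window * np.exp(loglik - m)))
```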
[
"",
"As regards the speaker identification model, $p(A_m | h; M)$, we squash the observations to a fixed-dimensional representation, i.e. speaker embedding. The proximity in the embedding space measures the similarity between speakers.",
"Our model consists of multiple convolutional layers augmented by residual blocks BIBREF46 and has a bottleneck layer. The model is trained to reduce classification errors for a set of known identities. For inference, the output layer of the model is removed and the activation of the bottleneck layer is extracted as a speaker embedding, which is expected to generalize to any speakers beyond those included in the training set. In our system, the speaker embedding has 128 dimensions. VoxCeleb corpus BIBREF47, BIBREF48 is used for training. Our system was confirmed to outperform the state-of-the-art on the VoxCeleb test set.",
"We assume an embedding vector of each speaker to follow a von Mises-Fisher distribution with a shared concentration parameter. If we ignore a constant term, this leads to the following equation: $\\log p(A_m | h; M) = \\mathbf {p}_h^T \\mathbf {d}_M$, where $\\mathbf {d}_M$ is the embedding extracted from the signal enhanced with the TF masks in $M$, and $\\mathbf {p}_h$ is speaker $h$'s mean direction in the embedding space. This is equivalent to measuring the proximity of the input audio segment to speaker $h$ by using a cosine similarity in the embedding space BIBREF49.",
"The mean direction of a speaker can be regarded as a voice signature of that person. It is calculated as follows. When speaker $h$ is an invited speaker, the system has the enrollment audio of this person. Embedding vectors are extracted from the enrollment sound with a sliding window and averaged to produce the mean direction vector. For a guest speaker detected by the vision module, no enrollment audio is available at the beginning. The speaker log likelihood, $\\log p (A_m | h; M)$, is assumed to have a constant value which is determined by a separate speaker verification experiment on a development set. For both cases, $\\mathbf {p}_h$, the voice signature of speaker $h$, is updated during the meeting every time a new segment is attributed to that person."
],
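A small sketch of the scoring and online signature update described above: log p(A_m | h; M) is the inner product between the unit-length segment embedding d_M and the speaker's mean direction p_h, guests without enrollment receive a fixed constant score, and p_h is refreshed whenever a segment is attributed to that speaker. The specific update rule (a re-normalized running sum) and the guest constant are our assumptions.

```python
import numpy as np

def unit(v):
    return v / (np.linalg.norm(v) + 1e-12)

class VoiceSignatures:
    """Mean-direction voice signatures with cosine-similarity scoring."""

    def __init__(self, enrollment_embeddings, guest_logprob=-0.2):
        # Mean direction per invited speaker, from sliding-window enrollment embeddings.
        self.signatures = {h: unit(np.mean(e, axis=0)) for h, e in enrollment_embeddings.items()}
        self.guest_logprob = guest_logprob  # constant score for guests with no enrollment yet

    def log_score(self, speaker, segment_embedding):
        if speaker not in self.signatures:
            return self.guest_logprob
        return float(self.signatures[speaker] @ unit(segment_embedding))

    def update(self, speaker, segment_embedding):
        """Refresh p_h after a segment has been attributed to `speaker` (assumed rule)."""
        d = unit(segment_embedding)
        if speaker in self.signatures:
            self.signatures[speaker] = unit(self.signatures[speaker] + d)
        else:
            self.signatures[speaker] = d

# Usage with random 128-dimensional embeddings (dimensionality taken from the text).
rng = np.random.default_rng(0)
sig = VoiceSignatures({"Alice": rng.normal(size=(5, 128))})
seg = rng.normal(size=128)
print(sig.log_score("Alice", seg))
sig.update("Alice", seg)
```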
[
"",
"Our vision processing module (see Fig. FIGREF1) locates and identifies all persons in a room for each frame captured by the camera. The unconstrained meeting scenario involves many challenges, including face occlusions, extreme head pose, lighting conditions, compression artifacts, low resolution due to device-to-person distances, motion blur. Therefore, any individual frame may not contain necessary information. For example, a face may not be detectable in some frames. Even if it is detectable, it may not be recognizable.",
"To handle this variability, we integrate information across time using face tracking as implied by our formulation of $P(h | r, V)$, which requires face identification to be performed only at a tracklet level. Our face tracking uses face detection and low-level tracking to maintain a set of tracklets, where each tracklet is defined as a sequence of faces in time that belong to the same person. We use a method similar to that in BIBREF50 with several adaptions to our specific setting, such as exploiting the stationarity of the camera for detecting motion, performing the low-level tracking by color based mean-shift instead of gray-level based normalized correlation, tuning the algorithm to minimize the risk of tracklet mergers (which in our context are destructive), etc. Also, the faces in each tracklet are augmented with attributes such as face position, dimensions, head pose, and face feature vectors. The tracklet set defines $\\mathcal {R}$ of equation (DISPLAY_FORM7).",
"Face identification calculates person ID posterior probabilities for each tracklet. Guest IDs (e.g., 'Speaker1') are produced online, each representing a unique person in the meeting who is not on the invitee list. We utilize a discriminative face embedding which converts face images into fixed-dimensional feature vectors, or 128-dimensional vectors obtained as output layer activations of a convolutional neural network. For the face embedding and detection components, we use the algorithms from Microsoft Cognitive Services Face API BIBREF51, BIBREF52. Face identification of a tracklet is performed by comparing the set of face features extracted from its face instances, to the set of features from a gallery of each person's faces. For invited people, the galleries are taken from their enrollment videos, while for guests, the gallery pictures are accumulated online from the meeting video. We next describe our set-to-set similarity measure designed to perform this comparison.",
"Our set-to-set similarity is designed to utilize information from multiple frames while remaining robust to head pose, lighting conditions, blur and other misleading factors. We follow the matched background similarity (MBGS) approach of BIBREF53 and make crucial adaptations to it that increase accuracy significantly for our problem. As with MBGS, we train a discriminative classifier for each identity $h$ in $\\mathcal {H}$. The gallery of $h$ is used as positive examples, while a separate fixed background set $B$ is used as negative examples. This approach has two important benefits. First, it allows us to train a classifier adapted to a specific person. Second, the use of a background set $B$ lets us account for misleading sources of variation e.g. if a blurry or poorly lit face from $B$ is similar to one of the positive examples, the classifier's decision boundary can be chosen accordingly. During meeting initialization, an support vector machine (SVM) classifier is trained to distinguish between the positive and negative sets for each invitee. At test time, we are given a tracklet $T=\\big \\lbrace \\mathbf {t}_1,...,\\mathbf {t}_N\\big \\rbrace $ represented as a set of face feature vectors $\\mathbf {t}_i\\in {\\mathbb {R}^d}$, and we classify each member $\\mathbf {t}_i$ with the classifier of each identity $h$ and obtain a set of classification confidences $\\big \\lbrace s\\big (T\\big )_{i,h}\\big \\rbrace $. Hereinafter, we omit argument $T$ for brevity. We now aggregate the scores of each identity to obtain the final identity scores $s_h=\\text{stat}\\big (\\big \\lbrace s_{i,h}\\big \\rbrace _{i=1}^N\\big )$ where $\\text{stat}(\\cdot )$ represents aggregation by e.g. taking the mean confidence. When $s=\\max _{h} s_h$ is smaller than a threshold, a new guest identity is added to $\\mathcal {H}$, where the classifier for this person is trained by using $T$ as positive examples. $\\lbrace s_h\\rbrace _{h \\in \\mathcal {H}}$ is converted to a set of posterior probabilities $\\lbrace P(h | r, V)\\rbrace _{h \\in \\mathcal {H}}$ with a trained regression model.",
"The adaptations we make over the original MBGS are as follows.",
"During SVM training we place a high weight over negative examples. The motivation here is to force training to classify regions of confusion as negatives e.g. if blurry positive and negative images get mapped to the same region in feature space we prefer to have negative confidence in this region.",
"We set $\\text{stat}(\\cdot )$ to be the function returning the 95th percentile instead of the originally proposed mean function. The effect of this together with the previous bullet is that the final identity score is impacted by the most confident face instances in the tracklet and not the confusing ones, thereby mining the highest quality frames.",
"We augment an input feature vector with the cosine similarity score between the input and a face signature, which results in a classification function of the form of $\\langle \\mathbf {x},\\mathbf {w}^h_{1:d} \\rangle + w^h_{d+1}\\cos \\big (\\mathbf {x}, \\mathbf {q}_h\\big )-b^h,$ where $\\mathbf {x}\\in {\\mathbb {R}^d}$, $\\mathbf {q}_h$ is $h$'s face signature obtained as the mean of the gallery face features of $h$, $\\text{cos}(\\cdot )$ is the cosine similarity, and $\\big (\\mathbf {w}^h,b^h\\big )$ are linear weights and bias. We note that more complex rules tend to overfit due to the small size of enrollment, which typically consists of no more than 10 images."
],
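The adapted MBGS scoring can be sketched with scikit-learn as follows: one linear SVM per identity trained on gallery (positive) versus background (negative) faces, a heavier weight on negatives, each face feature augmented with its cosine similarity to the identity's face signature, and per-face confidences aggregated by the 95th percentile. All hyper-parameter values and the random toy data are illustrative, not the authors' settings.

```python
import numpy as np
from sklearn.svm import LinearSVC

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def augment(X, signature):
    """Append each face feature's cosine similarity to the identity's face signature."""
    sims = np.array([[cosine(x, signature)] for x in X])
    return np.hstack([X, sims])

def train_identity_classifier(gallery, background, neg_weight=5.0):
    """One classifier per identity: gallery faces are positives, a fixed background
    set gives negatives, and negatives receive a higher class weight (illustrative)."""
    signature = gallery.mean(axis=0)
    X = np.vstack([augment(gallery, signature), augment(background, signature)])
    y = np.array([1] * len(gallery) + [0] * len(background))
    clf = LinearSVC(class_weight={0: neg_weight, 1: 1.0}).fit(X, y)
    return clf, signature

def tracklet_score(clf, signature, tracklet_features):
    """Aggregate per-face confidences with the 95th percentile rather than the mean."""
    scores = clf.decision_function(augment(tracklet_features, signature))
    return float(np.percentile(scores, 95))

# Toy usage with random 128-dimensional face embeddings (dimensionality from the text).
rng = np.random.default_rng(0)
gallery, background = rng.normal(size=(8, 128)), rng.normal(size=(200, 128))
clf, sig = train_identity_classifier(gallery, background)
print(tracklet_score(clf, sig, rng.normal(size=(20, 128))))
```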
[
"",
"We now report experimental results for the data described in Section SECREF2. We first investigate certain aspects of the system by using the gold standard test set. Then, we show the results on the extended test set. The WERs were calculated with the NIST asclite tool. Speaker-attributed (SA) WERs were also calculated by scoring system outputs for individual speakers against the corresponding speakers' reference transcriptions.",
"For speech recognition, we used a conventional hybrid system, consisting of a latency-controlled bidirectional long short-term memory (LSTM) acoustic model (AM) BIBREF54 and a weighted finite state transducer decoder. Our AM was trained on 33K hours of in-house audio data, including close-talking, distant-microphone, and artificially noise-corrupted speech. Decoding was performed with a 5-gram language model (LM) trained on 100B words. Whenever a silence segment longer than 300 ms was detected, the decoder generated an n-best list, which was rescored with an LSTM-LM which consisted of two 2048-unit recurrent layers and was trained on 2B words. To help calibrate the difficulty of the task, we note that the same models were used in our recent paper BIBREF55, where results on NIST RT-07 were shown.",
"The first row of Table TABREF22 shows the proposed system's WERs for the gold standard test set. The WERs were calculated over all segments as well as those not containing overlapped periods. The second row shows the WERs of a conventional approach using single-output beamforming. Specifically, we replaced CSS in Fig. FIGREF1 by a differential beamformer which was optimized for our device and ran speech recognition on the beamformed signal. In BIBREF56, we verified that our beamformer slightly outperformed a state-of-the-art mask-based MVDR beamformer. The proposed system achieved a WER of 18.7%, outperforming the system without CSS by 3.6 percentage points, or 16.1% relative. For single-speaker segments, the two systems yielded similar WERs, close to 15%. From these results, we can see that CSS improved the recognition accuracy for overlapped segments, which accounted for about 50% of all the segments.",
"Table TABREF22 shows SA-WERs for two different diarization configurations and two different experiment setups. In the first setup, we assumed all attendees were invited to the meetings and therefore their face and voice signatures were available in advance. In the second setup, we used precomputed face and voice signatures for 50% of the attendees and the other speakers were treated as `guests'. A diarization system using only face identification and SSL may be regarded as a baseline as this approach was widely used in previous audio-visual diarization studies BIBREF33, BIBREF34, BIBREF35. The results show that the use of speaker identification substantially improved the speaker attribution accuracy. The SA-WERs were improved by 11.6% and 6.0% when the invited/guest ratios were 100/0 and 50/50, respectively. The small differences between the SA-WERs from Table TABREF22 and the WER from Table TABREF22 indicate very accurate speaker attribution.",
"One noteworthy observation is that, if only face identification and SSL were used, a lower SA-WER was achieved when only 50% of the attendees were known to the system. This was because matching incoming cropped face pictures against face snapshots taken separately under different conditions (invited speakers) tended to be more difficult than performing the matching against face images extracted from the same meeting (guest speakers).",
"Finally, Table TABREF22 shows the WER and SA-WER of the proposed system on the extended test set. For this experiment, we introduced approximations to the vision processing module to keep the real time factor smaller than one regardless of the number of faces detected. We can still observe similar WER and SA-WER numbers to those seen in the previous experiments, indicating the robustness of our proposed system."
],
[
"",
"This paper described an online audio-visual meeting transcription system that can handle overlapped speech and achieve accurate diarization by combining multiple cues from different modalities. The SRD meeting transcription framework was proposed to take advantage of CSS. To the best of our knowledge, this is the first paper that demonstrated the benefit of speech separation in an end-to-end meeting transcription setting. As for diarization, a new audio-visual approach was proposed, which consumes the results of face tracking, face identification, SSL, and speaker identification as well as the TF masks generated by CSS for robust speaker attribution. Our improvements to face identification were also described. In addition to these technical contributions, we believe our results also helped clarify where the current technology stands."
],
[
"",
"We thank Mike Emonts and Candace McKenna for data collection; Michael Zeng, Andreas Stolcke, and William Hinthorn for discussions; Microsoft Face Team for sharing their algorithms."
]
],
"section_name": [
"Introduction",
"Device and Data",
"Separate-Recognize-Diarize Framework",
"Continuous Speech Separation",
"Speaker Diarization",
"Speaker Diarization ::: Sound source localization",
"Speaker Diarization ::: Speaker identification",
"Speaker Diarization ::: Face tracking and identification",
"Experimental Results",
"Conclusion",
"Acknowledgement"
]
} | {
"answers": [
{
"annotation_id": [
"678907d17834bcdac0f0710a6b4a06edae1ff0fc",
"d9d930ea80bc9a6c7eb47a6063e7d9aab519590b"
],
"answer": [
{
"evidence": [
"Our vision processing module (see Fig. FIGREF1) locates and identifies all persons in a room for each frame captured by the camera. The unconstrained meeting scenario involves many challenges, including face occlusions, extreme head pose, lighting conditions, compression artifacts, low resolution due to device-to-person distances, motion blur. Therefore, any individual frame may not contain necessary information. For example, a face may not be detectable in some frames. Even if it is detectable, it may not be recognizable.",
"To handle this variability, we integrate information across time using face tracking as implied by our formulation of $P(h | r, V)$, which requires face identification to be performed only at a tracklet level. Our face tracking uses face detection and low-level tracking to maintain a set of tracklets, where each tracklet is defined as a sequence of faces in time that belong to the same person. We use a method similar to that in BIBREF50 with several adaptions to our specific setting, such as exploiting the stationarity of the camera for detecting motion, performing the low-level tracking by color based mean-shift instead of gray-level based normalized correlation, tuning the algorithm to minimize the risk of tracklet mergers (which in our context are destructive), etc. Also, the faces in each tracklet are augmented with attributes such as face position, dimensions, head pose, and face feature vectors. The tracklet set defines $\\mathcal {R}$ of equation (DISPLAY_FORM7).",
"Face identification calculates person ID posterior probabilities for each tracklet. Guest IDs (e.g., 'Speaker1') are produced online, each representing a unique person in the meeting who is not on the invitee list. We utilize a discriminative face embedding which converts face images into fixed-dimensional feature vectors, or 128-dimensional vectors obtained as output layer activations of a convolutional neural network. For the face embedding and detection components, we use the algorithms from Microsoft Cognitive Services Face API BIBREF51, BIBREF52. Face identification of a tracklet is performed by comparing the set of face features extracted from its face instances, to the set of features from a gallery of each person's faces. For invited people, the galleries are taken from their enrollment videos, while for guests, the gallery pictures are accumulated online from the meeting video. We next describe our set-to-set similarity measure designed to perform this comparison.",
"Our set-to-set similarity is designed to utilize information from multiple frames while remaining robust to head pose, lighting conditions, blur and other misleading factors. We follow the matched background similarity (MBGS) approach of BIBREF53 and make crucial adaptations to it that increase accuracy significantly for our problem. As with MBGS, we train a discriminative classifier for each identity $h$ in $\\mathcal {H}$. The gallery of $h$ is used as positive examples, while a separate fixed background set $B$ is used as negative examples. This approach has two important benefits. First, it allows us to train a classifier adapted to a specific person. Second, the use of a background set $B$ lets us account for misleading sources of variation e.g. if a blurry or poorly lit face from $B$ is similar to one of the positive examples, the classifier's decision boundary can be chosen accordingly. During meeting initialization, an support vector machine (SVM) classifier is trained to distinguish between the positive and negative sets for each invitee. At test time, we are given a tracklet $T=\\big \\lbrace \\mathbf {t}_1,...,\\mathbf {t}_N\\big \\rbrace $ represented as a set of face feature vectors $\\mathbf {t}_i\\in {\\mathbb {R}^d}$, and we classify each member $\\mathbf {t}_i$ with the classifier of each identity $h$ and obtain a set of classification confidences $\\big \\lbrace s\\big (T\\big )_{i,h}\\big \\rbrace $. Hereinafter, we omit argument $T$ for brevity. We now aggregate the scores of each identity to obtain the final identity scores $s_h=\\text{stat}\\big (\\big \\lbrace s_{i,h}\\big \\rbrace _{i=1}^N\\big )$ where $\\text{stat}(\\cdot )$ represents aggregation by e.g. taking the mean confidence. When $s=\\max _{h} s_h$ is smaller than a threshold, a new guest identity is added to $\\mathcal {H}$, where the classifier for this person is trained by using $T$ as positive examples. $\\lbrace s_h\\rbrace _{h \\in \\mathcal {H}}$ is converted to a set of posterior probabilities $\\lbrace P(h | r, V)\\rbrace _{h \\in \\mathcal {H}}$ with a trained regression model.",
"The SSL generative model, $p(A_s | r; M)$, is defined by using a complex angular central Gaussian model (CACGM) BIBREF45. The SSL generative model can be written as follows:",
"Speaker Diarization ::: Sound source localization",
"$A$ and $V$ are the audio and video signals, respectively. $M$ is the set of the TF masks of the current CSS channel within the input segment. The speaker ID inventory, $\\mathcal {H}$, consists of the invited speaker names (e.g., `Alice' or `Bob') and anonymous `guest' IDs produced by the vision module (e.g., `Speaker1' or `Speaker2'). In what follows, we propose a model for combining face tracking, face identification, speaker identification, SSL, and the TF masks generated by the preceding CSS module to calculate the speaker ID posterior probability of equation (DISPLAY_FORM5). The integration of these complementary cues would make speaker attribution robust to real world challenges, including speech overlaps, speaker co-location, and the presence of guest speakers.",
"First, by treating the face position trajectory of the speaking person as a latent variable, the speaker ID posterior probability can be represented as",
"where $\\mathcal {R}$ includes all face position trajectories detected by the face tracking module within the input period. We call a face position trajectory a tracklet. The joint posterior probability on the right hand side (RHS) can be factorized as",
"The RHS first term, or the tracklet-conditioned speaker ID posterior, can be further decomposed as",
"The RHS first term, calculating the speaker ID posterior given the video signal and the tracklet calls for a face identification model because the video signal and the tracklet combine to specify a single speaker's face. On the other hand, the likelihood term on the RHS can be calculated as",
"where we have assumed the spatial and magnitude features of the audio, represented as $A_s$ and $A_m$, respectively, to be independent of each other. The RHS first term, $p(A_s | h; M)$, is a spatial speaker model, measuring the likelihood of speaker $h$ being active given spatial features $A_s$. We make no assumption on the speaker positions. Hence, $p(A_s | h; M)$ is constant and can be ignored. The RHS second term, $p(A_m | h; M)$, is a generative model for speaker identification.",
"Returning to (DISPLAY_FORM8), the RHS second term, describing the probability of the speaking person's face being $r$ (recall that each tracklet captures a single person's face), may be factorized as",
"The first term is the likelihood of tracklet $r$ generating a sound with spatial features $A_s$ and therefore related to SSL. The second term is the probability with which the tracklet $r$ is active given the audio magnitude features and the video. Calculating this requires lip sync to be performed for each tracklet, which is hard in our application due to low resolution resulting from speaker-to-camera distances and compression artifacts. Thus, we ignore this term.",
"Putting the above equations together, the speaker-tracklet joint posterior needed in (DISPLAY_FORM7) can be obtained as",
"where the ingredients of the RHS relate to face identification, speaker identification, and SSL, respectively, in the order of appearance. The rest of this section describes our implementations of these models."
],
"extractive_spans": [],
"free_form_answer": "Face tracking is performed in an automatic tracklet module, face identification is performed by creating a face embedding from the output of a CNN, the embedding is then compared to a gallery of each person's face using a discriminative classifier (SVM) and localization is modelled with a complex angular central Gaussian model. All are merged in a statistical model. ",
"highlighted_evidence": [
"The unconstrained meeting scenario involves many challenges, including face occlusions, extreme head pose, lighting conditions, compression artifacts, low resolution due to device-to-person distances, motion blur. Therefore, any individual frame may not contain necessary information. For example, a face may not be detectable in some frames. Even if it is detectable, it may not be recognizable.\n\nTo handle this variability, we integrate information across time using face tracking as implied by our formulation of $P(h | r, V)$, which requires face identification to be performed only at a tracklet level. Our face tracking uses face detection and low-level tracking to maintain a set of tracklets, where each tracklet is defined as a sequence of faces in time that belong to the same person. We use a method similar to that in BIBREF50 with several adaptions to our specific setting, such as exploiting the stationarity of the camera for detecting motion, performing the low-level tracking by color based mean-shift instead of gray-level based normalized correlation, tuning the algorithm to minimize the risk of tracklet mergers (which in our context are destructive), etc. Also, the faces in each tracklet are augmented with attributes such as face position, dimensions, head pose, and face feature vectors. The tracklet set defines $\\mathcal {R}$ of equation (DISPLAY_FORM7).",
"ace identification calculates person ID posterior probabilities for each tracklet. Guest IDs (e.g., 'Speaker1') are produced online, each representing a unique person in the meeting who is not on the invitee list. We utilize a discriminative face embedding which converts face images into fixed-dimensional feature vectors, or 128-dimensional vectors obtained as output layer activations of a convolutional neural network. ",
"Face identification calculates person ID posterior probabilities for each tracklet. Guest IDs (e.g., 'Speaker1') are produced online, each representing a unique person in the meeting who is not on the invitee list. We utilize a discriminative face embedding which converts face images into fixed-dimensional feature vectors, or 128-dimensional vectors obtained as output layer activations of a convolutional neural network. ",
" For the face embedding and detection components, we use the algorithms from Microsoft Cognitive Services Face API BIBREF51, BIBREF52. Face identification of a tracklet is performed by comparing the set of face features extracted from its face instances, to the set of features from a gallery of each person's faces.",
"Our set-to-set similarity is designed to utilize information from multiple frames while remaining robust to head pose, lighting conditions, blur and other misleading factors. We follow the matched background similarity (MBGS) approach of BIBREF53 and make crucial adaptations to it that increase accuracy significantly for our problem. As with MBGS, we train a discriminative classifier for each identity $h$ in $\\mathcal {H}$. ",
"During meeting initialization, an support vector machine (SVM) classifier is trained to distinguish between the positive and negative sets for each invitee. ",
"At test time, we are given a tracklet $T=\\big \\lbrace \\mathbf {t}_1,...,\\mathbf {t}_N\\big \\rbrace $ represented as a set of face feature vectors $\\mathbf {t}_i\\in {\\mathbb {R}^d}$, and we classify each member $\\mathbf {t}_i$ with the classifier of each identity $h$ and obtain a set of classification confidences $\\big \\lbrace s\\big (T\\big )_{i,h}\\big \\rbrace $. Hereinafter, we omit argument $T$ for brevity. We now aggregate the scores of each identity to obtain the final identity scores $s_h=\\text{stat}\\big (\\big \\lbrace s_{i,h}\\big \\rbrace _{i=1}^N\\big )$ where $\\text{stat}(\\cdot )$ represents aggregation by e.g. taking the mean confidence. When $s=\\max _{h} s_h$ is smaller than a threshold, a new guest identity is added to $\\mathcal {H}$, where the classifier for this person is trained by using $T$ as positive examples. $\\lbrace s_h\\rbrace _{h \\in \\mathcal {H}}$ is converted to a set of posterior probabilities $\\lbrace P(h | r, V)\\rbrace _{h \\in \\mathcal {H}}$ with a trained regression model.",
"The SSL generative model, $p(A_s | r; M)$, is defined by using a complex angular central Gaussian model (CACGM) BIBREF45. ",
"Speaker Diarization ::: Sound source localization\nThe SSL generative model, $p(A_s | r; M)$, is defined by using a complex angular central Gaussian model (CACGM) BIBREF45. ",
"In what follows, we propose a model for combining face tracking, face identification, speaker identification, SSL, and the TF masks generated by the preceding CSS module to calculate the speaker ID posterior probability of equation (DISPLAY_FORM5). ",
"First, by treating the face position trajectory of the speaking person as a latent variable, the speaker ID posterior probability can be represented as\n\nwhere $\\mathcal {R}$ includes all face position trajectories detected by the face tracking module within the input period. We call a face position trajectory a tracklet. The joint posterior probability on the right hand side (RHS) can be factorized as\n\nThe RHS first term, or the tracklet-conditioned speaker ID posterior, can be further decomposed as\n\nThe RHS first term, calculating the speaker ID posterior given the video signal and the tracklet calls for a face identification model because the video signal and the tracklet combine to specify a single speaker's face. On the other hand, the likelihood term on the RHS can be calculated as\n\nwhere we have assumed the spatial and magnitude features of the audio, represented as $A_s$ and $A_m$, respectively, to be independent of each other. The RHS first term, $p(A_s | h; M)$, is a spatial speaker model, measuring the likelihood of speaker $h$ being active given spatial features $A_s$. We make no assumption on the speaker positions. Hence, $p(A_s | h; M)$ is constant and can be ignored. The RHS second term, $p(A_m | h; M)$, is a generative model for speaker identification.\n\nReturning to (DISPLAY_FORM8), the RHS second term, describing the probability of the speaking person's face being $r$ (recall that each tracklet captures a single person's face), may be factorized as\n\nThe first term is the likelihood of tracklet $r$ generating a sound with spatial features $A_s$ and therefore related to SSL. The second term is the probability with which the tracklet $r$ is active given the audio magnitude features and the video. Calculating this requires lip sync to be performed for each tracklet, which is hard in our application due to low resolution resulting from speaker-to-camera distances and compression artifacts. Thus, we ignore this term.\n\nPutting the above equations together, the speaker-tracklet joint posterior needed in (DISPLAY_FORM7) can be obtained as\n\nwhere the ingredients of the RHS relate to face identification, speaker identification, and SSL, respectively, in the order of appearance."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Audio-visual speaker diarization: Speaker diarization, a process of segmenting input audio and assigning speaker labels to the individual segments, can benefit from a camera signal. The phenomenal improvements that have been made to face detection and identification algorithms by convolutional neural networks (CNNs) BIBREF29, BIBREF30, BIBREF31 make the camera signal very appealing for speaker diarization. While much prior work assumes the batch processing scenario where the entire meeting recording can be processed multiple times, several studies deal with online processing BIBREF32, BIBREF33, BIBREF34, BIBREF35. However, no previous studies comprehensively address the challenges that one might encounter in real meetings. BIBREF32, BIBREF33 do not cope with speech overlaps. While the methods proposed in BIBREF34, BIBREF35 address the overlap issue, they rely solely on spatial cues and thus are not applicable when multiple speakers sit side by side.",
"$A$ and $V$ are the audio and video signals, respectively. $M$ is the set of the TF masks of the current CSS channel within the input segment. The speaker ID inventory, $\\mathcal {H}$, consists of the invited speaker names (e.g., `Alice' or `Bob') and anonymous `guest' IDs produced by the vision module (e.g., `Speaker1' or `Speaker2'). In what follows, we propose a model for combining face tracking, face identification, speaker identification, SSL, and the TF masks generated by the preceding CSS module to calculate the speaker ID posterior probability of equation (DISPLAY_FORM5). The integration of these complementary cues would make speaker attribution robust to real world challenges, including speech overlaps, speaker co-location, and the presence of guest speakers."
],
"extractive_spans": [],
"free_form_answer": "Input in ML model",
"highlighted_evidence": [
"Audio-visual speaker diarization: Speaker diarization, a process of segmenting input audio and assigning speaker labels to the individual segments, can benefit from a camera signal. The phenomenal improvements that have been made to face detection and identification algorithms by convolutional neural networks (CNNs) BIBREF29, BIBREF30, BIBREF31 make the camera signal very appealing for speaker diarization.",
"$A$ and $V$ are the audio and video signals, respectively. $M$ is the set of the TF masks of the current CSS channel within the input segment. The speaker ID inventory, $\\mathcal {H}$, consists of the invited speaker names (e.g., `Alice' or `Bob') and anonymous `guest' IDs produced by the vision module (e.g., `Speaker1' or `Speaker2'). In what follows, we propose a model for combining face tracking, face identification, speaker identification, SSL, and the TF masks generated by the preceding CSS module to calculate the speaker ID posterior probability of equation (DISPLAY_FORM5). The integration of these complementary cues would make speaker attribution robust to real world challenges, including speech overlaps, speaker co-location, and the presence of guest speakers."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"24a37dcad8ba3ab540f78b26c0141000caf1dc96",
"684f21c9781c8ee0ac6d4645db350494f9c835a8"
],
"answer": [
{
"evidence": [
"Table TABREF22 shows SA-WERs for two different diarization configurations and two different experiment setups. In the first setup, we assumed all attendees were invited to the meetings and therefore their face and voice signatures were available in advance. In the second setup, we used precomputed face and voice signatures for 50% of the attendees and the other speakers were treated as `guests'. A diarization system using only face identification and SSL may be regarded as a baseline as this approach was widely used in previous audio-visual diarization studies BIBREF33, BIBREF34, BIBREF35. The results show that the use of speaker identification substantially improved the speaker attribution accuracy. The SA-WERs were improved by 11.6% and 6.0% when the invited/guest ratios were 100/0 and 50/50, respectively. The small differences between the SA-WERs from Table TABREF22 and the WER from Table TABREF22 indicate very accurate speaker attribution."
],
"extractive_spans": [
"A diarization system using only face identification and SSL"
],
"free_form_answer": "",
"highlighted_evidence": [
" diarization system using only face identification and SSL may be regarded as a baseline as this approach was widely used in previous audio-visual diarization studies BIBREF33, BIBREF34, BIBREF35. The results show that the use of speaker identification substantially improved the speaker attribution accuracy. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The first row of Table TABREF22 shows the proposed system's WERs for the gold standard test set. The WERs were calculated over all segments as well as those not containing overlapped periods. The second row shows the WERs of a conventional approach using single-output beamforming. Specifically, we replaced CSS in Fig. FIGREF1 by a differential beamformer which was optimized for our device and ran speech recognition on the beamformed signal. In BIBREF56, we verified that our beamformer slightly outperformed a state-of-the-art mask-based MVDR beamformer. The proposed system achieved a WER of 18.7%, outperforming the system without CSS by 3.6 percentage points, or 16.1% relative. For single-speaker segments, the two systems yielded similar WERs, close to 15%. From these results, we can see that CSS improved the recognition accuracy for overlapped segments, which accounted for about 50% of all the segments."
],
"extractive_spans": [],
"free_form_answer": "The baseline system was a conventional speech recognition approach using single-output beamforming.",
"highlighted_evidence": [
"The first row of Table TABREF22 shows the proposed system's WERs for the gold standard test set. The WERs were calculated over all segments as well as those not containing overlapped periods. The second row shows the WERs of a conventional approach using single-output beamforming. Specifically, we replaced CSS in Fig. FIGREF1 by a differential beamformer which was optimized for our device and ran speech recognition on the beamformed signal."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
]
}
],
"nlp_background": [
"zero",
"zero"
],
"paper_read": [
"no",
"no"
],
"question": [
"Are face tracking, identification, localization etc multimodal inputs in some ML model or system is programmed by hand?",
"What are baselines used?"
],
"question_id": [
"5f25b57a1765682331e90a46c592a4cea9e3a336",
"2ba2ff6c21a16bd295b07af1ef635b3b4c5bd17e"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Fig. 1. Processing flow diagram of SRD framework for two stream configuration. To run the whole system online, the video processing and SR modules are assigned their own dedicated resources. WPE: weighted prediction error minimization for dereverberation. CSS: continuous speech separation. SR: speech recognition. SD: speaker diarization. SSL: sound source localization.",
"Fig. 2. Speech separation processing flow diagram.",
"Table 1. WERs on gold standard test set.",
"Table 3. WER and SA-WER on extended test set."
],
"file": [
"3-Figure1-1.png",
"3-Figure2-1.png",
"6-Table1-1.png",
"6-Table3-1.png"
]
} | [
"Are face tracking, identification, localization etc multimodal inputs in some ML model or system is programmed by hand?",
"What are baselines used?"
] | [
[
"1912.04979-Speaker Diarization-7",
"1912.04979-Speaker Diarization ::: Face tracking and identification-2",
"1912.04979-Speaker Diarization ::: Sound source localization-1",
"1912.04979-Speaker Diarization ::: Face tracking and identification-4",
"1912.04979-Speaker Diarization-13",
"1912.04979-Speaker Diarization-4",
"1912.04979-Speaker Diarization ::: Face tracking and identification-3",
"1912.04979-Speaker Diarization-8",
"1912.04979-Speaker Diarization-12",
"1912.04979-Speaker Diarization-5",
"1912.04979-Speaker Diarization-6",
"1912.04979-Introduction-13",
"1912.04979-Speaker Diarization-9",
"1912.04979-Speaker Diarization ::: Face tracking and identification-1",
"1912.04979-Speaker Diarization-11",
"1912.04979-Speaker Diarization-10"
],
[
"1912.04979-Experimental Results-3",
"1912.04979-Experimental Results-4"
]
] | [
"Input in ML model",
"The baseline system was a conventional speech recognition approach using single-output beamforming."
] | 403 |
1712.00733 | Incorporating External Knowledge to Answer Open-Domain Visual Questions with Dynamic Memory Networks | Visual Question Answering (VQA) has attracted much attention since it offers insight into the relationships between the multi-modal analysis of images and natural language. Most of the current algorithms are incapable of answering open-domain questions that require reasoning beyond the image contents. To address this issue, we propose a novel framework which endows the model with the capability to answer more complex questions by leveraging massive external knowledge with dynamic memory networks. Specifically, the questions along with the corresponding images trigger a process to retrieve the relevant information in external knowledge bases, which is embedded into a continuous vector space by preserving the entity-relation structures. Afterwards, we employ dynamic memory networks to attend to the large body of facts in the knowledge graph and images, and then perform reasoning over these facts to generate the corresponding answers. Extensive experiments demonstrate that our model not only achieves state-of-the-art performance on the visual question answering task, but can also answer open-domain questions effectively by leveraging the external knowledge. | {
"paragraphs": [
[
"Visual Question Answering (VQA) is a ladder towards a better understanding of the visual world, which pushes forward the boundaries of both computer vision and natural language processing. A system in VQA tasks is given a text-based question about an image, which is expected to generate a correct answer corresponding to the question. In general, VQA is a kind of Visual Turing Test, which rigorously assesses whether a system is able to achieve human-level semantic analysis of images BIBREF0 , BIBREF1 . A system could solve most of the tasks in computer vision if it performs as well as or better than humans in VQA. In this case, it has garnered increasing attentions due to its numerous potential applications BIBREF2 , such as providing a more natural way to improve human-computer interaction, enabling the visually impaired individuals to get information about images, etc.",
"To fulfill VQA tasks, it requires to endow the responder to understand intention of the question, reason over visual elements of the image, and sometimes have general knowledge about the world. Most of the present methods solve VQA by jointly learning interactions and performing inference over the question and image contents based on the recent success of deep learning BIBREF3 , BIBREF2 , BIBREF4 , BIBREF5 , BIBREF6 , which can be further improved by introducing the attention mechanisms BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 . However, most of questions in the current VQA dataset are quite simple, which are answerable by analyzing the question and image alone BIBREF2 , BIBREF13 . It can be debated whether the system can answer questions that require prior knowledge ranging common sense to subject-specific and even expert-level knowledge. It is attractive to develop methods that are capable of deeper image understanding by answering open-domain questions BIBREF13 , which requires the system to have the mechanisms in connecting VQA with structured knowledge, as is shown in Fig. 1 . Some efforts have been made in this direction, but most of them can only handle a limited number of predefined types of questions BIBREF14 , BIBREF15 .",
"Different from the text-based QA problem, it is unfavourable to conduct the open-domain VQA based on the knowledge-based reasoning, since it is inevitably incomplete to describe an image with structured forms BIBREF16 . The recent availability of large training datasets BIBREF13 makes it feasible to train a complex model in an end-to-end fashion by leveraging the recent advances in deep neural networks (DNN) BIBREF2 , BIBREF5 , BIBREF7 , BIBREF10 , BIBREF12 . Nevertheless, it is non-trivial to integrate knowledge into DNN-based methods, since the knowledge are usually represented in a symbol-based or graph-based manner (e.g., Freebase BIBREF17 , DBPedia BIBREF18 ), which is intrinsically different from the DNN-based features. A few attempts are made in this direction BIBREF19 , but it may involve much irrelevant information and fail to implement multi-hop reasoning over several facts.",
"The memory networks BIBREF20 , BIBREF21 , BIBREF22 offer an opportunity to address these challenges by reading from and writing to the external memory module, which is modeled by the actions of neural networks. Recently, it has demonstrated the state-of-the-art performance in numerous NLP applications, including the reading comprehension BIBREF23 and textual question answering BIBREF24 , BIBREF22 . Some seminal efforts are also made to implement VQA based on dynamic memory networks BIBREF25 , but it does not involve the mechanism to incorporate the external knowledge, making it incapable of answering open-domain visual questions. Nevertheless, the attractive characteristics motivate us to leverage the memory structures to encode the large-scale structured knowledge and fuse it with the image features, which offers an approach to answer open domain visual questions."
],
[
"To address the aforementioned issues, we propose a novel Knowledge-incorporated Dynamic Memory Network framework (KDMN), which allows to introduce the massive external knowledge to answer open-domain visual questions by exploiting the dynamic memory network. It endows a system with an capability to answer a broad class of open-domain questions by reasoning over the image content incorporating the massive knowledge, which is conducted by the memory structures.",
"Different from most of existing techniques that focus on answering visual questions solely on the image content, we propose to address a more challenging scenario which requires to implement reasoning beyond the image content. The DNN-based approaches BIBREF2 , BIBREF5 , BIBREF7 are therefore not sufficient, since they can only capture information present in the training images. Recent advances witness several attempts to link the knowledge to VQA methods BIBREF14 , BIBREF15 , which make use of structured knowledge graphs and reason about an image on the supporting facts. Most of these algorithms first extract the visual concepts from a given image, and implement reasoning over the structured knowledge bases explicitly. However, it is non-trivial to extract sufficient visual attributes, since an image lacks the structure and grammatical rules as language. To address this issue, we propose to retrieve a bath of candidate knowledge corresponding to the given image and related questions, and feed them to the deep neural network implicitly. The proposed approach provides a general pipeline that simultaneously preserves the advantages of DNN-based approaches BIBREF2 , BIBREF5 , BIBREF7 and knowledge-based techniques BIBREF14 , BIBREF15 .",
"In general, the underlying symbolic nature of a Knowledge Graph (KG) makes it difficult to integrate with DNNs. The usual knowledge graph embedding models such as TransE BIBREF26 focus on link prediction, which is different from VQA task aiming to fuse knowledge. To tackle this issue, we propose to embed the entities and relations of a KG into a continuous vector space, such that the factual knowledge can be used in a more simple manner. Each knowledge triple is treated as a three-word SVO $(subject, verb, object)$ phrase, and embedded into a feature space by feeding its word-embedding through an RNN architecture. In this case, the proposed knowledge embedding feature shares a common space with other textual elements (questions and answers), which provides an additional advantage to integrate them more easily.",
"Once the massive external knowledge is integrated into the model, it is imperative to provide a flexible mechanism to store a richer representation. The memory network, which contains scalable memory with a learning component to read from and write to it, allows complex reasoning by modeling interaction between multiple parts of the data BIBREF20 , BIBREF25 . In this paper, we adopt the most recent advance of Improved Dynamic Memory Networks (DMN+) BIBREF25 to implement the complex reasoning over several facts. Our model provides a mechanism to attend to candidate knowledge embedding in an iterative manner, and fuse it with the multi-modal data including image, text and knowledge triples in the memory component. The memory vector therefore memorizes useful knowledge to facilitate the prediction of the final answer. Compared with the DMN+ BIBREF25 , we introduce the external knowledge into the memory network, and endows the system an ability to answer open-domain question accordingly.",
"To summarize, our framework is capable of conducting the multi-modal data reasoning including the image content and external knowledge, such that the system is endowed with a more general capability of image interpretation. Our main contributions are as follows:",
"To our best knowledge, this is the first attempt to integrating the external knowledge and image representation with a memory mechanism, such that the open-domain visual question answering can be conducted effectively with the massive knowledge appropriately harnessed;",
"We propose a novel structure-preserved method to embed the knowledge triples into a common space with other textual data, making it flexible to integrate different modalities of data in an implicit manner such as image, text and knowledge triples;",
"We propose to exploit the dynamic memory network to implement multi-hop reasonings, which has a capability to automatically retrieve the relevant information in the knowledge bases and infer the most probable answers accordingly."
],
[
"In this section, we outline our model to implement the open-domain visual question answering. In order to conduct the task, we propose to incorporate the image content and external knowledge by exploiting the most recent advance of dynamic memory network BIBREF22 , BIBREF25 , yielding three main modules in Fig. 2 . The system is therefore endowed with an ability to answer arbitrary questions corresponding to a specific image.",
"Considering of the fact that most of existing VQA datasets include a minority of questions that require prior knowledge, the performance therefore cannot reflect the particular capabilities. We automatically produce a collection of more challenging question-answer pairs, which require complex reasoning beyond the image contents by incorporating the external knowledge. We hope that it can serve as a benchmark for evaluating the capability of various VQA models on the open-domain scenarios .",
"Given an image, we apply the Fast-RCNN BIBREF27 to detect the visual objects of the input image, and extract keywords of the corresponding questions with syntax analysis. Based on these information, we propose to learn a mechanism to retrieve the candidate knowledge by querying the large-scale knowledge graph, yielding a subgraph of relevant knowledge to facilitate the question answering. During the past years, a substantial amount of large-scale knowledge bases have been developed, which store common sense and factual knowledge in a machine readable fashion. In general, each piece of structured knowledge is represented as a triple $(subject, rel, object)$ with $subject$ and $object$ being two entities or concepts, and $rel$ corresponding to the specific relationship between them. In this paper, we adopt external knowledge mined from ConceptNet BIBREF28 , an open multilingual knowledge graph containing common-sense relationships between daily words, to aid the reasoning of open-domain VQA.",
"Our VQA model provides a novel mechanism to integrate image information with that extracted from the ConceptNet within a dynamic memory network. In general, it is non-trivial to integrate the structured knowledge with the DNN features due to their different modalities. To address this issue, we embed the entities and relations of the subgraph into a continuous vector space, which preserves the inherent structure of the KGs. The feature embedding provides a convenience to fuse with the image representation in a dynamic memory network, which builds on the attention mechanism and the memory update mechanism. The attention mechanism is responsible to produce the contextual vector with relevance inferred by the question and previous memory status. The memory update mechanism then renews the memory status based on the contextual vector, which can memorize useful information for predicting the final answer. The novelty lies the fact that these disparate forms of information are embedded into a common space based on memory network, which facilities the subsequent answer reasoning.",
"Finally, we generate a predicted answer by reasoning over the facts in the memory along with the image contents. In this paper, we focus on the task of multi-choice setting, where several multi-choice candidate answers are provided along with a question and a corresponding image. For each question, we treat every multi-choice answer as input, and predict whether the image-question-answer triplet is correct. The proposed model tries to choose one candidate answer with the highest probability by inferring the cross entropy error on the answers through the entire network."
],
[
"In this section, we elaborate on the details and formulations of our proposed model for answering open-domain visual questions. We first retrieve an appropriate amount of candidate knowledge from the large-scale ConceptNet by analyzing the image content and the corresponding questions; afterward, we propose a novel framework based on dynamic memory network to embed these symbolic knowledge triples into a continuous vector space and store it in a memory bank; finally, we exploit these information to implement the open-domain VQA by fusing the knowledge with image representation."
],
[
"In order to answer the open-domain visual questions, we should sometime access information not present in the image by retrieving the candidate knowledge in the KBs. A desirable knowledge retrieval should include most of the useful information while ignore the irrelevant ones, which is essential to avoid model misleading and reduce the computation cost. To this end, we take the following three principles in consideration as (1) entities appeared in images and questions (key entities) are critical; (2) the importance of entities that have direct or indirect links to key entities decays as the number of link hops increases; (3) edges between these entities are potentially useful knowledge.",
"Following these principles, we propose a three-step procedure to retrieve that candidate knowledge that are relevant to the context of images and questions. The retrieval procedure pays more attention on graph nodes that are linked to semantic entities, which also takes account of graph structure for measuring edge importance.",
"In order to retrieve the most informative knowledge, we first extract the candidate nodes in the ConceptNet by analyzing the prominent visual objects in images with Fast-RCNN BIBREF27 , and textual keywords with the Natural Language Toolkit BIBREF29 . Both of them are then associated with the corresponding semantic entities in ConceptNet BIBREF28 by matching all possible n-grams of words. Afterwards, we retrieve the first-order subgraph using these selected nodes from ConceptNet BIBREF28 , which includes all edges connecting with at least one candidate node. It is assumed that the resultant subgraph contains the most relevant information, which is sufficient to answer questions by reducing the redundancy. The resultant first-order knowledge subgraph is denoted as $G$ .",
"Finally, we compress the subgraph $G$ by evaluating and ranking the importance of edges in $G$ using a designed score function, and carefully select the top- $N$ edges along with the nodes for subsequent task. Specifically, we first assign initial weights $w_{i}$ for each subgraph node, e.g., the initial weights for visual object can be proportional to their corresponding bounding-box area such that the dominant objects receive more attention, the textual keywords are treated equally. Then, we calculate the importance score of each node in $G$ by traversing each edge and propagating node weights to their neighbors with a decay factor $r\\in (0,1)$ as ",
"$$score(i)=w_{i}+\\sum _{j \\in G \\backslash i} r ^n w_{j},$$ (Eq. 8) ",
" where $n$ is the number of link hops between the entity $i$ and $j$ . For simplicity, we ignore the edge direction and edge type (relation type), and define the importance of edge $w_{i,j}$ as the weights sum of two connected nodes as ",
"$$w_{i,j}=score(i)+score(j), \\quad \\forall (i,j) \\in G.$$ (Eq. 9) ",
"In this paper, we take the top- $N$ edges ranked by $w_{i,j}$ as the final candidate knowledge for the given context, denoted as $G^\\ast $ ."
],
[
"The candidate knowledge that we have extracted is represented in a symbolic triplet format, which is intrinsically incompatible with DNNs. This fact urges us to embed the entities and relation of knowledge triples into a continuous vector space. Moreover, we regard each entity-relation-entity triple as one knowledge unit, since each triple naturally represents one piece of fact. The knowledge units can be stored in memory slots for reading and writing, and distilled through an attention mechanism for the subsequent tasks.",
"In order to embed the symbolic knowledge triples into memory vector slots, we treat the entities and relations as words, and map them into a continuous vector space using word embedding BIBREF30 . Afterwards, the embedded knowledge is encoded into a fixed-size vector by feeding it to a recurrent neural network (RNN). Specifically, we initialize the word-embedding matrix with a pre-trained GloVe word-embedding BIBREF30 , and refine it simultaneously with the rest of procedure of question and candidate answer embedding. In this case, the entities and relations share a common embedding space with other textual elements (questions and answers), which makes them much more flexible to fuse later.",
"Afterwards, the knowledge triples are treated as SVO phrases of $(subject, verb, object)$ , and fed to to a standard two-layer stacked LSTM as ",
"$$&C^{(t)}_{i} = \\text{LSTM}\\left(\\mathbf {L}[w^{t}_{i}], C^{(t-1)}_{i}\\right), \\\\\n& t=\\lbrace 1,2,3\\rbrace , \\text{ and } i=1, \\cdots , N,\\nonumber $$ (Eq. 11) ",
" where $w^{t}_{i}$ is the $t_{\\text{th}}$ word of the $i_{\\text{th}}$ SVO phrase, $(w^{1}_{i},w^{2}_{i},w^{3}_{i}) \\in G^\\ast $ , $\\mathbf {L}$ is the word embedding matrix BIBREF30 , and $C_{i}$ is the internal state of LSTM cell when forwarding the $i_{\\text{th}}$ SVO phrase. The rationale lies in the fact that the LSTM can capture the semantic meanings effectively when the knowledge triples are treated as SVO phrases.",
"For each question-answering context, we take the LSTM internal states of the relevant knowledge triples as memory vectors, yielding the embedded knowledge stored in memory slots as ",
"$$\\mathbf {M}=\\left[C^{(3)}_{i}\\right],$$ (Eq. 12) ",
"where $\\mathbf {M}(i)$ is the $i_{\\text{th}}$ memory slot corresponding to the $i_{\\text{th}}$ knowledge triples, which can be used for further answer inference. Note that the method is different from the usual knowledge graph embedding models, since our model aims to fuse knowledge with the latent features of images and text, whereas the alternative models such as TransE BIBREF26 focus on link prediction task."
],
[
"We have stored $N$ relevant knowledge embeddings in memory slots for a given question-answer context, which allows to incorporate massive knowledge when $N$ is large. The external knowledge overwhelms other contextual information in quantity, making it imperative to distill the useful information from the candidate knowledge. The Dynamic Memory Network (DMN) BIBREF22 , BIBREF25 provides a mechanism to address the problem by modeling interactions among multiple data channels. In the DMN module, an episodic memory vector is formed and updated during an iterative attention process, which memorizes the most useful information for question answering. Moreover, the iterative process brings a potential capability of multi-hop reasoning.",
"This DMN consists of an attention component which generates a contextual vector using the previous memory vector, and an episodic memory updating component which updates itself based on the contextual vector. Specifically, we propose a novel method to generate the query vector $\\mathbf {q}$ by feeding visual and textual features to a non-linear fully-connected layer to capture question-answer context information as ",
"$$\\mathbf {q} = \\tanh \\left(\\mathbf {W}_{1}\n\\left[\\mathbf {f}^{(I)};\\mathbf {f}^{(Q)};\\mathbf {f}^{(A)}\\right]+\\mathbf {b}_{1}\\right),$$ (Eq. 14) ",
"where $\\mathbf {W}_1$ and $\\mathbf {b}_{1}$ are the weight matrix and bias vector, respectively; and, $\\mathbf {f}^{(I)}$ , $\\mathbf {f}^{(Q)}$ and $\\mathbf {f}^{(A)}$ are denoted as DNN features corresponding to the images, questions and multi-choice answers, respectively. The query vector $\\mathbf {q}$ captures information from question-answer context. During the training process, the query vector $\\mathbf {q}$ initializes an episodic memory vector $\\mathbf {m}^{(0)}$ as $\\mathbf {m}^{(0)}=\\mathbf {q}$ . A iterative attention process is then triggered, which gradually refines the episodic memory $\\mathbf {m}$ until the maximum number of iterations steps $\\mathbf {b}_{1}$0 is reached. By the $\\mathbf {b}_{1}$1 iteration, the episodic memory $\\mathbf {b}_{1}$2 will memorize useful visual and external information to answer the question. Attention component. At the $\\mathbf {b}_{1}$3 iteration, we concatenate each knowledge embedding $\\mathbf {b}_{1}$4 with last iteration episodic memory $\\mathbf {b}_{1}$5 and query vector $\\mathbf {b}_{1}$6 , then apply the basic soft attention procedure to obtain the $\\mathbf {b}_{1}$7 context vector $\\mathbf {b}_{1}$8 as ",
"$$\\mathbf {z}_{i}^{(t)} &= \\left[\\mathbf {M}_{i};\\mathbf {m}^{(t-1)};\\mathbf {q}\\right] \\\\\n\\alpha ^{(t)} &= softmax\\left(\\mathbf {w}\\tanh \\left(\\mathbf {W}_{2}\\mathbf {z}_{i}^{(t)}+\\mathbf {b}_{2}\\right) \\right) \\\\\n\\mathbf {c}^{(t)}&=\\sum _{i=1}^{N}\\alpha _{i}^{(t)}\\mathbf {M}_{i} \\quad t=1, \\cdots , T,$$ (Eq. 15) ",
" where $\\mathbf {z}_{i}^{(t)}$ is the concatenated vector for the $i_{\\text{th}}$ candidate memory at the $t_{\\text{th}}$ iteration; $\\alpha _{i}^{(t)}$ is the $i_{\\text{th}}$ element of $\\alpha ^{(t)}$ representing the normalized attention weight for $\\mathbf {M}_{i}$ at the $t_{\\text{th}}$ iteration; and, $\\mathbf {w}$ , $\\mathbf {W}_{2}$ and $i_{\\text{th}}$0 are parameters to be optimized in deep neural networks.",
"Hereby, we obtain the contextual vector $\\mathbf {c}^{(t)}$ , which captures useful external knowledge for updating episodic memory $\\mathbf {m}^{(t-1)}$ and providing the supporting facts to answer the open-domain questions.",
"Episodic memory updating component. We apply the memory update mechanism BIBREF21 , BIBREF25 as ",
"$$\\mathbf {m}^{(t)}=ReLU\\left(\\mathbf {W}_{3}\n\\left[\\mathbf {m}^{(t-1)};\\mathbf {c}^{(t)};\\mathbf {q}\\right]+\\mathbf {b}_{3}\\right),$$ (Eq. 16) ",
"where $\\mathbf {W}_{3}$ and $\\mathbf {b}_{3}$ are parameters to be optimized. After the iteration, the episodic memory $\\mathbf {m}^{(T)}$ memorizes useful knowledge information to answer the open-domain question.",
"Compared with the DMN+ model implemented in BIBREF25 , we allows the dynamic memory network to incorporate the massive external knowledge into procedure of VQA reasoning. It endows the system with the capability to answer more general visual questions relevant but beyond the image contents, which is more attractive in practical applications.",
"Fusion with episodic memory and inference. Finally, we embed visual features $\\mathbf {f}^{(I)}$ along with the textual features $\\mathbf {f}^{(Q)}$ and $\\mathbf {f}^{(A)}$ to a common space, and fuse them together using Hadamard product (element-wise multiplication) as ",
"$$&\\mathbf {e}^{(k)}=\\tanh \\left(\\mathbf {W}^{(k)}\\mathbf {f}^{(k)}+\\mathbf {b}^{(k)}\\right), k \\in \\lbrace I, Q, A\\rbrace \\\\\n&\\mathbf {h} =\\mathbf {e}^{(I)} \\odot \\mathbf {e}^{(Q)} \\odot \\mathbf {e}^{(A)},$$ (Eq. 17) ",
" where $\\mathbf {e}^{(I)}$ , $\\mathbf {e}^{(Q)}$ and $\\mathbf {e}^{(A)}$ are embedded features for image, question and answer, respectively; $\\mathbf {h}$ is the fused feature in this common space; and, $\\mathbf {W}^{(I)}$ , $\\mathbf {W}^{(Q)}$ and $\\mathbf {W}^{(A)}$ are corresponding to the parameters in neural networks.",
"The final episodic memory $\\mathbf {m}^{(T)}$ are concatenated with the fused feature $\\mathbf {h}$ to predict the probability of whether the multi-choice candidate answer is correct as ",
"$$ans^* = \\operatornamewithlimits{arg\\,max}_{ans \\in \\lbrace 1,2,3,4\\rbrace }\nsoftmax\\left(\\mathbf {W}_{4}\\left[\\mathbf {h}_{ans};\\mathbf {m}^{(T)}_{ans}\\right]+\\mathbf {b}_{4}\\right),$$ (Eq. 18) ",
"where $ans$ represents the index of multi-choice candidate answers; the supported knowledge triples are stored in $\\mathbf {m}^{(T)}_{ans}$ ; and, $\\mathbf {W}_{4}$ and $\\mathbf {b}_{4}$ are the parameters to be optimized in the DNNs. The final choice are consequentially obtained once we have $ans^\\ast $ .",
"Our training objective is to learn parameters based on a cross-entropy loss function as ",
"$$\\mathcal {L} = -\\frac{1}{D}\\sum _{i}^{D}\\big (y_{i}\\log \\hat{y_{i}}+(1-y_{i})\\log (1-\\hat{y_{i}})\\big ),$$ (Eq. 19) ",
"where $\\hat{y_{i}}=p_{i}(A^{(i)}|I^{(i)},Q^{(i)},K^{(i)};\\theta )$ represents the probability of predicting the answer $A^{(i)}$ , given the $i_{\\text{th}}$ image $I^{(i)}$ , question $Q^{(i)}$ and external knowledge $K^{(i)}$ ; $\\theta $ represents the model parameters; $D$ is the number of training samples; and $y_{i}$ is the label for the $i_{\\text{th}}$ sample. The model can be trained in an end-to-end manner once we have the candidate knowledge triples are retrieved from the original knowledge graph."
],
[
"In this section, we conduct extensive experiments to evaluate performance of our proposed model, and compare it with its variants and the alternative methods. We specifically implement the evaluation on a public benchmark dataset (Visual7W) BIBREF7 for the close-domain VQA task, and also generate numerous arbitrary question-answers pairs automatically to evaluate the performance on open-domain VQA. In this section, we first briefly review the dataset and the implementation details, and then report the performance of our proposed method comparing with several baseline models on both close-domain and open-domain VQA tasks."
],
[
"We train and evaluate our model on a public available large-scale visual question answering datasets, the Visual7W dataset BIBREF7 , due to the diversity of question types. Besides, since there is no public available open-domain VQA dataset for evaluation now, we automatically build a collection of open-domain visual question-answer pairs to examine the potentiality of our model for answering open-domain visual questions.",
"The Visual7W dataset BIBREF7 is built based on a subset of images from Visual Genome BIBREF31 , which includes questions in terms of (what, where, when, who, why, which and how) along with the corresponding answers in a multi-choice format. Similar as BIBREF7 , we divide the dataset into training, validation and test subsets, with totally 327,939 question-answer pairs on 47,300 images. Compared with the alternative dataset, Visual7W has a diverse type of question-answer and image content BIBREF13 , which provides more opportunities to assess the human-level capability of a system on the open-domain VQA.",
"In this paper, we automatically generate numerous question-answer pairs by considering the image content and relevant background knowledge, which provides a test bed for the evaluation of a more realistic VQA task. Specifically, we generate a collection automatically based on the test image in the Visual7W by filling a set of question-answer templates, which means that the information is not present during the training stage. To make the task more challenging, we selectively sample the question-answer pairs that need to reasoning on both visual concept in the image and the external knowledge, making it resemble the scenario of the open-domain visual question answering. In this paper, we generate 16,850 open-domain question-answer pairs on images in Visual7W test split. More details on the QA generation and relevant information can be found in the supplementary material."
],
[
"In our experiments, we fix the joint-embedding common space dimension as 1024, word-embedding dimension as 300 and the dimension of LSTM internal states as 512. We use a pre-trained ResNet-101 BIBREF32 model to extract image feature, and select 20 candidate knowledge triples for each QA pair through the experiments. Empirical study demonstrates it is sufficient in our task although more knowledge triples are also allowed. The iteration number of a dynamic memory network update is set to 2, and the dimension of episodic memory is set to 2048, which is equal to the dimension of memory slots.",
"In this paper, we combine the candidate Question-Answer pair to generate a hypothesis, and formulate the multi-choice VQA problem as a classification task. The correct answer can be determined by choosing the one with the largest probability. In each iteration, we randomly sample a batch of 500 QA pairs, and apply stochastic gradient descent algorithm with a base learning rate of 0.0001 to tune the model parameters. The candidate knowledge is first retrieved, and other modules are trained in an end-to-end manner.",
"In order to analyze the contributions of each component in our knowledge-enhanced, memory-based model, we ablate our full model as follows:",
"KDMN-NoKG: baseline version of our model. No external knowledge involved in this model. Other parameters are set the same as full model.",
"KDMN-NoMem: a version without memory network. External knowledge triples are used by one-pass soft attention.",
"KDMN: our full model. External knowledge triples are incorporated in Dynamic Memory Network.",
"We also compare our method with several alternative VQA methods including (1) LSTM-Att BIBREF7 , a LSTM model with spatial attention; (2) MemAUG BIBREF33 : a memory-augmented model for VQA; (3) MCB+Att BIBREF6 : a model combining multi-modal features by Multimodal Compact Bilinear pooling; (4) MLAN BIBREF11 : an advanced multi-level attention model."
],
[
"In this section, we report the quantitative evaluation along with representative samples of our method, compared with our ablative models and the state-of-the-art method for both the conventional (close-domain) VQA task and open-domain VQA.",
"In this section, we report the quantitative accuracy in Table 1 along with the sample results in 3 . The overall results demonstrate that our algorithm obtains different boosts compared with the competitors on various kinds of questions, e.g., significant improvements on the questions of Who ( $5.9\\%$ ), and What ( $4.9\\%$ ) questions, and slightly boost on the questions of When ( $1.4\\%$ ) and How ( $2.0\\%$ ). After inspecting the success and failure cases, we found that the Who and What questions have larger diversity in questions and multi-choice answers compared to other types, therefore benefit more from external background knowledge. Note that compared with the method of MemAUG BIBREF33 in which a memory mechanism is also adopted, our algorithm still gain significant improvement, which further confirms our belief that the background knowledge provides critical supports.",
"We further make comprehensive comparisons among our ablative models. To make it fair, all the experiments are implemented on the same basic network structure and share the same hyper-parameters. In general, our KDMN model on average gains $1.6\\%$ over the KDMN-NoMem model and $4.0\\%$ over the KDMN-NoKG model, which further implies the effectiveness of dynamic memory networks in exploiting external knowledge. Through iterative attention processes, the episodic memory vector captures background knowledge distilled from external knowledge embeddings. The KDMN-NoMem model gains $2.4\\%$ over the KDMN-NoKG model, which implies that the incorporated external knowledge brings additional advantage, and act as a supplementary information for predicting the final answer. The indicative examples in Fig. 3 also demonstrate the impact of external knowledge, such as the 4th example of “Why is the light red?”. It would be helpful if we could retrieve the function of the traffic lights from the external knowledge effectively.",
"In this section, we report the quantitative performance of open-domain VQA in Table 2 along with the sample results in Fig. 4 . Since most of the alternative methods do not provide the results in the open-domain scenario, we make comprehensive comparison with our ablative models. As expected, we observe that a significant improvement ( $12.7\\%$ ) of our full KDMN model over the KDMN-NoKG model, where $6.8\\%$ attributes to the involvement of external knowledge and $5.9\\%$ attributes to the usage of memory network. Examples in Fig. 4 further provide some intuitive understanding of our algorithm. It is difficult or even impossible for a system to answer the open domain question when comprehensive reasoning beyond image content is required, e.g., the background knowledge for prices of stuff is essential for a machine when inferring the expensive ones. The larger performance improvement on open-domain dataset supports our belief that background knowledge is essential to answer general visual questions. Note that the performance can be further improved if the technique of ensemble is allowed. We fused the results of several KDMN models which are trained from different initializations. Experiments demonstrate that we can further obtain an improvement about $3.1\\%$ ."
],
[
"In this paper, we proposed a novel framework named knowledge-incorporate dynamic memory network (KDMN) to answer open-domain visual questions by harnessing massive external knowledge in dynamic memory network. Context-relevant external knowledge triples are retrieved and embedded into memory slots, then distilled through a dynamic memory network to jointly inference final answer with visual features. The proposed pipeline not only maintains the superiority of DNN-based methods, but also acquires the ability to exploit external knowledge for answering open-domain visual questions. Extensive experiments demonstrate that our method achieves competitive results on public large-scale dataset, and gain huge improvement on our generated open-domain dataset."
],
[
"We obey several principles when building the open-domain VQA dataset for evaluation: (1) The question-answer pairs should be generated automatically; (2) Both of visual information and external knowledge should be required when answering these generated open-domain visual questions; (3) The dataset should in multi-choices setting, in accordance with the Visual7W dataset for fair comparison.",
"The open-domain question-answer pairs are generated based on a subset of images in Visual7W BIBREF7 standard test split, so that the test images are not present during the training stage. For one particular image that we need to generate open-domain question-answer pairs about, we firstly extract several prominent visual objects and randomly select one visual object. After linked to a semantic entity in ConceptNet BIBREF28 , the visual object connects other entities in ConceptNet through various relations, e.g. UsedFor, CapableOf, and forms amount of knowledge triples $(head, relation, tail)$ , where either $head$ or $tail$ is the visual object. Again, we randomly select one knowledge triple, and fill into a $relation$ -related question-answer template to obtain the question-answer pair. These templates assume that the correct answer satisfies knowledge requirement as well as appear in the image, as shown in table 3 .",
"For each open-domain question-answer pair, we generate three additional confusing items as candidate answers. These candidate answers are randomly sampled from a collection of answers, which is composed of answers from other question-answer pairs belonging to the same $relation$ type. In order to make the open-domain dataset more challenging, we selectively sample confusing answers, which either satisfy knowledge requirement or appear in the image, but not satisfy both of them as the ground-truth answers do. Specifically, one of the confusing answers satisfies knowledge requirement but not appears in image, so that the model must attend to visual objects in image; another one of the confusing answers appears in the image but not satisfies knowledge requirement, so that the model must reason on external knowledge to answer these open-domain questions. Please see examples in Figure 5 .",
"In total, we generate 16,850 open-domain question-answer pairs based on 8,425 images in Visual7W test split."
]
],
"section_name": [
"Introduction",
"Our Proposal",
"Overview",
"Answer Open-Domain Visual Questions",
"Candidate Knowledge Retrieval ",
"Knowledge Embedding in Memories",
"Attention-based Knowledge Fusion with DNNs",
"Experiments",
"Datasets",
"Implementation Details",
"Results and Analysis",
"Conclusion",
"Details of our Open-domain Dataset Generation"
]
} | {
"answers": [
{
"annotation_id": [
"62586625f55ac7e9d3273b01967711c882f71cc5",
"9b9c11f4510e0e214add684e185578da0e84bb31",
"eb558d610df82d74f297adaf1a571b175b02dd28"
],
"answer": [
{
"evidence": [
"We also compare our method with several alternative VQA methods including (1) LSTM-Att BIBREF7 , a LSTM model with spatial attention; (2) MemAUG BIBREF33 : a memory-augmented model for VQA; (3) MCB+Att BIBREF6 : a model combining multi-modal features by Multimodal Compact Bilinear pooling; (4) MLAN BIBREF11 : an advanced multi-level attention model."
],
"extractive_spans": [
"LSTM-Att BIBREF7 , a LSTM model with spatial attention",
"MemAUG BIBREF33 : a memory-augmented model for VQA",
"MCB+Att BIBREF6 : a model combining multi-modal features by Multimodal Compact Bilinear pooling",
"MLAN BIBREF11 : an advanced multi-level attention model"
],
"free_form_answer": "",
"highlighted_evidence": [
"We also compare our method with several alternative VQA methods including (1) LSTM-Att BIBREF7 , a LSTM model with spatial attention; (2) MemAUG BIBREF33 : a memory-augmented model for VQA; (3) MCB+Att BIBREF6 : a model combining multi-modal features by Multimodal Compact Bilinear pooling; (4) MLAN BIBREF11 : an advanced multi-level attention model."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In order to analyze the contributions of each component in our knowledge-enhanced, memory-based model, we ablate our full model as follows:",
"KDMN-NoKG: baseline version of our model. No external knowledge involved in this model. Other parameters are set the same as full model.",
"KDMN-NoMem: a version without memory network. External knowledge triples are used by one-pass soft attention.",
"KDMN: our full model. External knowledge triples are incorporated in Dynamic Memory Network.",
"We also compare our method with several alternative VQA methods including (1) LSTM-Att BIBREF7 , a LSTM model with spatial attention; (2) MemAUG BIBREF33 : a memory-augmented model for VQA; (3) MCB+Att BIBREF6 : a model combining multi-modal features by Multimodal Compact Bilinear pooling; (4) MLAN BIBREF11 : an advanced multi-level attention model."
],
"extractive_spans": [],
"free_form_answer": "Ablated versions of the full model (without external knowledge, without memory network); alternative VQA methods: LSTM-Att, MemAUG, MCB+Att, MLAN",
"highlighted_evidence": [
"In order to analyze the contributions of each component in our knowledge-enhanced, memory-based model, we ablate our full model as follows:\n\nKDMN-NoKG: baseline version of our model. No external knowledge involved in this model. Other parameters are set the same as full model.\n\nKDMN-NoMem: a version without memory network. External knowledge triples are used by one-pass soft attention.\n\nKDMN: our full model. External knowledge triples are incorporated in Dynamic Memory Network.",
"We also compare our method with several alternative VQA methods including (1) LSTM-Att BIBREF7 , a LSTM model with spatial attention; (2) MemAUG BIBREF33 : a memory-augmented model for VQA; (3) MCB+Att BIBREF6 : a model combining multi-modal features by Multimodal Compact Bilinear pooling; (4) MLAN BIBREF11 : an advanced multi-level attention model."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We also compare our method with several alternative VQA methods including (1) LSTM-Att BIBREF7 , a LSTM model with spatial attention; (2) MemAUG BIBREF33 : a memory-augmented model for VQA; (3) MCB+Att BIBREF6 : a model combining multi-modal features by Multimodal Compact Bilinear pooling; (4) MLAN BIBREF11 : an advanced multi-level attention model."
],
"extractive_spans": [],
"free_form_answer": "LSTM with attention, memory augmented model, ",
"highlighted_evidence": [
"We also compare our method with several alternative VQA methods including (1) LSTM-Att BIBREF7 , a LSTM model with spatial attention; (2) MemAUG BIBREF33 : a memory-augmented model for VQA; (3) MCB+Att BIBREF6 : a model combining multi-modal features by Multimodal Compact Bilinear pooling; (4) MLAN BIBREF11 : an advanced multi-level attention model."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c7d4a630661cd719ea504dba56393f78278b296b",
"6edf8b2bd1b6e03a535504401e6969c850269632"
]
},
{
"annotation_id": [
"498ec7b92121482d063ed7a0ae6cc1b257247f93",
"be5693b24f636cbcd18ee5e4f4fea7e6a588c40e"
],
"answer": [
{
"evidence": [
"In this section, we conduct extensive experiments to evaluate performance of our proposed model, and compare it with its variants and the alternative methods. We specifically implement the evaluation on a public benchmark dataset (Visual7W) BIBREF7 for the close-domain VQA task, and also generate numerous arbitrary question-answers pairs automatically to evaluate the performance on open-domain VQA. In this section, we first briefly review the dataset and the implementation details, and then report the performance of our proposed method comparing with several baseline models on both close-domain and open-domain VQA tasks.",
"We train and evaluate our model on a public available large-scale visual question answering datasets, the Visual7W dataset BIBREF7 , due to the diversity of question types. Besides, since there is no public available open-domain VQA dataset for evaluation now, we automatically build a collection of open-domain visual question-answer pairs to examine the potentiality of our model for answering open-domain visual questions."
],
"extractive_spans": [
"Visual7W",
"a collection of open-domain visual question-answer pairs"
],
"free_form_answer": "",
"highlighted_evidence": [
"We specifically implement the evaluation on a public benchmark dataset (Visual7W) BIBREF7 for the close-domain VQA task, and also generate numerous arbitrary question-answers pairs automatically to evaluate the performance on open-domain VQA.",
"Besides, since there is no public available open-domain VQA dataset for evaluation now, we automatically build a collection of open-domain visual question-answer pairs to examine the potentiality of our model for answering open-domain visual questions."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We train and evaluate our model on a public available large-scale visual question answering datasets, the Visual7W dataset BIBREF7 , due to the diversity of question types. Besides, since there is no public available open-domain VQA dataset for evaluation now, we automatically build a collection of open-domain visual question-answer pairs to examine the potentiality of our model for answering open-domain visual questions."
],
"extractive_spans": [],
"free_form_answer": "Visual7W and an automatically constructed open-domain VQA dataset",
"highlighted_evidence": [
"We train and evaluate our model on a public available large-scale visual question answering datasets, the Visual7W dataset BIBREF7 , due to the diversity of question types. Besides, since there is no public available open-domain VQA dataset for evaluation now, we automatically build a collection of open-domain visual question-answer pairs to examine the potentiality of our model for answering open-domain visual questions."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"f840a836eee0180d2c976457f8b3052d8e78050c"
]
},
{
"annotation_id": [
"d4569f3e6d3fb6adab5757ccbffbf6ed38160c4b"
],
"answer": [
{
"evidence": [
"Given an image, we apply the Fast-RCNN BIBREF27 to detect the visual objects of the input image, and extract keywords of the corresponding questions with syntax analysis. Based on these information, we propose to learn a mechanism to retrieve the candidate knowledge by querying the large-scale knowledge graph, yielding a subgraph of relevant knowledge to facilitate the question answering. During the past years, a substantial amount of large-scale knowledge bases have been developed, which store common sense and factual knowledge in a machine readable fashion. In general, each piece of structured knowledge is represented as a triple $(subject, rel, object)$ with $subject$ and $object$ being two entities or concepts, and $rel$ corresponding to the specific relationship between them. In this paper, we adopt external knowledge mined from ConceptNet BIBREF28 , an open multilingual knowledge graph containing common-sense relationships between daily words, to aid the reasoning of open-domain VQA.",
"In general, the underlying symbolic nature of a Knowledge Graph (KG) makes it difficult to integrate with DNNs. The usual knowledge graph embedding models such as TransE BIBREF26 focus on link prediction, which is different from VQA task aiming to fuse knowledge. To tackle this issue, we propose to embed the entities and relations of a KG into a continuous vector space, such that the factual knowledge can be used in a more simple manner. Each knowledge triple is treated as a three-word SVO $(subject, verb, object)$ phrase, and embedded into a feature space by feeding its word-embedding through an RNN architecture. In this case, the proposed knowledge embedding feature shares a common space with other textual elements (questions and answers), which provides an additional advantage to integrate them more easily."
],
"extractive_spans": [],
"free_form_answer": "Word embeddings from knowledge triples (subject, rel, object) from ConceptNet are fed to an RNN",
"highlighted_evidence": [
"In general, each piece of structured knowledge is represented as a triple $(subject, rel, object)$ with $subject$ and $object$ being two entities or concepts, and $rel$ corresponding to the specific relationship between them. In this paper, we adopt external knowledge mined from ConceptNet BIBREF28 , an open multilingual knowledge graph containing common-sense relationships between daily words, to aid the reasoning of open-domain VQA.",
"Each knowledge triple is treated as a three-word SVO $(subject, verb, object)$ phrase, and embedded into a feature space by feeding its word-embedding through an RNN architecture. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b"
]
},
{
"annotation_id": [
"24b626f9d71e792e97ebc66b1de36f275ec40754",
"5db5821f41aacf6d80565bf98cb6953b6b9b322f"
],
"answer": [
{
"evidence": [
"Given an image, we apply the Fast-RCNN BIBREF27 to detect the visual objects of the input image, and extract keywords of the corresponding questions with syntax analysis. Based on these information, we propose to learn a mechanism to retrieve the candidate knowledge by querying the large-scale knowledge graph, yielding a subgraph of relevant knowledge to facilitate the question answering. During the past years, a substantial amount of large-scale knowledge bases have been developed, which store common sense and factual knowledge in a machine readable fashion. In general, each piece of structured knowledge is represented as a triple $(subject, rel, object)$ with $subject$ and $object$ being two entities or concepts, and $rel$ corresponding to the specific relationship between them. In this paper, we adopt external knowledge mined from ConceptNet BIBREF28 , an open multilingual knowledge graph containing common-sense relationships between daily words, to aid the reasoning of open-domain VQA."
],
"extractive_spans": [],
"free_form_answer": "ConceptNet, which contains common-sense relationships between daily words",
"highlighted_evidence": [
" In this paper, we adopt external knowledge mined from ConceptNet BIBREF28 , an open multilingual knowledge graph containing common-sense relationships between daily words, to aid the reasoning of open-domain VQA."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Given an image, we apply the Fast-RCNN BIBREF27 to detect the visual objects of the input image, and extract keywords of the corresponding questions with syntax analysis. Based on these information, we propose to learn a mechanism to retrieve the candidate knowledge by querying the large-scale knowledge graph, yielding a subgraph of relevant knowledge to facilitate the question answering. During the past years, a substantial amount of large-scale knowledge bases have been developed, which store common sense and factual knowledge in a machine readable fashion. In general, each piece of structured knowledge is represented as a triple $(subject, rel, object)$ with $subject$ and $object$ being two entities or concepts, and $rel$ corresponding to the specific relationship between them. In this paper, we adopt external knowledge mined from ConceptNet BIBREF28 , an open multilingual knowledge graph containing common-sense relationships between daily words, to aid the reasoning of open-domain VQA."
],
"extractive_spans": [
"an open multilingual knowledge graph containing common-sense relationships between daily words"
],
"free_form_answer": "",
"highlighted_evidence": [
"In this paper, we adopt external knowledge mined from ConceptNet BIBREF28 , an open multilingual knowledge graph containing common-sense relationships between daily words, to aid the reasoning of open-domain VQA."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c7d4a630661cd719ea504dba56393f78278b296b",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"What are the baselines for this paper?",
"What VQA datasets are used for evaluating this task? ",
"How do they model external knowledge? ",
"What type of external knowledge has been used for this paper? "
],
"question_id": [
"74acaa205a5998af4ad7edbed66837a6f2b5c58b",
"cfcf94b81589e7da215b4f743a3f8de92a6dda7a",
"d147117ef24217c43252d917d45dff6e66ff807c",
"1a2b69dfa81dfeadd67b133229476086f2cc74a8"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"research",
"research",
"research",
"research"
]
} | {
"caption": [
"Figure 1: A real case of open-domain visual question answering based on internal representation of an image and external knowledge. Recent success of deep learning provides a good opportunity to implement the closed-domain VQAs, but it is incapable of answering open-domain questions when external knowledge is needed. In this example, the system should recognize the giraffes and then query the knowledge bases for the main diet of giraffes. In this paper, we propose to explore the external knowledge along with the image representation based on a dynamic memory network, which allows a multi-hop reasoning over several facts.",
"Figure 2: Overall architecture of our proposed KDMN network. Given an image and the corresponding questions, the visual objects of the input image and key words of the corresponding questions are extracted using the Fast-RCNN and syntax analysis, respectively. Afterwards, we propose to assess the importance of entities in the knowledge graph and retrieve the most informative context-relevant knowledge triples, which are fed to the memory network after embedding the candidate knowledge into a continuous feature space. Consequentially, we integrate the representations of images and extracted knowledge into a common space, and store the features in a dynamic memory module. The open-domain VQA can be implemented by interpreting the joint representation under attention mechanism.",
"Figure 3: Example results on the Visual7W dataset for (close-domain) VQA tasks. Given an image and the corresponding question, we report the corresponding answers obtained via our algorithm. Specifically, pr denotes the predicted probability generated by our model, and pr-NoKG is the predicted probability by the ablative model of KDMN-NoKG. We make the predicted choices bold accordingly. The external knowledge triples are also provided if they are retrieved to support the joint reasoning by our method automatically. As is observed, the external knowledge is essential even for the conventional VQA tasks, e.g., in the 5th example, it is much easier to infer the place accordingly by incorporating external knowledge when a giraffe is recognized.",
"Figure 4: Example results of open-domain visual question answering based on our proposed knowledge-incorporate dynamic memory network. Given an images, we automatically generate the open-domain question-answer pair by considering of the image content and the relevant background knowledge. We report the corresponding answers obtained via our algorithm. Specifically, pr denotes the predicted probability generated by our model, and pr-NoKG is the predicted probability by the ablative model of KDMN-NoKG. The results demonstrate that external knowledge plays an essential role in answer open-questions. A system is incapable of inferring the food in the 1st example and the stuff prices in the 3rd example.",
"Table 1: Accuracy on Visual7W dataset",
"Table 2: Accuracy on our generated open-domain dataset.",
"Table 3: Templates for generate open-domain question-answer pairs. {visual} is the KG entity representing visual object. {other} is the KG entity that has a connection with {visual}. We take {visual} as the generated ground-truth answer.",
"Figure 5: Examples from our generated open-domain dataset. We mark ground-truth answers green. The bottom KG triples just provide insights into the generation process, and will not be included in the dataset. The candidate answers can be quite confusing in some questions, e.g., in the 1st example, the ground-truth “candle” appearing in the image can be used for light, while “cake” also appears in the image but cannot be used for light, “sun” can also be used for light but not appear in the image."
],
"file": [
"1-Figure1-1.png",
"3-Figure2-1.png",
"7-Figure3-1.png",
"8-Figure4-1.png",
"8-Table1-1.png",
"8-Table2-1.png",
"11-Table3-1.png",
"11-Figure5-1.png"
]
} | [
"What are the baselines for this paper?",
"What VQA datasets are used for evaluating this task? ",
"How do they model external knowledge? ",
"What type of external knowledge has been used for this paper? "
] | [
[
"1712.00733-Implementation Details-2",
"1712.00733-Implementation Details-5",
"1712.00733-Implementation Details-3",
"1712.00733-Implementation Details-6",
"1712.00733-Implementation Details-4"
],
[
"1712.00733-Experiments-0",
"1712.00733-Datasets-0"
],
[
"1712.00733-Overview-2",
"1712.00733-Our Proposal-2"
],
[
"1712.00733-Overview-2"
]
] | [
"LSTM with attention, memory augmented model, ",
"Visual7W and an automatically constructed open-domain VQA dataset",
"Word embeddings from knowledge triples (subject, rel, object) from ConceptNet are fed to an RNN",
"ConceptNet, which contains common-sense relationships between daily words"
] | 404 |
1905.07894 | Abusive Language Detection in Online Conversations by Combining Content- and Graph-based Features | In recent years, online social networks have allowed worldwide users to meet and discuss. As guarantors of these communities, the administrators of these platforms must prevent users from adopting inappropriate behaviors. This verification task, mainly done by humans, is more and more difficult due to the ever-growing amount of messages to check. Methods have been proposed to automate this moderation process, mainly by providing approaches based on the textual content of the exchanged messages. Recent work has also shown that characteristics derived from the structure of conversations, in the form of conversational graphs, can help detect these abusive messages. In this paper, we propose to take advantage of both sources of information by proposing fusion methods integrating content- and graph-based features. Our experiments on raw chat logs show that both the content of the messages and their dynamics within a conversation contain partially complementary information, allowing performance improvements on an abusive message classification task with a final F-measure of 93.26%. | {
"paragraphs": [
[
"In recent years, online social networks have allowed world-wide users to meet and discuss. As guarantors of these communities, the administrators of these platforms must prevent users from adopting inappropriate behaviors. This verification task, mainly done by humans, is more and more difficult due to the ever growing amount of messages to check. Methods have been proposed to automatize this moderation process, mainly by providing approaches based on the textual content of the exchanged messages. Recent work has also shown that characteristics derived from the structure of conversations, in the form of conversational graphs, can help detecting these abusive messages. In this paper, we propose to take advantage of both sources of information by proposing fusion methods integrating content- and graph-based features. Our experiments on raw chat logs show that the content of the messages, but also of their dynamics within a conversation contain partially complementary information, allowing performance improvements on an abusive message classification task with a final INLINEFORM0 -measure of 93.26%."
],
[
"Automatic abuse detection, Content analysis, Conversational graph, Online conversations, Social networks "
],
[
"Internet widely impacted the way we communicate. Online communities, in particular, have grown to become important places for interpersonal communications. They get more and more attention from companies to advertise their products or from governments interested in monitoring public discourse. Online communities come in various shapes and forms, but they are all exposed to abusive behavior. The definition of what exactly is considered as abuse depends on the community, but generally includes personal attacks, as well as discrimination based on race, religion or sexual orientation.",
"Abusive behavior is a risk, as it is likely to make important community members leave, therefore endangering the community, and even trigger legal issues in some countries. Moderation consists in detecting users who act abusively, and in taking action against them. Currently this moderation work is mainly a manual process, and since it implies high human and financial costs, companies have a keen interest in its automation. One way of doing so is to consider this task as a classification problem consisting in automatically determining if a user message is abusive or not.",
"A number of works have tackled this problem, or related ones, in the literature. Most of them focus only on the content of the targeted message to detect abuse or similar properties. For instance, BIBREF0 applies this principle to detect hostility, BIBREF1 for cyberbullying, and BIBREF2 for offensive language. These approaches rely on a mix of standard NLP features and manually crafted application-specific resources (e.g. linguistic rules). We also proposed a content-based method BIBREF3 using a wide array of language features (Bag-of-Words, INLINEFORM0 - INLINEFORM1 scores, sentiment scores). Other approaches are more machine learning intensive, but require larger amounts of data. Recently, BIBREF4 created three datasets containing individual messages collected from Wikipedia discussion pages, annotated for toxicity, personal attacks and aggression, respectively. They have been leveraged in recent works to train Recursive Neural Network operating on word embeddings and character INLINEFORM2 -gram features BIBREF5 , BIBREF6 . However, the quality of these direct content-based approaches is very often related to the training data used to learn abuse detection models. In the case of online social networks, the great variety of users, including very different language registers, spelling mistakes, as well as intentional users obfuscation, makes it almost impossible to have models robust enough to be applied in all cases. BIBREF7 have then shown that it is very easy to bypass automatic toxic comment detection systems by making the abusive content difficult to detect (intentional spelling mistakes, uncommon negatives...).",
"Because the reactions of other users to an abuse case are completely beyond the abuser's control, some authors consider the content of messages occurring around the targeted message, instead of focusing only on the targeted message itself. For instance, BIBREF8 use features derived from the sentences neighboring a given message to detect harassment on the Web. BIBREF9 take advantage of user features such as the gender, the number of in-game friends or the number of daily logins to detect abuse in the community of an online game. In our previous work BIBREF10 , we proposed a radically different method that completely ignores the textual content of the messages, and relies only on a graph-based modeling of the conversation. This is the only graph-based approach ignoring the linguistic content proposed in the context of abusive messages detection. Our conversational network extraction process is inspired from other works leveraging such graphs for other purposes: chat logs BIBREF11 or online forums BIBREF12 interaction modeling, user group detection BIBREF13 . Additional references on abusive message detection and conversational network modeling can be found in BIBREF10 .",
"In this paper, based on the assumption that the interactions between users and the content of the exchanged messages convey different information, we propose a new method to perform abuse detection while leveraging both sources. For this purpose, we take advantage of the content- BIBREF14 and graph-based BIBREF10 methods that we previously developed. We propose three different ways to combine them, and compare their performance on a corpus of chat logs originating from the community of a French multiplayer online game. We then perform a feature study, finding the most informative ones and discussing their role. Our contribution is twofold: the exploration of fusion methods, and more importantly the identification of discriminative features for this problem.",
"The rest of this article is organized as follows. In Section SECREF4 , we describe the methods and strategies used in this work. In Section SECREF5 we present our dataset, the experimental setup we use for this classification task, and the performances we obtained. Finally, we summarize our contributions in Section SECREF6 and present some perspectives for this work."
],
[
"In this section, we summarize the content-based method from BIBREF14 (Section SECREF2 ) and the graph-based method from BIBREF10 (Section SECREF3 ). We then present the fusion method proposed in this paper, aiming at taking advantage of both sources of information (Section SECREF6 ). Figure FIGREF1 shows the whole process, and is discussed through this section."
],
[
"This method corresponds to the bottom-left part of Figure FIGREF1 (in green). It consists in extracting certain features from the content of each considered message, and to train a Support Vector Machine (SVM) classifier to distinguish abusive (Abuse class) and non-abusive (Non-abuse class) messages BIBREF14 . These features are quite standard in Natural Language Processing (NLP), so we only describe them briefly here.",
"We use a number of morphological features. We use the message length, average word length, and maximal word length, all expressed in number of characters. We count the number of unique characters in the message. We distinguish between six classes of characters (letters, digits, punctuation, spaces, and others) and compute two features for each one: number of occurrences, and proportion of characters in the message. We proceed similarly with capital letters. Abusive messages often contain a lot of copy/paste. To deal with such redundancy, we apply the Lempel–Ziv–Welch (LZW) compression algorithm BIBREF15 to the message and take the ratio of its raw to compress lengths, expressed in characters. Abusive messages also often contain extra-long words, which can be identified by collapsing the message: extra occurrences of letters repeated more than two times consecutively are removed. For instance, “looooooool” would be collapsed to “lool”. We compute the difference between the raw and collapsed message lengths.",
"We also use language features. We count the number of words, unique words and bad words in the message. For the latter, we use a predefined list of insults and symbols considered as abusive, and we also count them in the collapsed message. We compute two overall INLINEFORM0 – INLINEFORM1 scores corresponding to the sums of the standard INLINEFORM2 – INLINEFORM3 scores of each individual word in the message. One is processed relatively to the Abuse class, and the other to the Non-abuse class. We proceed similarly with the collapsed message. Finally, we lower-case the text and strip punctuation, in order to represent the message as a basic Bag-of-Words (BoW). We then train a Naive Bayes classifier to detect abuse using this sparse binary vector (as represented in the very bottom part of Figure FIGREF1 ). The output of this simple classifier is then used as an input feature for the SVM classifier."
],
[
"This method corresponds to the top-left part of Figure FIGREF1 (in red). It completely ignores the content of the messages, and only focuses on the dynamics of the conversation, based on the interactions between its participants BIBREF10 . It is three-stepped: 1) extracting a conversational graph based on the considered message as well as the messages preceding and/or following it; 2) computing the topological measures of this graph to characterize its structure; and 3) using these values as features to train an SVM to distinguish between abusive and non-abusive messages. The vertices of the graph model the participants of the conversation, whereas its weighted edges represent how intensely they communicate.",
"The graph extraction is based on a number of concepts illustrated in Figure FIGREF4 , in which each rectangle represents a message. The extraction process is restricted to a so-called context period, i.e. a sub-sequence of messages including the message of interest, itself called targeted message and represented in red in Figure FIGREF4 . Each participant posting at least one message during this period is modeled by a vertex in the produced conversational graph. A mobile window is slid over the whole period, one message at a time. At each step, the network is updated either by creating new links, or by updating the weights of existing ones. This sliding window has a fixed length expressed in number of messages, which is derived from ergonomic constraints relative to the online conversation platform studied in Section SECREF5 . It allows focusing on a smaller part of the context period. At a given time, the last message of the window (in blue in Figure FIGREF4 ) is called current message and its author current author. The weight update method assumes that the current message is aimed at the authors of the other messages present in the window, and therefore connects the current author to them (or strengthens their weights if the edge already exists). It also takes chronology into account by favoring the most recent authors in the window. Three different variants of the conversational network are extracted for one given targeted message: the Before network is based on the messages posted before the targeted message, the After network on those posted after, and the Full network on the whole context period. Figure FIGREF5 shows an example of such networks obtained for a message of the corpus described in Section SECREF7 .",
"Once the conversational networks have been extracted, they must be described through numeric values in order to feed the SVM classifier. This is done through a selection of standard topological measures allowing to describe a graph in a number of distinct ways, focusing on different scales and scopes. The scale denotes the nature of the characterized entity. In this work, the individual vertex and the whole graph are considered. When considering a single vertex, the measure focuses on the targeted author (i.e. the author of the targeted message). The scope can be either micro-, meso- or macroscopic: it corresponds to the amount of information considered by the measure. For instance, the graph density is microscopic, the modularity is mesoscopic, and the diameter is macroscopic. All these measures are computed for each graph, and allow describing the conversation surrounding the message of interest. The SVM is then trained using these values as features. In this work, we use exactly the same measures as in BIBREF10 ."
],
[
"We now propose a new method seeking to take advantage of both previously described ones. It is based on the assumption that the content- and graph-based features convey different information. Therefore, they could be complementary, and their combination could improve the classification performance. We experiment with three different fusion strategies, which are represented in the right-hand part of Figure FIGREF1 .",
"The first strategy follows the principle of Early Fusion. It consists in constituting a global feature set containing all content- and graph-based features from Sections SECREF2 and SECREF3 , then training a SVM directly using these features. The rationale here is that the classifier has access to the whole raw data, and must determine which part is relevant to the problem at hand.",
"The second strategy is Late Fusion, and we proceed in two steps. First, we apply separately both methods described in Sections SECREF2 and SECREF3 , in order to obtain two scores corresponding to the output probability of each message to be abusive given by the content- and graph-based methods, respectively. Second, we fetch these two scores to a third SVM, trained to determine if a message is abusive or not. This approach relies on the assumption that these scores contain all the information the final classifier needs, and not the noise present in the raw features.",
"Finally, the third fusion strategy can be considered as Hybrid Fusion, as it seeks to combine both previous proposed ones. We create a feature set containing the content- and graph-based features, like with Early Fusion, but also both scores used in Late Fusion. This whole set is used to train a new SVM. The idea is to check whether the scores do not convey certain useful information present in the raw features, in which case combining scores and features should lead to better results."
],
[
"In this section, we first describe our dataset and the experimental protocol followed in our experiments (Section SECREF7 ). We then present and discuss our results, in terms of classification performance (Sections SECREF9 ) and feature selection (Section SECREF11 )."
],
[
"The dataset is the same as in our previous publications BIBREF14 , BIBREF10 . It is a proprietary database containing 4,029,343 messages in French, exchanged on the in-game chat of SpaceOrigin, a Massively Multiplayer Online Role-Playing Game (MMORPG). Among them, 779 have been flagged as being abusive by at least one user in the game, and confirmed as such by a human moderator. They constitute what we call the Abuse class. Some inconsistencies in the database prevent us from retrieving the context of certain messages, which we remove from the set. After this cleaning, the Abuse class contains 655 messages. In order to keep a balanced dataset, we further extract the same number of messages at random from the ones that have not been flagged as abusive. This constitutes our Non-abuse class. Each message, whatever its class, is associated to its surrounding context (i.e. messages posted in the same thread).",
"The graph extraction method used to produce the graph-based features requires to set certain parameters. We use the values matching the best performance, obtained during the greedy search of the parameter space performed in BIBREF10 . In particular, regarding the two most important parameters (see Section SECREF3 ), we fix the context period size to 1,350 messages and the sliding window length to 10 messages. Implementation-wise, we use the iGraph library BIBREF16 to extract the conversational networks and process the corresponding features. We use the Sklearn toolkit BIBREF17 to get the text-based features. We use the SVM classifier implemented in Sklearn under the name SVC (C-Support Vector Classification). Because of the relatively small dataset, we set-up our experiments using a 10-fold cross-validation. Each fold is balanced between the Abuse and Non-abuse classes, 70% of the dataset being used for training and 30% for testing."
],
[
"Table TABREF10 presents the Precision, Recall and INLINEFORM0 -measure scores obtained on the Abuse class, for both baselines (Content-based BIBREF14 and Graph-based BIBREF10 ) and all three proposed fusion strategies (Early Fusion, Late Fusion and Hybrid Fusion). It also shows the number of features used to perform the classification, the time required to compute the features and perform the cross validation (Total Runtime) and to compute one message in average (Average Runtime). Note that Late Fusion has only 2 direct inputs (content- and graph-based SVMs), but these in turn have their own inputs, which explains the values displayed in the table.",
"Our first observation is that we get higher INLINEFORM0 -measure values compared to both baselines when performing the fusion, independently from the fusion strategy. This confirms what we expected, i.e. that the information encoded in the interactions between the users differs from the information conveyed by the content of the messages they exchange. Moreover, this shows that both sources are at least partly complementary, since the performance increases when merging them. On a side note, the correlation between the score of the graph- and content-based classifiers is 0.56, which is consistent with these observations.",
"Next, when comparing the fusion strategies, it appears that Late Fusion performs better than the others, with an INLINEFORM0 -measure of 93.26. This is a little bit surprising: we were expecting to get superior results from the Early Fusion, which has direct access to a much larger number of raw features (488). By comparison, the Late Fusion only gets 2 features, which are themselves the outputs of two other classifiers. This means that the Content-Based and Graph-Based classifiers do a good work in summarizing their inputs, without loosing much of the information necessary to efficiently perform the classification task. Moreover, we assume that the Early Fusion classifier struggles to estimate an appropriate model when dealing with such a large number of features, whereas the Late Fusion one benefits from the pre-processing performed by its two predecessors, which act as if reducing the dimensionality of the data. This seems to be confirmed by the results of the Hybrid Fusion, which produces better results than the Early Fusion, but is still below the Late Fusion. This point could be explored by switching to classification algorithm less sensitive to the number of features. Alternatively, when considering the three SVMs used for the Late Fusion, one could see a simpler form of a very basic Multilayer Perceptron, in which each neuron has been trained separately (without system-wide backpropagation). This could indicate that using a regular Multilayer Perceptron directly on the raw features could lead to improved results, especially if enough training data is available.",
"Regarding runtime, the graph-based approach takes more than 8 hours to run for the whole corpus, mainly because of the feature computation step. This is due to the number of features, and to the compute-intensive nature of some of them. The content-based approach is much faster, with a total runtime of less than 1 minute, for the exact opposite reasons. Fusion methods require to compute both content- and graph-based features, so they have the longest runtime."
],
[
"We now want to identify the most discriminative features for all three fusion strategies. We apply an iterative method based on the Sklearn toolkit, which allows us to fit a linear kernel SVM to the dataset and provide a ranking of the input features reflecting their importance in the classification process. Using this ranking, we identify the least discriminant feature, remove it from the dataset, and train a new model with the remaining features. The impact of this deletion is measured by the performance difference, in terms of INLINEFORM0 -measure. We reiterate this process until only one feature remains. We call Top Features (TF) the minimal subset of features allowing to reach INLINEFORM1 of the original performance (when considering the complete feature set).",
"We apply this process to both baselines and all three fusion strategies. We then perform a classification using only their respective TF. The results are presented in Table TABREF10 . Note that the Late Fusion TF performance is obtained using the scores produced by the SVMs trained on Content-based TF and Graph-based TF. These are also used as features when computing the TF for Hybrid Fusion TF (together with the raw content- and graph-based features). In terms of classification performance, by construction, the methods are ranked exactly like when considering all available features.",
"The Top Features obtained for each method are listed in Table TABREF12 . The last 4 columns precise which variants of the graph-based features are concerned. Indeed, as explained in Section SECREF3 , most of these topological measures can handle/ignore edge weights and/or edge directions, can be vertex- or graph-focused, and can be computed for each of the three types of networks (Before, After and Full).",
"There are three Content-Based TF. The first is the Naive Bayes prediction, which is not surprising as it comes from a fully fledged classifier processing BoWs. The second is the INLINEFORM0 - INLINEFORM1 score computed over the Abuse class, which shows that considering term frequencies indeed improve the classification performance. The third is the Capital Ratio (proportion of capital letters in the comment), which is likely to be caused by abusive message tending to be shouted, and therefore written in capitals. The Graph-Based TF are discussed in depth in our previous article BIBREF10 . To summarize, the most important features help detecting changes in the direct neighborhood of the targeted author (Coreness, Strength), in the average node centrality at the level of the whole graph in terms of distance (Closeness), and in the general reciprocity of exchanges between users (Reciprocity).",
"We obtain 4 features for Early Fusion TF. One is the Naive Bayes feature (content-based), and the other three are topological measures (graph-based features). Two of the latter correspond to the Corenessof the targeted author, computed for the Before and After graphs. The third topological measure is his/her Eccentricity. This reflects important changes in the interactions around the targeted author. It is likely caused by angry users piling up on the abusive user after he has posted some inflammatory remark. For Hybrid Fusion TF, we also get 4 features, but those include in first place both SVM outputs from the content- and graph-based classifiers. Those are completed by 2 graph-based features, including Strength (also found in the Graph-based and Late Fusion TF) and Coreness (also found in the Graph-based, Early Fusion and Late Fusion TF).",
"Besides a better understanding of the dataset and classification process, one interesting use of the TF is that they can allow decreasing the computational cost of the classification. In our case, this is true for all methods: we can retain 97% of the performance while using only a handful of features instead of hundreds. For instance, with the Late Fusion TF, we need only 3% of the total Late Fusion runtime."
],
[
"In this article, we tackle the problem of automatic abuse detection in online communities. We take advantage of the methods that we previously developed to leverage message content BIBREF3 and interactions between users BIBREF10 , and create a new method using both types of information simultaneously. We show that the features extracted from our content- and graph-based approaches are complementary, and that combining them allows to sensibly improve the results up to 93.26 ( INLINEFORM0 -measure). One limitation of our method is the computational time required to extract certain features. However, we show that using only a small subset of relevant features allows to dramatically reduce the processing time (down to 3%) while keeping more than 97% of the original performance.",
"Another limitation of our work is the small size of our dataset. We must find some other corpora to test our methods at a much higher scale. However, all the available datasets are composed of isolated messages, when we need threads to make the most of our approach. A solution could be to start from datasets such as the Wikipedia-based corpus proposed by BIBREF4 , and complete them by reconstructing the original conversations containing the annotated messages. This could also be the opportunity to test our methods on an other language than French. Our content-based method may be impacted by this change, but this should not be the case for the graph-based method, as it is independent from the content (and therefore the language). Besides language, a different online community is likely to behave differently from the one we studied before. In particular, its members could react differently differently to abuse. The Wikipedia dataset would therefore allow assessing how such cultural differences affect our classifiers, and identifying which observations made for Space Origin still apply to Wikipedia."
]
],
"section_name": [
null,
"Keywords:",
"Introduction",
"Methods",
"Content-Based Method",
"Graph-Based Method",
"Fusion",
"Experiments",
"Experimental protocol",
"Classification Performance",
"Feature Study",
"Conclusion and Perspectives"
]
} | {
"answers": [
{
"annotation_id": [
"ba7b83588c27681325033ae8203679dbd1280336",
"ebba2332480b3e24329369c115b1c971c2e6845d"
],
"answer": [
{
"evidence": [
"In this paper, based on the assumption that the interactions between users and the content of the exchanged messages convey different information, we propose a new method to perform abuse detection while leveraging both sources. For this purpose, we take advantage of the content- BIBREF14 and graph-based BIBREF10 methods that we previously developed. We propose three different ways to combine them, and compare their performance on a corpus of chat logs originating from the community of a French multiplayer online game. We then perform a feature study, finding the most informative ones and discussing their role. Our contribution is twofold: the exploration of fusion methods, and more importantly the identification of discriminative features for this problem."
],
"extractive_spans": [],
"free_form_answer": "They combine content- and graph-based methods in new ways.",
"highlighted_evidence": [
"For this purpose, we take advantage of the content- BIBREF14 and graph-based BIBREF10 methods that we previously developed. We propose three different ways to combine them, and compare their performance on a corpus of chat logs originating from the community of a French multiplayer online game. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Fusion",
"We now propose a new method seeking to take advantage of both previously described ones. It is based on the assumption that the content- and graph-based features convey different information. Therefore, they could be complementary, and their combination could improve the classification performance. We experiment with three different fusion strategies, which are represented in the right-hand part of Figure FIGREF1 .",
"The first strategy follows the principle of Early Fusion. It consists in constituting a global feature set containing all content- and graph-based features from Sections SECREF2 and SECREF3 , then training a SVM directly using these features. The rationale here is that the classifier has access to the whole raw data, and must determine which part is relevant to the problem at hand.",
"The second strategy is Late Fusion, and we proceed in two steps. First, we apply separately both methods described in Sections SECREF2 and SECREF3 , in order to obtain two scores corresponding to the output probability of each message to be abusive given by the content- and graph-based methods, respectively. Second, we fetch these two scores to a third SVM, trained to determine if a message is abusive or not. This approach relies on the assumption that these scores contain all the information the final classifier needs, and not the noise present in the raw features.",
"Finally, the third fusion strategy can be considered as Hybrid Fusion, as it seeks to combine both previous proposed ones. We create a feature set containing the content- and graph-based features, like with Early Fusion, but also both scores used in Late Fusion. This whole set is used to train a new SVM. The idea is to check whether the scores do not convey certain useful information present in the raw features, in which case combining scores and features should lead to better results."
],
"extractive_spans": [
"Hybrid Fusion",
"Late Fusion",
"Early Fusion"
],
"free_form_answer": "",
"highlighted_evidence": [
"Fusion\nWe now propose a new method seeking to take advantage of both previously described ones. It is based on the assumption that the content- and graph-based features convey different information. Therefore, they could be complementary, and their combination could improve the classification performance. We experiment with three different fusion strategies, which are represented in the right-hand part of Figure FIGREF1 .\n\nThe first strategy follows the principle of Early Fusion. It consists in constituting a global feature set containing all content- and graph-based features from Sections SECREF2 and SECREF3 , then training a SVM directly using these features. The rationale here is that the classifier has access to the whole raw data, and must determine which part is relevant to the problem at hand.\n\nThe second strategy is Late Fusion, and we proceed in two steps. First, we apply separately both methods described in Sections SECREF2 and SECREF3 , in order to obtain two scores corresponding to the output probability of each message to be abusive given by the content- and graph-based methods, respectively. Second, we fetch these two scores to a third SVM, trained to determine if a message is abusive or not. This approach relies on the assumption that these scores contain all the information the final classifier needs, and not the noise present in the raw features.\n\nFinally, the third fusion strategy can be considered as Hybrid Fusion, as it seeks to combine both previous proposed ones. We create a feature set containing the content- and graph-based features, like with Early Fusion, but also both scores used in Late Fusion. This whole set is used to train a new SVM. The idea is to check whether the scores do not convey certain useful information present in the raw features, in which case combining scores and features should lead to better results."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"25aa3219e5b410312a44c588df52be7c6699197e",
"2e161f58164d23943dbd69a59eb23d6161f9ae9b"
],
"answer": [
{
"evidence": [
"Besides a better understanding of the dataset and classification process, one interesting use of the TF is that they can allow decreasing the computational cost of the classification. In our case, this is true for all methods: we can retain 97% of the performance while using only a handful of features instead of hundreds. For instance, with the Late Fusion TF, we need only 3% of the total Late Fusion runtime."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"Besides a better understanding of the dataset and classification process, one interesting use of the TF is that they can allow decreasing the computational cost of the classification. In our case, this is true for all methods: we can retain 97% of the performance while using only a handful of features instead of hundreds. For instance, with the Late Fusion TF, we need only 3% of the total Late Fusion runtime."
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"FLOAT SELECTED: TABLE 1 | Comparison of the performances obtained with the methods (Content-based, Graph-based, Fusion) and their subsets of Top Features (TF)."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"FLOAT SELECTED: TABLE 1 | Comparison of the performances obtained with the methods (Content-based, Graph-based, Fusion) and their subsets of Top Features (TF)."
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"9d63ee5c7a06a081d980db88609de7f2bba93620",
"ae5c292af40a3a1ae257de3407dbfd18500bd5d6"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: FIGURE 1 | Representation of our processing pipeline. Existing methods refers to our previous work described in Papegnies et al. (2017b) (content-based method) and Papegnies et al. (2019) (graph-based method), whereas the contribution presented in this article appears on the right side (fusion strategies). Figure available at 10.6084/m9.figshare.7442273 under CC-BY license."
],
"extractive_spans": [],
"free_form_answer": "Early fusion, late fusion, hybrid fusion.",
"highlighted_evidence": [
"FLOAT SELECTED: FIGURE 1 | Representation of our processing pipeline. Existing methods refers to our previous work described in Papegnies et al. (2017b) (content-based method) and Papegnies et al. (2019) (graph-based method), whereas the contribution presented in this article appears on the right side (fusion strategies). Figure available at 10.6084/m9.figshare.7442273 under CC-BY license."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We now propose a new method seeking to take advantage of both previously described ones. It is based on the assumption that the content- and graph-based features convey different information. Therefore, they could be complementary, and their combination could improve the classification performance. We experiment with three different fusion strategies, which are represented in the right-hand part of Figure FIGREF1 .",
"The first strategy follows the principle of Early Fusion. It consists in constituting a global feature set containing all content- and graph-based features from Sections SECREF2 and SECREF3 , then training a SVM directly using these features. The rationale here is that the classifier has access to the whole raw data, and must determine which part is relevant to the problem at hand.",
"The second strategy is Late Fusion, and we proceed in two steps. First, we apply separately both methods described in Sections SECREF2 and SECREF3 , in order to obtain two scores corresponding to the output probability of each message to be abusive given by the content- and graph-based methods, respectively. Second, we fetch these two scores to a third SVM, trained to determine if a message is abusive or not. This approach relies on the assumption that these scores contain all the information the final classifier needs, and not the noise present in the raw features.",
"Finally, the third fusion strategy can be considered as Hybrid Fusion, as it seeks to combine both previous proposed ones. We create a feature set containing the content- and graph-based features, like with Early Fusion, but also both scores used in Late Fusion. This whole set is used to train a new SVM. The idea is to check whether the scores do not convey certain useful information present in the raw features, in which case combining scores and features should lead to better results."
],
"extractive_spans": [
"Early Fusion",
"Late Fusion",
"Hybrid Fusion"
],
"free_form_answer": "",
"highlighted_evidence": [
"We experiment with three different fusion strategies, which are represented in the right-hand part of Figure FIGREF1 .\n\nThe first strategy follows the principle of Early Fusion. It consists in constituting a global feature set containing all content- and graph-based features from Sections SECREF2 and SECREF3 , then training a SVM directly using these features. The rationale here is that the classifier has access to the whole raw data, and must determine which part is relevant to the problem at hand.\n\nThe second strategy is Late Fusion, and we proceed in two steps. First, we apply separately both methods described in Sections SECREF2 and SECREF3 , in order to obtain two scores corresponding to the output probability of each message to be abusive given by the content- and graph-based methods, respectively. Second, we fetch these two scores to a third SVM, trained to determine if a message is abusive or not. This approach relies on the assumption that these scores contain all the information the final classifier needs, and not the noise present in the raw features.\n\nFinally, the third fusion strategy can be considered as Hybrid Fusion, as it seeks to combine both previous proposed ones. We create a feature set containing the content- and graph-based features, like with Early Fusion, but also both scores used in Late Fusion. This whole set is used to train a new SVM. The idea is to check whether the scores do not convey certain useful information present in the raw features, in which case combining scores and features should lead to better results."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"ae28fdc0fc08ac703741dfae8201f79c1d8b9734",
"b58ef015fa26a42d612405868e18ba4c4c4c23ad"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: TABLE 2 | Top features obtained for our 5 methods."
],
"extractive_spans": [],
"free_form_answer": "Coreness Score, PageRank Centrality, Strength Centrality, Vertex Count, Closeness Centrality, Authority Score, Hub Score, Reciprocity, Closeness Centrality",
"highlighted_evidence": [
"FLOAT SELECTED: TABLE 2 | Top features obtained for our 5 methods."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The Top Features obtained for each method are listed in Table TABREF12 . The last 4 columns precise which variants of the graph-based features are concerned. Indeed, as explained in Section SECREF3 , most of these topological measures can handle/ignore edge weights and/or edge directions, can be vertex- or graph-focused, and can be computed for each of the three types of networks (Before, After and Full).",
"FLOAT SELECTED: TABLE 2 | Top features obtained for our 5 methods."
],
"extractive_spans": [],
"free_form_answer": "Top graph based features are: Coreness Score, PageRank Centrality, Strength Centrality, Vertex Count, Closeness Centrality, Closeness Centrality, Authority Score, Hub Score, Reciprocity and Closeness Centrality.",
"highlighted_evidence": [
"The Top Features obtained for each method are listed in Table TABREF12 . The last 4 columns precise which variants of the graph-based features are concerned.",
"FLOAT SELECTED: TABLE 2 | Top features obtained for our 5 methods."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"What is the proposed algorithm or model architecture?",
"Do they attain state-of-the-art performance?",
"What fusion methods are applied?",
"What graph-based features are considered?"
],
"question_id": [
"6d6a9b855ec70f170b854baab6d8f7e94d3b5614",
"870358f28a520cb4f01e7f5f780d599dfec510b4",
"98aa86ee948096d6fe16c02c1e49920da00e32d4",
"c463136ba9a312a096034c872b5c74b9d58cef95"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"research",
"research",
"research",
"research"
]
} | {
"caption": [
"FIGURE 1 | Representation of our processing pipeline. Existing methods refers to our previous work described in Papegnies et al. (2017b) (content-based method) and Papegnies et al. (2019) (graph-based method), whereas the contribution presented in this article appears on the right side (fusion strategies). Figure available at 10.6084/m9.figshare.7442273 under CC-BY license.",
"FIGURE 2 | Illustration of the main concepts used during network extraction (see text for details). Figure available at 10.6084/m9.figshare.7442273 under CC-BY license.",
"FIGURE 3 | Example of the three types of conversational networks extracted for a given context period: Before (Left), After (Center), and Full (Right). The author of the targeted message is represented in red. Figure available at 10.6084/m9.figshare.7442273 under CC-BY license.",
"TABLE 1 | Comparison of the performances obtained with the methods (Content-based, Graph-based, Fusion) and their subsets of Top Features (TF).",
"TABLE 2 | Top features obtained for our 5 methods."
],
"file": [
"3-Figure1-1.png",
"3-Figure2-1.png",
"4-Figure3-1.png",
"5-Table1-1.png",
"6-Table2-1.png"
]
} | [
"What is the proposed algorithm or model architecture?",
"What fusion methods are applied?",
"What graph-based features are considered?"
] | [
[
"1905.07894-Fusion-0",
"1905.07894-Fusion-3",
"1905.07894-Introduction-4",
"1905.07894-Fusion-2",
"1905.07894-Fusion-1"
],
[
"1905.07894-Fusion-0",
"1905.07894-Fusion-3",
"1905.07894-Fusion-2",
"1905.07894-3-Figure1-1.png",
"1905.07894-Fusion-1"
],
[
"1905.07894-Feature Study-2",
"1905.07894-6-Table2-1.png"
]
] | [
"They combine content- and graph-based methods in new ways.",
"Early fusion, late fusion, hybrid fusion.",
"Top graph based features are: Coreness Score, PageRank Centrality, Strength Centrality, Vertex Count, Closeness Centrality, Closeness Centrality, Authority Score, Hub Score, Reciprocity and Closeness Centrality."
] | 405 |
1610.09722 | Represent, Aggregate, and Constrain: A Novel Architecture for Machine Reading from Noisy Sources | In order to extract event information from text, a machine reading model must learn to accurately read and interpret the ways in which that information is expressed. But it must also, as the human reader must, aggregate numerous individual value hypotheses into a single coherent global analysis, applying global constraints which reflect prior knowledge of the domain. In this work we focus on the task of extracting plane crash event information from clusters of related news articles whose labels are derived via distant supervision. Unlike previous machine reading work, we assume that while most target values will occur frequently in most clusters, they may also be missing or incorrect. We introduce a novel neural architecture to explicitly model the noisy nature of the data and to deal with these aforementioned learning issues. Our models are trained end-to-end and achieve an improvement of more than 12.1 F$_1$ over previous work, despite using far less linguistic annotation. We apply factor graph constraints to promote more coherent event analyses, with belief propagation inference formulated within the transitions of a recurrent neural network. We show this technique additionally improves maximum F$_1$ by up to 2.8 points, resulting in a relative improvement of $50\%$ over the previous state-of-the-art. | {
"paragraphs": [
[
"Recent work in the area of machine reading has focused on learning in a scenario with perfect information. Whether identifying target entities for simple cloze style queries BIBREF0 , BIBREF1 , or reasoning over short passages of artificially generated text BIBREF2 , short stories BIBREF3 , or children's stories BIBREF4 , these systems all assume that the corresponding text is the unique source of information necessary for answering the query – one that not only contains the answer, but does not contain misleading or otherwise contradictory information.",
"For more practical question answering, where an information retrieval (IR) component must first fetch the set of relevant passages, the text sources will be less reliable and this assumption must be discarded. Text sources may vary in terms of their integrity (whether or not they are intentionally misleading or unreliable), their accuracy (as in the case of news events, where a truthful but outdated article may contain incorrect information), or their relevance to the query. These characteristics necessitate not only the creation of high-precision readers, but also the development of effective strategies for aggregating conflicting stories into a single cohesive account of the event.",
"Additionally, while many question answering systems are designed to extract a single answer to a single query, a user may wish to understand many aspects of a single entity or event. In machine reading, this is akin to pairing each text passage with multiple queries. Modeling each query as an independent prediction can lead to analyses that are incoherent, motivating the need to model the dependencies between queries.",
"We study these problems through the development of a novel machine reading architecture, which we apply to the task of event extraction from news cluster data. We propose a modular architecture which decomposes the task into three fundamental sub-problems: (1) representation INLINEFORM0 scoring, (2) aggregation, and (3) global constraints. Each corresponds to an exchangeable component of our model. We explore a number of choices for these components, with our best configuration improving performance by INLINEFORM1 F INLINEFORM2 , a INLINEFORM3 relative improvement, over the previous state-of-the-art."
],
[
"Effective aggregation techniques can be crucial for identifying accurate information from noisy sources. Figure FIGREF1 depicts an example of our problem scenario. An IR component fetches several documents based on the query, and sample sentences are shown for each document. The goal is to extract the correct value, of which there may be many mentions in the text, for each slot. Sentences in INLINEFORM0 express a target slot, the number of fatalities, but the mention corresponds to an incorrect value. This is a common mistake in early news reports. Documents INLINEFORM1 and INLINEFORM2 also express this slot, and with mentions of the correct value, but with less certainty.",
"A model which focuses on a single high-scoring mention, at the expense of breadth, will make an incorrect prediction. In comparison, a model which learns to correctly accumulate evidence for each value across multiple mentions over the entire cluster can identify the correct information, circumventing this problem. Figure FIGREF1 (bottom) shows how this pooling of evidence can produce the correct cluster-level prediction."
],
[
"In this section we describe the three modeling components of our proposed architecture:",
"We begin by defining terminology. A news cluster INLINEFORM0 is a set of documents, INLINEFORM1 , where each document is described by a sequence of words, INLINEFORM2 . A mention is an occurrence of a value in its textual context. For each value INLINEFORM3 , there are potentially many mentions of INLINEFORM4 in the cluster, defined as INLINEFORM5 . These have been annotated in the data using Stanford CoreNLP BIBREF5 ."
],
[
"For each mention INLINEFORM0 we construct a representation INLINEFORM1 of the mention in its context. This representation functions as a general “reading” or encoding of the mention, irrespective of the type of slots for which it will later be considered. This differs from some previous machine reading research where the model provides a query-specific reading of the document, or reads the document multiple times when answering a single query BIBREF0 . As in previous work, an embedding of a mention's context serves as its representation. We construct an embedding matrix INLINEFORM2 , using pre-trained word embeddings, where INLINEFORM3 is the dimensionality of the embeddings and INLINEFORM4 the number of words in the cluster. These are held fixed during training. All mentions are masked and receive the same one-hot vector in place of a pretrained embedding. From this matrix we embed the context using a two-layer convolutional neural network (CNN), with a detailed discussion of the architecture parameters provided in Section SECREF4 . CNNs have been used in a similar manner for a number of information extraction and classification tasks BIBREF6 , BIBREF7 and are capable of producing rich sentence representations BIBREF8 .",
"Having produced a representation INLINEFORM0 for each mention INLINEFORM1 , a slot-specific attention mechanism produces INLINEFORM2 , representing the compatibility of mention INLINEFORM3 with slot INLINEFORM4 . Let INLINEFORM5 be the representation matrix composed of all INLINEFORM6 , and INLINEFORM7 is the index of INLINEFORM8 into INLINEFORM9 . We create a separate embedding, INLINEFORM10 , for each slot INLINEFORM11 , and utilize it to attend over all mentions in the cluster as follows: DISPLAYFORM0 ",
" The mention-level scores reflect an interpretation of the value's encoding with respect to the slot. The softmax normalizes the scores over each slot, supplying the attention, and creating competition between mentions. This encourages the model to attend over mentions with the most characteristic contexts for each slot."
],
[
"For values mentioned repeatedly throughout the news cluster, mention scores must be aggregated to produce a single value-level score. In this section we describe (1) how the right aggregation method can better reflect how the gold labels are applied to the data, (2) how domain knowledge can be incorporated into aggregation, and (3) how aggregation can be used as a dynamic approach to identifying missing information.",
"In the traditional view of distant supervision BIBREF9 , if a mention is found in an external knowledge base it is assumed that the mention is an expression of its role in the knowledge base, and it receives the corresponding label. This assumption does not always hold, and the resulting spurious labels are frequently cited as a source of training noise BIBREF10 , BIBREF11 . However, an aggregation over all mention scores provides a more accurate reflection of how distant supervision labels are applied to the data.",
"If one were to assign a label to each mention and construct a loss using the mention-level scores ( INLINEFORM0 ) directly, it would recreate the hard labeling of the traditional distant supervision training scenario. Instead, we relax the distant supervision assumption by using a loss on the value-level scores ( INLINEFORM1 ), with aggregation to pool beliefs from one to the other. This explicitly models the way in which cluster-wide labels are applied to mentions, and allows for spuriously labeled mentions to receive lower scores, “explaining away” the cluster's label by assigning a higher score to a mention with a more suitable representation.",
"Two natural choices for this aggregation are max and sum. Formally, under max aggregation the value-level scores for a value INLINEFORM0 and slot INLINEFORM1 are computed as: DISPLAYFORM0 ",
"And under sum aggregation: DISPLAYFORM0 ",
"If the most clearly expressed mentions correspond to correct values, max aggregation can be an effective strategy BIBREF10 . If the data set is noisy with numerous spurious mentions, a sum aggregation favoring values which are expressed both clearly and frequently may be the more appropriate choice.",
"The aforementioned aggregation methods combine mention-level scores uniformly, but for many domains one may have prior knowledge regarding which mentions should more heavily contribute to the aggregate score. It is straightforward to augment the proposed aggregation methods with a separate weight INLINEFORM0 for each mention INLINEFORM1 to create, for instance, a weighted sum aggregation: DISPLAYFORM0 ",
"These weights may be learned from data, or they may be heuristically defined based on a priori beliefs. Here we present two such heuristic methods.",
"News articles naturally deviate from the topical event, often including comparisons to related events, and summaries of past incidents. Any such instance introduces additional noise into the system, as the contexts of topical and nontopical mention are often similar. Weighted aggregation provides a natural foothold for incorporating topicality into the model.",
"We assign aggregation weights heuristically with respect to a simple model of discourse. We assume every document begins on topic, and remains so until a sentence mentions a nontopical flight number. This and all successive sentences are considered nontopical, until a sentence reintroduces the topical flight. Mentions in topical sentences receive aggregation weights of INLINEFORM0 , and those in non-topical sentences receive weights of INLINEFORM1 , removing them from consideration completely.",
"In the aftermath of a momentous event, news outlets scramble to release articles, often at the expense of providing accurate information.",
"We hypothesize that the earliest articles in each cluster are the most likely to contain misinformation, which we explore via a measure of information content. We define the information content of an article as the number of correct values which it mentions. Using this measure, we fit a skewed Gaussian distribution over the ordered news articles, assigning INLINEFORM0 , where INLINEFORM1 is the smoothed information content of INLINEFORM2 as drawn from the Gaussian.",
"A difficult machine reading problem unique to noisy text sources, where the correct values may not be present in the cluster, is determining whether to predict any value at all. A common solution for handling such missing values is the use of a threshold, below which the model returns null. However, even a separate threshold for each slot would not fully capture the nature of the problem.",
"Determining whether a value is missing is a trade-off between two factors: (1) how strongly the mention-level scores support a non-null answer, and (2) how much general information regarding that event and that slot is given. We incorporate both factors by extending the definition of INLINEFORM0 and its use in Eq. EQREF9 –Eq. to include not only mentions, but all words. Each non-mention word is treated as a mention of the null value: DISPLAYFORM0 ",
"where INLINEFORM0 is the set of mentions. The resulting null score varies according to both the cluster size and its content. Smaller clusters with fewer documents require less evidence to predict a non-null value, while larger clusters must accumulate more evidence for a particular candidate or a null value will be proposed instead.",
"The exact words contained in the cluster also have an effect. For example, clusters with numerous mentions of killed, died, dead, will have a higher INLINEFORM0 Fatalities INLINEFORM1 , requiring the model to be more confident in its answer for that slot during training. Additionally, this provides a mechanism for driving down INLINEFORM2 when INLINEFORM3 is not strongly associated with INLINEFORM4 ."
],
[
"While the combination of learned representations and aggregation produces an effective system in its own right, its predictions may reflect a lack of world knowledge. For instance, we may want to discourage the model from predicting the same value for multiple slots, as this is not a common occurrence.",
"Following recent work in computer vision which proposes a differentiable interpretation of belief propagation inference BIBREF12 , BIBREF13 , we present a recurrent neural network (RNN) which implements inference under this constraint.",
"A factor graph is a graphical model which factorizes the model function using a bipartite graph, consisting of variables and factors. Variables maintain beliefs over their values, and factors specify scores over configurations of these values for the variables they neighbor.",
"We constrain model output by applying a factor graph model to the INLINEFORM0 scores it produces. The slot INLINEFORM1 taking the value INLINEFORM2 is represented in the factor graph by a Boolean variable INLINEFORM3 . Each INLINEFORM4 is connected to a local factor INLINEFORM5 whose initial potential is derived from INLINEFORM6 . A combinatorial logic factor, Exactly-1 BIBREF14 , is (1) created for each slot, connected across all values, and (2) created for each value, connected across all slots. This is illustrated in Figure FIGREF22 . Each Exactly-1 factor provides a hard constraint over neighboring Boolean variables requiring exactly one variable's value to be true, therefore diminishing the possibility of duplicate predictions during inference.",
"The resulting graph contains cycles, preventing the use of exact message passing inference. We instead treat an RNN as implementing loopy belief propagation (LBP), an iterative approximate message passing inference algorithm. The hidden state of the RNN is the set of variable beliefs, and each round of message updates corresponds to one iteration of LBP, or one recurrence in the RNN.",
"There are two types of messages: messages from variables to factors and messages from factors to variables. The message that a variable INLINEFORM0 sends to a factor INLINEFORM1 (denoted INLINEFORM2 ) is defined recursively w.r.t. to incoming messages from its neighbors INLINEFORM3 as follows: DISPLAYFORM0 ",
"and conveys the information “My other neighbors jointly suggest I have the posterior distribution INLINEFORM0 over my values.” In our RNN formulation of message passing the initial outgoing message for a variable INLINEFORM1 to its neighboring Exactly-1 factors is: DISPLAYFORM0 ",
"where the sigmoid moves the scores into probability space.",
"A message from an Exactly-1 factor to its neighboring variables is calculated as:",
" INLINEFORM0 ",
"All subsequent LBP iterations compute variable messages as in Eq. EQREF24 , incorporating the out-going factor beliefs of the previous iteration."
],
[
"The Stanford Plane Crash Dataset BIBREF15 is a small data set consisting of 80 plane crash events, each paired with a set of related news articles. Of these events, 40 are reserved for training, and 40 for testing, with the average cluster containing more than 2,000 mentions. Gold labels for each cluster are derived from Wikipedia infoboxes and cover up to 15 slots, of which 8 are used in evaluation (Figure TABREF54 ).",
"We follow the same entity normalization procedure as reschke2014, limit the cluster size to the first 200 documents, and further reduce the number of duplicate documents to prevent biases in aggregation. We partition out every fifth document from the training set to be used as development data, primarily for use in an early stopping criterion. We also construct additional clusters from the remaining training documents, and use this to increase the size of the development set."
],
[
"In all experiments we train using adaptive online gradient updates (Adam, see kingma2014). Model architecture and parameter values were tuned on the development set, and are as follows (chosen values in bold):",
"The number of training epochs is determined via early stopping with respect to the model performance on development data. The pre-trained word embeddings are 200-dimensional GLoVe embeddings BIBREF16 ."
],
[
"We evaluate on four categories of architecture:",
"reschke2014 proposed several methods for event extraction in this scenario. We compare against three notable examples drawn from this work:",
"",
"[leftmargin=*]",
"Reschke CRF: a conditional random field model.",
"Reschke Noisy-OR: a sequence tagger with a \"Noisy-OR\" form of aggregation that discourages the model from predicting the same value for multiple slots.",
"Reschke Best: a sequence tagger using a cost-sensitive classifier, optimized with SEARN BIBREF17 , a learning-to-search framework.",
"Each of these models uses features drawn from dependency trees, local context (unigram/part-of-speech features for up to 5 neighboring words), sentence context (bag-of-word/part-of-speech), words/part-of-speech of words occurring within the value, as well as the entity type of the mention itself.",
"",
"The representation and scoring components of our architecture, with an additional slot for predicting a null value. The INLINEFORM0 scores are used when constructing the loss and during decoding. These scores can also be aggregated in a max/sum manner after decoding, but such aggregation is not incorporated during training.",
"",
"Representation, scoring, and aggregation components, trained end-to-end with a cluster-level loss. Null values are predicted as described in Sec. UID18 .",
"",
"kadlec2016 present AS Reader, a state-of-the-art model for cloze-style QA. Like our architecture, AS Reader aggregates mention-level scores, pooling evidence for each answer candidate. However, in cloze-style QA an entity is often mentioned in complementary contexts throughout the text, but are frequently in similar contexts in news cluster extraction.",
"We tailor AS Reader to event extraction to illustrate the importance of choosing an aggregation which reflects how the gold labels are applied to the data. EE-AS Reader is implemented by applying Eq. EQREF9 and Eq. to each document, as opposed to clusters, as documents are analogous to sentences in the cloze-style QA task. We then concatenate the resulting vectors, and apply sum aggregation as before."
],
[
"We evaluate configurations of our proposed architecture across three measures. The first is a modified version of standard precision, recall, and F INLINEFORM0 , as proposed by reschke2014. It deviates from the standard protocol by (1) awarding full recall for any slot when a single predicted value is contained in the gold slot, (2) only penalizing slots for which there are findable gold values in the text, and (3) limiting candidate values to the set of entities proposed by the Stanford NER system and included in the data set release. Eight of the fifteen slots are used in evaluation. Similarly, the second evaluation measure we present is standard precision, recall, and F INLINEFORM1 , specifically for null values.",
"We also evaluate the models using mean reciprocal rank (MRR). When calculating the F INLINEFORM0 -based evaluation measure we decode the model by taking the single highest-scoring value for each slot. However, this does not necessarily reflect the quality of the overall value ranking produced by the model. For this reason we include MRR, defined as: DISPLAYFORM0 ",
"where rank INLINEFORM0 is the rank position of the first correct value for a given cluster and slot pair INLINEFORM1 , and INLINEFORM2 , the number of such pairs, is INLINEFORM3 , the product of the total number of clusters with the total number of predicted slots."
],
[
"Results are presented in Table TABREF44 . In comparison to previous work, our any configuration of our RAC architecture with sum-based aggregation outperforms the best existing systems by a minimum of 9.8 F INLINEFORM0 . In comparison to the various Mention-CNN systems, it is clear that this improvement is not a result of different features or the use of pre-trained word embeddings, or even the representational power of the CNN-based embeddings. Rather, it is attributable to training end-to-end with aggregation and a cluster-level loss function.",
"With respect to aggregation, sum-based methods consistently outperform their max counterparts, indicating that exploiting the redundancy of information in news clusters is beneficial to the task. The topic-based aggregation is statistically significant improvement over standard sum aggregation (p INLINEFORM0 ), and produces the highest performing unconstrained system.",
"Date-based aggregation did not yield a statistically significant improvement over sum aggregation. We hypothesize that the method is sound, but accurate datelines could only be extracted for 31 INLINEFORM0 documents. We did not modify the aggregation weights ( INLINEFORM1 ) for the remaining documents, minimizing the effect of this approach.",
"The EE-AS Reader has the lowest overall performance, which one can attribute to pooling evidence in a manner that is poorly suited to this problem domain. By placing a softmax over each document's beliefs, what is an advantage in the cloze-style QA setting here becomes a liability: the model is forced to predict a value for every slot, for every each document, even when few are truly mentioned."
],
[
"In Table TABREF50 we show the results of incorporating factor graph constraints into our best-performing system. Performing one iteration of LBP inference produces our highest performance, an F INLINEFORM0 of 44.9. This is 14.9 points higher than Reschke's best system, and a statistically significant improvement over the unconstrained model (p INLINEFORM1 ). The improvements persist throughout training, as shown in Figure FIGREF52 .",
"Additional iterations reduce performance. This effect is largely due to the constraint assumption not holding absolutely in the data. For instance, multiple slots can have the null value, and zero is common value for a number of slots. Running the constraint inference for a single iteration encourages a 1-to-1 mapping from values to slots, but it does not prohibit it. This result also implies that a hard heuristic decoding constraint time would not be as effective."
],
[
"We randomly selected 15 development set instances which our best model predicts incorrectly. Of these, we find three were incorrectly labeled in the gold data as errors from the distance supervision hypothesis (i.e., “zero chance” being labeled for 0 survivors, when the number of survivors was not mentioned in the cluster), and should not be predicted. Six were clearly expressed and should be predictable, with highly correlated keywords present in the context window, but were assigned low scores by the model. We belief a richer representation which combines the generalization of CNNs with the discrete signal of n-gram features BIBREF18 may solve some of these issues.",
"Four of the remaining errors appear to be due to aggregation errors. Specifically, the occurrence of a particular punctuation mark with far greater than average frequency resulted in it being predicted for three slots. While these could be filtered out, a more general solution may be to build a representation based on the actual mention (“Ryanair”), in addition to its context. This may reduce the scores of these mentions to such an extent that they are removed from consideration.",
"Table TABREF54 shows the accuracy of the model on each slot type. The model is struggles with predicting the Injuries and Survivors slots. The nature of news media leads these slots to be discussed less frequently, with their mentions often embedded more deeply in the document, or expressed textually. For instance, it is common to express INLINEFORM0 =Survivors, INLINEFORM1 as “no survivors”, but it is impossible to predict a 0 value in this case, under the current problem definition."
],
[
"A pointer network uses a softmax to normalize a vector the size of the input, to create an output distribution over the dictionary of inputs BIBREF23 . This assumes that the input vector is the size of the dictionary, and that each occurrence is scored independently of others. If an element appears repeatedly throughout the input, each occurrence is in competition not only with other elements, but also with its duplicates.",
"Here the scoring and aggregation steps of our proposed architecture can together be viewed as a pointer network where there is a redundancy in the input which respects an underlying grouping. Here the softmax normalizes the scores over the input vector, and the aggregation step again yields an output distribution over the dictionary of the input."
],
[
"In this work we present a machine reading architecture designed to effectively read collections of documents in noisy, less controlled scenarios where information may be missing or inaccurate. Through attention-based mention scoring, cluster-wide aggregation of these scores, and global constraints to discourage unlikely solutions, we improve upon the state-of-the-art on this task by 14.9 F INLINEFORM0 .",
"In future work, the groundwork laid here may be applied to larger data sets, and may help motivate the development of such data. Larger noisy data sets would enable the differentiable constraints and weighted aggregation to be included during the optimization, and tuned with respect to data. In addition, we find the incorporation of graphical model inference into neural architectures to be a powerful new tool, and potentially an important step towards incorporating higher-level reasoning and prior knowledge into neural models of NLP."
]
],
"section_name": [
"Introduction",
"The Case for Aggregation",
"Model",
"Representations and Scoring",
"Aggregating Mention-level Scores",
"Global Constraints",
"Data",
"Experiments",
"Systems",
"Evaluation",
"Results",
"Effects of Global Constraints",
"Error Analysis",
"Connections to Pointer Networks",
"Conclusion and Future Work"
]
} | {
"answers": [
{
"annotation_id": [
"2fe3fa0649460b7dfcb1a7c701462b1d88467eb8",
"35302e8088644acf13b1c4ae60291f27d42a8869"
],
"answer": [
{
"evidence": [
"We evaluate configurations of our proposed architecture across three measures. The first is a modified version of standard precision, recall, and F INLINEFORM0 , as proposed by reschke2014. It deviates from the standard protocol by (1) awarding full recall for any slot when a single predicted value is contained in the gold slot, (2) only penalizing slots for which there are findable gold values in the text, and (3) limiting candidate values to the set of entities proposed by the Stanford NER system and included in the data set release. Eight of the fifteen slots are used in evaluation. Similarly, the second evaluation measure we present is standard precision, recall, and F INLINEFORM1 , specifically for null values.",
"We also evaluate the models using mean reciprocal rank (MRR). When calculating the F INLINEFORM0 -based evaluation measure we decode the model by taking the single highest-scoring value for each slot. However, this does not necessarily reflect the quality of the overall value ranking produced by the model. For this reason we include MRR, defined as: DISPLAYFORM0"
],
"extractive_spans": [
"modified version of standard precision, recall, and F INLINEFORM0 , as proposed by reschke2014",
"mean reciprocal rank (MRR)"
],
"free_form_answer": "",
"highlighted_evidence": [
"The first is a modified version of standard precision, recall, and F INLINEFORM0 , as proposed by reschke2014.",
"We also evaluate the models using mean reciprocal rank (MRR).",
"We also evaluate the models using mean reciprocal rank (MRR)."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We evaluate configurations of our proposed architecture across three measures. The first is a modified version of standard precision, recall, and F INLINEFORM0 , as proposed by reschke2014. It deviates from the standard protocol by (1) awarding full recall for any slot when a single predicted value is contained in the gold slot, (2) only penalizing slots for which there are findable gold values in the text, and (3) limiting candidate values to the set of entities proposed by the Stanford NER system and included in the data set release. Eight of the fifteen slots are used in evaluation. Similarly, the second evaluation measure we present is standard precision, recall, and F INLINEFORM1 , specifically for null values.",
"We also evaluate the models using mean reciprocal rank (MRR). When calculating the F INLINEFORM0 -based evaluation measure we decode the model by taking the single highest-scoring value for each slot. However, this does not necessarily reflect the quality of the overall value ranking produced by the model. For this reason we include MRR, defined as: DISPLAYFORM0"
],
"extractive_spans": [
"precision",
"recall",
"mean reciprocal rank",
"F INLINEFORM0"
],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluate configurations of our proposed architecture across three measures. The first is a modified version of standard precision, recall, and F INLINEFORM0 , as proposed by reschke2014. ",
"We also evaluate the models using mean reciprocal rank (MRR)."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"annotation_id": [
"2620f19aa4b495a801385f4246c13e29a9263fd3",
"4fcf1939440508017944114dad8dae5c39352f45"
],
"answer": [
{
"evidence": [
"We evaluate on four categories of architecture:",
"reschke2014 proposed several methods for event extraction in this scenario. We compare against three notable examples drawn from this work:",
"Reschke CRF: a conditional random field model.",
"Reschke Noisy-OR: a sequence tagger with a \"Noisy-OR\" form of aggregation that discourages the model from predicting the same value for multiple slots.",
"Reschke Best: a sequence tagger using a cost-sensitive classifier, optimized with SEARN BIBREF17 , a learning-to-search framework."
],
"extractive_spans": [
"Reschke CRF",
"Reschke Noisy-OR",
"Reschke Best"
],
"free_form_answer": "",
"highlighted_evidence": [
"We evaluate on four categories of architecture:\n\nreschke2014 proposed several methods for event extraction in this scenario. We compare against three notable examples drawn from this work:",
"Reschke CRF: a conditional random field model.\n\nReschke Noisy-OR: a sequence tagger with a \"Noisy-OR\" form of aggregation that discourages the model from predicting the same value for multiple slots.\n\nReschke Best: a sequence tagger using a cost-sensitive classifier, optimized with SEARN BIBREF17 , a learning-to-search framework."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"reschke2014 proposed several methods for event extraction in this scenario. We compare against three notable examples drawn from this work:",
"Reschke CRF: a conditional random field model.",
"Reschke Noisy-OR: a sequence tagger with a \"Noisy-OR\" form of aggregation that discourages the model from predicting the same value for multiple slots.",
"Reschke Best: a sequence tagger using a cost-sensitive classifier, optimized with SEARN BIBREF17 , a learning-to-search framework."
],
"extractive_spans": [
"Reschke CRF",
"Reschke Noisy-OR",
"Reschke Best"
],
"free_form_answer": "",
"highlighted_evidence": [
"We compare against three notable examples drawn from this work:",
"Reschke CRF: a conditional random field model.\n\nReschke Noisy-OR: a sequence tagger with a \"Noisy-OR\" form of aggregation that discourages the model from predicting the same value for multiple slots.\n\nReschke Best: a sequence tagger using a cost-sensitive classifier, optimized with SEARN BIBREF17 , a learning-to-search framework."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"645f4a101a60b8e7fa8d10a307ebeab866cc5975",
"6d605b6106918192c3a5e7954b8f63dbc558c850"
],
"answer": [
{
"evidence": [
"The Stanford Plane Crash Dataset BIBREF15 is a small data set consisting of 80 plane crash events, each paired with a set of related news articles. Of these events, 40 are reserved for training, and 40 for testing, with the average cluster containing more than 2,000 mentions. Gold labels for each cluster are derived from Wikipedia infoboxes and cover up to 15 slots, of which 8 are used in evaluation (Figure TABREF54 )."
],
"extractive_spans": [
"80 plane crash events"
],
"free_form_answer": "",
"highlighted_evidence": [
"The Stanford Plane Crash Dataset BIBREF15 is a small data set consisting of 80 plane crash events, each paired with a set of related news articles. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The Stanford Plane Crash Dataset BIBREF15 is a small data set consisting of 80 plane crash events, each paired with a set of related news articles. Of these events, 40 are reserved for training, and 40 for testing, with the average cluster containing more than 2,000 mentions. Gold labels for each cluster are derived from Wikipedia infoboxes and cover up to 15 slots, of which 8 are used in evaluation (Figure TABREF54 )."
],
"extractive_spans": [
"80 plane crash events, each paired with a set of related news articles"
],
"free_form_answer": "",
"highlighted_evidence": [
"The Stanford Plane Crash Dataset BIBREF15 is a small data set consisting of 80 plane crash events, each paired with a set of related news articles."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"95f0fea84225b9503ac0a53228a37752959c80bc",
"ade9bf50b934ca8f22add3955c97229e3aa61f1f"
],
"answer": [
{
"evidence": [
"The Stanford Plane Crash Dataset BIBREF15 is a small data set consisting of 80 plane crash events, each paired with a set of related news articles. Of these events, 40 are reserved for training, and 40 for testing, with the average cluster containing more than 2,000 mentions. Gold labels for each cluster are derived from Wikipedia infoboxes and cover up to 15 slots, of which 8 are used in evaluation (Figure TABREF54 )."
],
"extractive_spans": [],
"free_form_answer": "Event dataset with news articles",
"highlighted_evidence": [
"The Stanford Plane Crash Dataset BIBREF15 is a small data set consisting of 80 plane crash events, each paired with a set of related news articles. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The Stanford Plane Crash Dataset BIBREF15 is a small data set consisting of 80 plane crash events, each paired with a set of related news articles. Of these events, 40 are reserved for training, and 40 for testing, with the average cluster containing more than 2,000 mentions. Gold labels for each cluster are derived from Wikipedia infoboxes and cover up to 15 slots, of which 8 are used in evaluation (Figure TABREF54 )."
],
"extractive_spans": [
"Stanford Plane Crash Dataset BIBREF15"
],
"free_form_answer": "",
"highlighted_evidence": [
"The Stanford Plane Crash Dataset BIBREF15 is a small data set consisting of 80 plane crash events, each paired with a set of related news articles."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
],
"nlp_background": [
"",
"",
"",
""
],
"paper_read": [
"",
"",
"",
""
],
"question": [
"what metrics are used to evaluate the models?",
"what are the baselines?",
"what is the size of the dataset?",
"what dataset did they use?"
],
"question_id": [
"742d5e182b57bfa5f589fde645717ed0ac3f49c2",
"726c5c1b6951287f4bae22978f9a91d22d9bef61",
"dfdd309e56b71589b25412ba85b0a5d79a467ceb",
"7ae95716977d39d96e871e552c35ca0753115229"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"",
"",
"",
""
]
} | {
"caption": [
"Figure 1: An example news cluster. While we assume all documents mention the target flight, inaccurate information (d1), incorrect labels (d2), and mentions of non-topical events (d5) are frequent sources of noise the model must deal with. Red tokens indicate mentions of values, i.e. candidate answers.",
"Figure 2: Belief propagation constraint inference as an RNN. Red indicates a true value. At each step of LBP inference, the belief of each variable is updated with respect to the two Exactly-1 factors it is connected to, pushing it closer to discrete values. In convergence, (c.), values reflect the desired constraint.",
"Table 1: Event extraction results across different systems on the Stanford Plane Crash Dataset.",
"Table 2: Results using global constraints.",
"Figure 3: Improvement of BP constraint inference across training iterations",
"Table 3: Per-slot accuracies of our best model"
],
"file": [
"2-Figure1-1.png",
"6-Figure2-1.png",
"8-Table1-1.png",
"9-Table2-1.png",
"9-Figure3-1.png",
"10-Table3-1.png"
]
} | [
"what dataset did they use?"
] | [
[
"1610.09722-Data-0"
]
] | [
"Event dataset with news articles"
] | 407 |
1908.05925 | Incorporating Word and Subword Units in Unsupervised Machine Translation Using Language Model Rescoring | This paper describes CAiRE's submission to the unsupervised machine translation track of the WMT'19 news shared task from German to Czech. We leverage a phrase-based statistical machine translation (PBSMT) model and a pre-trained language model to combine word-level neural machine translation (NMT) and subword-level NMT models without using any parallel data. We propose to solve the morphological richness problem of languages by training byte-pair encoding (BPE) embeddings for German and Czech separately, and they are aligned using MUSE (Conneau et al., 2018). To ensure the fluency and consistency of translations, a rescoring mechanism is proposed that reuses the pre-trained language model to select the translation candidates generated through beam search. Moreover, a series of pre-processing and post-processing approaches are applied to improve the quality of final translations. | {
"paragraphs": [
[
"Machine translation (MT) has achieved huge advances in the past few years BIBREF1, BIBREF2, BIBREF3, BIBREF4. However, the need for a large amount of manual parallel data obstructs its performance under low-resource conditions. Building an effective model on low resource data or even in an unsupervised way is always an interesting and challenging research topic BIBREF5, BIBREF6, BIBREF7. Recently, unsupervised MT BIBREF8, BIBREF9, BIBREF0, BIBREF10, BIBREF11, which can immensely reduce the reliance on parallel corpora, has been gaining more and more interest.",
"Training cross-lingual word embeddings BIBREF0, BIBREF12 is always the first step of the unsupervised MT models which produce a word-level shared embedding space for both the source and target, but the lexical coverage can be an intractable problem. To tackle this issue, BIBREF13 provided a subword-level solution to overcome the out-of-vocabulary (OOV) problem.",
"In this work, the systems we implement for the German-Czech language pair are built based on the previously proposed unsupervised MT systems, with some adaptations made to accommodate the morphologically rich characteristics of German and Czech BIBREF14. Both word-level and subword-level neural machine translation (NMT) models are applied in this task and further tuned by pseudo-parallel data generated from a phrase-based statistical machine translation (PBSMT) model, which is trained following the steps proposed in BIBREF10 without using any parallel data. We propose to train BPE embeddings for German and Czech separately and align those trained embeddings into a shared space with MUSE BIBREF0 to reduce the combinatorial explosion of word forms for both languages. To ensure the fluency and consistency of translations, an additional Czech language model is trained to select the translation candidates generated through beam search by rescoring them. Besides the above, a series of post-processing steps are applied to improve the quality of final translations. Our contribution is two-fold:",
"We propose a method to combine word and subword (BPE) pre-trained input representations aligned using MUSE BIBREF0 as an NMT training initialization on a morphologically-rich language pair such as German and Czech.",
"We study the effectiveness of language model rescoring to choose the best sentences and unknown word replacement (UWR) procedure to reduce the drawback of OOV words.",
"This paper is organized as follows: in Section SECREF2, we describe our approach to the unsupervised translation from German to Czech. Section SECREF3 reports the training details and the results for each steps of our approach. More related work is provided in Section SECREF4. Finally, we conclude our work in Section SECREF5."
],
[
"In this section, we describe how we built our main unsupervised machine translation system, which is illustrated in Figure FIGREF4."
],
[
"We follow the unsupervised NMT in BIBREF10 by leveraging initialization, language modeling and back-translation. However, instead of using BPE, we use MUSE BIBREF0 to align word-level embeddings of German and Czech, which are trained by FastText BIBREF15 separately. We leverage the aligned word embeddings to initialize our unsupervised NMT model.",
"The language model is a denoising auto-encoder, which is trained by reconstructing original sentences from noisy sentences. The process of language modeling can be expressed as minimizing the following loss:",
"where $N$ is a noise model to drop and swap some words with a certain probability in the sentence $x$, $P_{s \\rightarrow s}$ and $P_{t \\rightarrow t}$ operate on the source and target sides separately, and $\\lambda $ acts as a weight to control the loss function of the language model. a Back-translation turns the unsupervised problem into a supervised learning task by leveraging the generated pseudo-parallel data. The process of back-translation can be expressed as minimizing the following loss:",
"where $v^*(x)$ denotes sentences in the target language translated from source language sentences $S$, $u^*(y)$ similarly denotes sentences in the source language translated from the target language sentences $T$ and $P_{t \\rightarrow s}$, and $P_{s \\rightarrow t}$ denote the translation direction from target to source and from source to target respectively."
],
[
"We note that both German and Czech BIBREF14 are morphologically rich languages, which leads to a very large vocabulary size for both languages, but especially for Czech (more than one million unique words for German, but three million unique words for Czech). To overcome OOV issues, we leverage subword information, which can lead to better performance.",
"We employ subword units BIBREF16 to tackle the morphological richness problem. There are two advantages of using the subword-level. First, we can alleviate the OOV issue by zeroing out the number of unknown words. Second, we can leverage the semantics of subword units from these languages. However, German and Czech are distant languages that originate from different roots, so they only share a small fraction of subword units. To tackle this problem, we train FastText word vectors BIBREF15 separately for German and Czech, and apply MUSE BIBREF0 to align these embeddings."
],
[
"PBSMT models can outperform neural models in low-resource conditions. A PBSMT model utilizes a pre-trained language model and a phrase table with phrase-to-phrase translations from the source language to target languages, which provide a good initialization. The phrase table stores the probabilities of the possible target phrase translations corresponding to the source phrases, which can be referred to as $P(s|t)$, with $s$ and $t$ representing the source and target phrases. The source and target phrases are mapped according to inferred cross-lingual word embeddings, which are trained with monolingual corpora and aligned into a shared space without any parallel data BIBREF12, BIBREF0.",
"We use a pre-trained n-gram language model to score the phrase translation candidates by providing the relative likelihood estimation $P(t)$, so that the translation of a source phrase is derived from: $arg max_{t} P(t|s)=arg max_{t} P(s|t)P(t)$.",
"Back-translation enables the PBSMT models to be trained in a supervised way by providing pseudo-parallel data from the translation in the reverse direction, which indicates that the PBSMT models need to be trained in dual directions so that the two models trained in the opposite directions can promote each other's performance.",
"In this task, we follow the method proposed by BIBREF10 to initialize the phrase table, train the KenLM language models BIBREF17 and train a PBSMT model, but we make two changes. First, we only initialize a uni-gram phrase table because of the large vocabulary size of German and Czech and the limitation of computational resources. Second, instead of training the model in the truecase mode, we maintain the same pre-processing step (see more details in §SECREF20) as the NMT models."
],
[
"We further fine-tune the NMT models mentioned above on the pseudo-parallel data generated by a PBSMT model. We choose the best PBSMT model and mix the pseudo-parallel data from the NMT models and the PBSMT model, which are used for back-translation. The intuition is that we can use the pseudo-parallel data produced by the PBSMT model as the supplementary translations in our NMT model, and these can potentially boost the robustness of the NMT model by increasing the variety of back-translation data."
],
[
"Around 10% of words found in our NMT training data are unknown words (<UNK>), which immensely limits the potential of the word-level NMT model. In this case, replacing unknown words with reasonable words can be a good remedy. Then, assuming the translations from the word-level NMT model and PBSMT model are roughly aligned in order, we can replace the unknown words in the NMT translations with the corresponding words in the PBSMT translations. Compared to the word-level NMT model, the PBSMT model ensures that every phrase will be translated without omitting any pieces from the sentences. We search for the word replacement by the following steps, which are also illustrated in Figure FIGREF13:"
],
[
"For every unknown word, we can get the context words with a context window size of two."
],
[
"Each context word is searched for in the corresponding PBSMT translation. From our observation, the meanings of the words in Czech are highly likely to be the same if only the last few characters are different. Therefore, we allow the last two characters to be different between the context words and the words they match."
],
[
"If several words in the PBSMT translation match a context word, the word that is closest to the position of the context word in the PBSMT translation will be selected and put into the candidate list to replace the corresponding <UNK> in the translation from the word-level NMT model."
],
[
"Step 2 and Step 3 are repeated until all the context words have been searched. After removing all the punctuation and the context words in the candidate list, the replacement word is the one that most frequently appears in the candidate list. If no candidate word is found, we just remove the <UNK> without adding a word."
],
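A minimal sketch of the four steps is given below. The offset bookkeeping in Steps 2 and 3 is our reading of the description and of Figure FIGREF13, so the function should be read as an illustration rather than the authors' exact implementation.

```python
import string
from collections import Counter

def fuzzy_match(a, b):
    # Czech-aware equality: words count as the same if only their last two
    # characters differ (a crude way to ignore inflectional endings).
    return a == b or (len(a) > 2 and len(b) > 2 and a[:-2] == b[:-2])

def replace_unknowns(nmt, smt, window=2, unk="<UNK>"):
    """Fill <UNK> tokens of the word-level NMT output using the (roughly
    order-aligned) PBSMT translation of the same source sentence."""
    out = list(nmt)
    for i, tok in enumerate(nmt):
        if tok != unk:
            continue
        # Step 1: context words within a window of two around the <UNK>.
        offsets = [d for d in range(-window, window + 1)
                   if d != 0 and 0 <= i + d < len(nmt) and nmt[i + d] != unk]
        candidates = []
        for d in offsets:
            c = nmt[i + d]
            # Step 2: look the context word up in the PBSMT translation.
            hits = [j for j, w in enumerate(smt) if fuzzy_match(w, c)]
            if not hits:
                continue
            # Step 3: keep the hit closest to the context word's position and
            # read off the PBSMT word sitting where the <UNK> should be.
            j = min(hits, key=lambda p: abs(p - (i + d)))
            k = j - d
            if 0 <= k < len(smt):
                candidates.append(smt[k])
        # Step 4: drop punctuation and the context words themselves, then take
        # the most frequent remaining candidate (or just drop the <UNK>).
        context = {nmt[i + d] for d in offsets}
        candidates = [c for c in candidates
                      if c not in context and c not in string.punctuation]
        out[i] = Counter(candidates).most_common(1)[0][0] if candidates else ""
    return [t for t in out if t != ""]
```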
[
"Instead of direct translation with NMT models, we generate several translation candidates using beam search with a beam size of five. We build the language model proposed by BIBREF18, BIBREF19 trained using a monolingual Czech dataset to rescore the generated translations. The scores are determined by the perplexity (PPL) of the generated sentences and the translation candidate with the lowest PPL will be selected as the final translation."
],
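Rescoring by perplexity reduces to picking the minimum once per-token log-probabilities are available from the Czech LM; in the sketch below, sentence_logprobs stands in for whatever scoring interface the trained language model exposes.

```python
import math

def perplexity(logprobs):
    # PPL = exp(-(1/N) * sum_i log p(w_i | w_<i))
    return math.exp(-sum(logprobs) / max(len(logprobs), 1))

def rescore(candidates, sentence_logprobs):
    """Return the beam-search candidate the language model finds most fluent.

    candidates:        list of translation strings (beam size five)
    sentence_logprobs: callable mapping a sentence to its per-token log-probs
    """
    return min(candidates, key=lambda c: perplexity(sentence_logprobs(c)))
```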
[
"Ensemble methods have been shown very effective in many natural language processing tasks BIBREF20, BIBREF21. We apply an ensemble method by taking the top five translations from word-level and subword-level NMT, and rescore all translations using our pre-trained Czech language model mentioned in §SECREF18. Then, we select the best translation with the lowest perplexity."
],
[
"We note that in the corpus, there are tokens representing quantity or date. Therefore, we delexicalize the tokens using two special tokens: (1) <NUMBER> to replace all the numbers that express a specific quantity, and (2) <DATE> to replace all the numbers that express a date. Then, we retrieve these numbers in the post-processing. There are two advantages of data pre-processing. First, replacing numbers with special tokens can reduce vocabulary size. Second, the special tokens are more easily processed by the model."
],
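A small regex-based sketch of the delexicalization is shown below; the paper does not spell out its exact rules for quantities and dates, so the patterns here are purely illustrative.

```python
import re

DATE_RE = re.compile(r"\b\d{1,2}\.\s?\d{1,2}\.(?:\s?\d{2,4})?")  # e.g. 12. 5. 2019
NUM_RE = re.compile(r"\b\d+(?:[.,]\d+)?\b")                      # e.g. 42 or 3,5

def delexicalize(sentence):
    """Swap dates and quantities for <DATE>/<NUMBER> and remember the
    originals in order so post-processing can put them back."""
    dates = DATE_RE.findall(sentence)
    sentence = DATE_RE.sub("<DATE>", sentence)
    numbers = NUM_RE.findall(sentence)
    sentence = NUM_RE.sub("<NUMBER>", sentence)
    return sentence, {"<DATE>": dates, "<NUMBER>": numbers}
```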
[
"In the pre-processing, we use the special tokens <NUMBER> and <DATE> to replace numbers that express a specific quantity and date respectively. Therefore, in the post-processing, we need to restore those numbers. We simply detect the pattern <NUMBER> and <DATE> in the original source sentences and then replace the special tokens in the translated sentences with the corresponding numbers detected in the source sentences. In order to make the replacement more accurate, we will detect more complicated patterns like <NUMBER> / <NUMBER> in the original source sentences. If the translated sentences also have the pattern, we replace this pattern <NUMBER> / <NUMBER> with the corresponding numbers in the original source sentences."
],
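The restoration step can then be sketched as the inverse of the delexicalization above, filling special tokens left to right with the values remembered for the source sentence; this left-to-right pairing (also covering patterns such as <NUMBER> / <NUMBER>) is an assumption of the sketch.

```python
import re

TOKEN_RE = re.compile(r"<NUMBER>|<DATE>")

def relexicalize(translation, memory):
    """Put the source sentence's numbers and dates back into the translation.

    memory: the dict returned by delexicalize() for the source sentence;
    tokens are filled left to right, assuming the model keeps them in order.
    """
    queues = {tok: list(vals) for tok, vals in memory.items()}

    def fill(match):
        tok = match.group(0)
        return queues[tok].pop(0) if queues[tok] else tok

    return TOKEN_RE.sub(fill, translation)
```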
[
"The quotes are fixed to keep them the same as the source sentences."
],
[
"For all the models mentioned above that work under a lower-case setting, a recaser implemented with Moses BIBREF22 is applied to convert the translations to the real cases."
],
[
"From our observation, the ensemble NMT model lacks the ability to translate name entities correctly. We find that words with capital characters are named entities, and those named entities in the source language may have the same form in the target language. Hence, we capture and copy these entities at the end of the translation if they does not exist in our translation."
],
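The patch-up rule can be sketched as below. It is deliberately crude (German capitalizes all nouns, so the capitalization test over-triggers), and the function is only our illustration of the described behavior.

```python
def patch_up(source_tokens, translation_tokens):
    """Append capitalized source words (treated as named entities) that the
    ensemble translation failed to carry over."""
    translated = set(translation_tokens)
    missing = [w for w in source_tokens
               if w[:1].isupper() and w not in translated]
    return translation_tokens + missing
```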
[
"The settings of the word-level NMT and subword-level NMT are the same, except the vocabulary size. We use a vocabulary size of 50k in the word-level NMT setting and 40k in the subword-level NMT setting for both German and Czech. In the encoder and decoder, we use a transformer BIBREF3 with four layers and a hidden size of 512. We share all encoder parameters and only share the first decoder layer across two languages to ensure that the latent representation of the source sentence is robust to the source language. We train auto-encoding and back-translation during each iteration. As the training goes on, the importance of language modeling become a less important compared to back-translation. Therefore the weight of auto-encoding ($\\lambda $ in equation (DISPLAY_FORM7)) is decreasing during training."
],
[
"The PBSMT is implemented with Moses using the same settings as those in BIBREF10. The PBSMT model is trained iteratively. Both monolingual datasets for the source and target languages consist of 12 million sentences, which are taken from the latest parts of the WMT monolingual dataset. At each iteration, two out of 12 million sentences are randomly selected from the the monolingual dataset."
],
[
"According to the findings in BIBREF23, the morphological richness of a language is closely related to the performance of the model, which indicates that the language models will be extremely hard to train for Czech, as it is one of the most complex languages. We train the QRNN model with 12 million sentences randomly sampled from the original WMT Czech monolingual dataset, which is also pre-processed in the way mentioned in §SECREF20. To maintain the quality of the language model, we enlarge the vocabulary size to three million by including all the words that appear more than 15 times. Finally, the PPL of the language model on the test set achieves 93.54."
],
[
"We use the recaser model provided in Moses and train the model with the two million latest sentences in the Czech monolingual dataset. After the training procedure, the recaser can restore words to the form in which the maximum probability occurs."
],
[
"The BLEU (cased) score of the initialized phrase table and models after training at different iterations are shown in Table TABREF33. From comparing the results, we observe that back-translation can improve the quality of the phrase table significantly, but after five iterations, the phrase table has hardly improved. The PBSMT model at the sixth iteration is selected as the final PBSMT model."
],
[
"The performances of our final model and other baseline models are illustrated in Table TABREF34. In the baseline unsupervised NMT models, subword-level NMT outperforms word-level NMT by around a 1.5 BLEU score. Although the unsupervised PBSMT model is worse than the subword-level NMT model, leveraging generated pseudo-parallel data from the PBSMT model to fine-tune the subword-level NMT model can still boost its performance. However, this pseudo-parallel data from the PBSMT model can not improve the word-level NMT model since the large percentage of OOV words limits its performance. After applying unknown words replacement to the word-level NMT model, the performance improves by a BLEU score of around 2. Using the Czech language model to re-score helps the model improve by around a 0.3 BLEU score each time. We also use this language model to create an ensemble of the best word-level and subword-level NMT model and achieve the best performance."
],
[
"Cross-lingual word embeddings can provide a good initialization for both the NMT and SMT models. In the unsupervised senario, BIBREF12 independently trained embeddings in different languages using monolingual corpora, and then learned a linear mapping to align them in a shared space based on a bilingual dictionary of a negligibly small size. BIBREF0 proposed a fully unsupervised learning method to build a bilingual dictionary without using any foregone word pairs, but by considering words from two languages that are near each other as pseudo word pairs. BIBREF24 showed that cross-lingual language model pre-training can learn a better cross-lingual embeddings to initialize an unsupervised machine translation model."
],
[
"In BIBREF8 and BIBREF25, the authors proposed the first unsupervised machine translation models which combines an auto-encoding language model and back-translation in the training procedure. BIBREF10 illustrated that initialization, language modeling, and back-translation are key for both unsupervised neural and statistical machine translation. BIBREF9 combined back-translation and MERT BIBREF26 to iteratively refine the SMT model. BIBREF11 proposed to discard back-translation. Instead, they extracted and edited the nearest sentences in the target language to construct pseudo-parallel data, which was used as a supervision signal."
],
[
"In this paper, we propose to combine word-level and subword-level input representation in unsupervised NMT training on a morphologically rich language pair, German-Czech, without using any parallel data. Our results show the effectiveness of using language model rescoring to choose more fluent translation candidates. A series of pre-processing and post-processing approaches improve the quality of final translations, particularly to replace unknown words with possible relevant target words."
],
[
"We would like to thank our colleagues Jamin Shin, Andrea Madotto, and Peng Xu for insightful discussions. This work has been partially funded by ITF/319/16FP and MRP/055/18 of the Innovation Technology Commission, the Hong Kong SAR Government."
]
],
"section_name": [
"Introduction",
"Methodology",
"Methodology ::: Unsupervised Machine Translation ::: Word-level Unsupervised NMT",
"Methodology ::: Unsupervised Machine Translation ::: Subword-level Unsupervised NMT",
"Methodology ::: Unsupervised Machine Translation ::: Unsupervised PBSMT",
"Methodology ::: Unsupervised Machine Translation ::: Fine-tuning NMT",
"Methodology ::: Unknown Word Replacement",
"Methodology ::: Unknown Word Replacement ::: Step 1",
"Methodology ::: Unknown Word Replacement ::: Step 2",
"Methodology ::: Unknown Word Replacement ::: Step 3",
"Methodology ::: Unknown Word Replacement ::: Step 4",
"Methodology ::: Language Model Rescoring",
"Methodology ::: Model Ensemble",
"Experiments ::: Data Pre-processing",
"Experiments ::: Data Post-processing ::: Special Token Replacement",
"Experiments ::: Data Post-processing ::: Quotes Fixing",
"Experiments ::: Data Post-processing ::: Recaser",
"Experiments ::: Data Post-processing ::: Patch-up",
"Experiments ::: Training ::: Unsupervised NMT",
"Experiments ::: Training ::: Unsupervised PBSMT",
"Experiments ::: Training ::: Language Model",
"Experiments ::: Training ::: Recaser",
"Experiments ::: PBSMT Model Selection",
"Experiments ::: Results",
"Related Work ::: Unsupervised Cross-lingual Embeddings",
"Related Work ::: Unsupervised Machine Translation",
"Conclusion",
"Acknowledgments"
]
} | {
"answers": [
{
"annotation_id": [
"268971b483fc22f0c6b7cafb607c320314323f37",
"65358bdbd80bcf5f3135dd2dc8846f7cc62ccb48"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2: Unsupervised translation results. We report the scores of several evaluation methods for every step of our approach. Except the result that is listed on the last line, all results are under the condition that the translations are post-processed without patch-up."
],
"extractive_spans": [],
"free_form_answer": "They report the scores of several evaluation methods for every step of their approach.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Unsupervised translation results. We report the scores of several evaluation methods for every step of our approach. Except the result that is listed on the last line, all results are under the condition that the translations are post-processed without patch-up."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The performances of our final model and other baseline models are illustrated in Table TABREF34. In the baseline unsupervised NMT models, subword-level NMT outperforms word-level NMT by around a 1.5 BLEU score. Although the unsupervised PBSMT model is worse than the subword-level NMT model, leveraging generated pseudo-parallel data from the PBSMT model to fine-tune the subword-level NMT model can still boost its performance. However, this pseudo-parallel data from the PBSMT model can not improve the word-level NMT model since the large percentage of OOV words limits its performance. After applying unknown words replacement to the word-level NMT model, the performance improves by a BLEU score of around 2. Using the Czech language model to re-score helps the model improve by around a 0.3 BLEU score each time. We also use this language model to create an ensemble of the best word-level and subword-level NMT model and achieve the best performance.",
"FLOAT SELECTED: Table 2: Unsupervised translation results. We report the scores of several evaluation methods for every step of our approach. Except the result that is listed on the last line, all results are under the condition that the translations are post-processed without patch-up."
],
"extractive_spans": [
"The performances of our final model and other baseline models are illustrated in Table TABREF34."
],
"free_form_answer": "",
"highlighted_evidence": [
"The performances of our final model and other baseline models are illustrated in Table TABREF34. In the baseline unsupervised NMT models, subword-level NMT outperforms word-level NMT by around a 1.5 BLEU score. ",
"FLOAT SELECTED: Table 2: Unsupervised translation results. We report the scores of several evaluation methods for every step of our approach. Except the result that is listed on the last line, all results are under the condition that the translations are post-processed without patch-up."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"5343d09c5631cc8afcd15681766188cb53b5bf48",
"5b1c04b30f07b971c337f881d14ffc40c650b25d"
],
"answer": [
{
"evidence": [
"The quotes are fixed to keep them the same as the source sentences.",
"For all the models mentioned above that work under a lower-case setting, a recaser implemented with Moses BIBREF22 is applied to convert the translations to the real cases.",
"From our observation, the ensemble NMT model lacks the ability to translate name entities correctly. We find that words with capital characters are named entities, and those named entities in the source language may have the same form in the target language. Hence, we capture and copy these entities at the end of the translation if they does not exist in our translation."
],
"extractive_spans": [
"Special Token Replacement",
"Quotes Fixing",
"Recaser",
" Patch-up"
],
"free_form_answer": "",
"highlighted_evidence": [
"In the pre-processing, we use the special tokens and to replace numbers that express a specific quantity and date respectively. Therefore, in the post-processing, we need to restore those numbers. We simply detect the pattern and in the original source sentences and then replace the special tokens in the translated sentences with the corresponding numbers detected in the source sentences.",
"The quotes are fixed to keep them the same as the source sentences.",
"For all the models mentioned above that work under a lower-case setting, a recaser implemented with Moses BIBREF22 is applied to convert the translations to the real cases.",
"From our observation, the ensemble NMT model lacks the ability to translate name entities correctly. We find that words with capital characters are named entities, and those named entities in the source language may have the same form in the target language. Hence, we capture and copy these entities at the end of the translation if they does not exist in our translation."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"The performances of our final model and other baseline models are illustrated in Table TABREF34. In the baseline unsupervised NMT models, subword-level NMT outperforms word-level NMT by around a 1.5 BLEU score. Although the unsupervised PBSMT model is worse than the subword-level NMT model, leveraging generated pseudo-parallel data from the PBSMT model to fine-tune the subword-level NMT model can still boost its performance. However, this pseudo-parallel data from the PBSMT model can not improve the word-level NMT model since the large percentage of OOV words limits its performance. After applying unknown words replacement to the word-level NMT model, the performance improves by a BLEU score of around 2. Using the Czech language model to re-score helps the model improve by around a 0.3 BLEU score each time. We also use this language model to create an ensemble of the best word-level and subword-level NMT model and achieve the best performance."
],
"extractive_spans": [
"unknown words replacement"
],
"free_form_answer": "",
"highlighted_evidence": [
"After applying unknown words replacement to the word-level NMT model, the performance improves by a BLEU score of around 2."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"54cf2978e8d13650c0a3c04cdd468c0224714dbe",
"ae24e291b4ba77fa31d66454d3dec620695fc26d"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"The performances of our final model and other baseline models are illustrated in Table TABREF34. In the baseline unsupervised NMT models, subword-level NMT outperforms word-level NMT by around a 1.5 BLEU score. Although the unsupervised PBSMT model is worse than the subword-level NMT model, leveraging generated pseudo-parallel data from the PBSMT model to fine-tune the subword-level NMT model can still boost its performance. However, this pseudo-parallel data from the PBSMT model can not improve the word-level NMT model since the large percentage of OOV words limits its performance. After applying unknown words replacement to the word-level NMT model, the performance improves by a BLEU score of around 2. Using the Czech language model to re-score helps the model improve by around a 0.3 BLEU score each time. We also use this language model to create an ensemble of the best word-level and subword-level NMT model and achieve the best performance."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The performances of our final model and other baseline models are illustrated in Table TABREF34. In the baseline unsupervised NMT models, subword-level NMT outperforms word-level NMT by around a 1.5 BLEU score. "
],
"unanswerable": false,
"yes_no": false
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"c44fc41aab1a66c195c6ff3edbceb64284e537a3",
"f1b682d6cc121ee8fdee589fcc41d696c9eb8b0c"
],
"answer": [
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"two",
"two",
"two",
"two"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"How is the quality of the translation evaluated?",
"What are the post-processing approaches applied to the output?",
"Is the MUSE alignment independently evaluated?",
"How does byte-pair encoding work?"
],
"question_id": [
"ff3e93b9b5f08775ebd1a7408d7f0ed2f6942dde",
"59a3d4cdd1c3797962bf8d72c226c847e06e1d44",
"49474a3047fa3f35e1bcd63991e6f15e012ac10b",
"63279ecb2ba4e51c1225e63b81cb021abc10d0d1"
],
"question_writer": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
],
"search_query": [
"German",
"German",
"German",
"German"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Figure 1: The illustration of our system. The translation procedure can be divided into five steps: (a) preprocessing, (b) translation generation (§2.1) from word-level NMT, subword-level NMT, and PBSMT. In the training, we fine-tune word-level and subword-level NMT models with pseudo-parallel data from NMT models and the best PBSMT model. Moreover, an unknown word replacement mechanism (§2.2) is applied to the translations generated from the word-level NMT model, (c) translation candidate rescoring, (d) construction of an ensemble of the translations from NMT models, and (e) post-processing.",
"Figure 2: The illustration of the unknown word replacement (UWR) procedure for word-level NMT. The words of the PBSMT model translation in the pink boxes match the context words of the unknown word <UNK> in the word-level NMT model translation in the blue boxes. Finally, we choose a possible target word, in the yellow box, from the PBSMT model translation to replace the unknown word in the green box.",
"Table 1: Results of PBSMT at different iterations",
"Table 2: Unsupervised translation results. We report the scores of several evaluation methods for every step of our approach. Except the result that is listed on the last line, all results are under the condition that the translations are post-processed without patch-up."
],
"file": [
"3-Figure1-1.png",
"4-Figure2-1.png",
"5-Table1-1.png",
"6-Table2-1.png"
]
} | [
"How is the quality of the translation evaluated?"
] | [
[
"1908.05925-6-Table2-1.png",
"1908.05925-Experiments ::: Results-0"
]
] | [
"They report the scores of several evaluation methods for every step of their approach."
] | 408 |
1709.06136 | Iterative Policy Learning in End-to-End Trainable Task-Oriented Neural Dialog Models | In this paper, we present a deep reinforcement learning (RL) framework for iterative dialog policy optimization in end-to-end task-oriented dialog systems. Popular approaches in learning dialog policy with RL include letting a dialog agent to learn against a user simulator. Building a reliable user simulator, however, is not trivial, often as difficult as building a good dialog agent. We address this challenge by jointly optimizing the dialog agent and the user simulator with deep RL by simulating dialogs between the two agents. We first bootstrap a basic dialog agent and a basic user simulator by learning directly from dialog corpora with supervised training. We then improve them further by letting the two agents to conduct task-oriented dialogs and iteratively optimizing their policies with deep RL. Both the dialog agent and the user simulator are designed with neural network models that can be trained end-to-end. Our experiment results show that the proposed method leads to promising improvements on task success rate and total task reward comparing to supervised training and single-agent RL training baseline models. | {
"paragraphs": [
[
"Task-oriented dialog system is playing an increasingly important role in enabling human-computer interactions via natural spoken language. Different from chatbot type of conversational agents BIBREF0 , BIBREF1 , BIBREF2 , task-oriented dialog systems assist users to complete everyday tasks, which usually involves aggregating information from external resources and planning over multiple dialog turns. Conventional task-oriented dialog systems are designed with complex pipelines BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , and there are usually separately developed modules for natural language understanding (NLU) BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , dialog state tracking (DST) BIBREF11 , BIBREF12 , and dialog management (DM) BIBREF13 , BIBREF14 , BIBREF15 . Such pipeline approach inherently make it hard to scale a dialog system to new domains, as each of these modules has to be redesigned separately with domain expertise. Moreover, credit assignment in such pipeline systems can be challenging, as errors made in upper stream modules may propagate and be amplified in downstream components.",
"Recent efforts have been made in designing end-to-end frameworks for task-oriented dialogs. Wen et al. BIBREF16 and Liu et al. BIBREF17 proposed supervised learning (SL) based end-to-end trainable neural network models. Zhao and Eskenazi BIBREF18 and Li et al. BIBREF19 introduced end-to-end trainable systems using deep reinforcement learning (RL) for dialog policy optimization. Comparing to SL based models, systems trained with RL by exploring the space of possible strategies showed improved model robustness against diverse dialog situations.",
"In RL based dialog policy learning, ideally the agent should be deployed to interact with users and get rewards from real user feedback. However, many dialog samples may be required for RL policy shaping, making it impractical to learn from real users directly from the beginning. Therefore, a user simulator BIBREF20 , BIBREF21 , BIBREF22 is usually used to train the dialog agent to a sufficient starting level before deploying it to the real environment. Quality of such user simulator has a direct impact on the effectiveness of dialog policy learning. Designing a reliable user simulator, however, is not trivial, often as difficult as building a good dialog agent. User simulators used in most of the recent RL based dialog models BIBREF18 , BIBREF22 , BIBREF19 are designed with expert knowledge and complex rules.",
"To address the challenge of lacking a reliable user simulator for dialog agent policy learning, we propose a method in jointly optimizing the dialog agent policy and the user simulator policy with deep RL. We first bootstrap a basic neural dialog agent and a basic neural user simulator by learning directly from dialog corpora with supervised training. We then improve them further by simulating task-oriented dialogs between the two agents and iteratively optimizing their dialog policies with deep RL. The intuition is that we model task-oriented dialog as a goal fulfilling task, in which we let the dialog agent and the user simulator to positively collaborate to achieve the goal. The user simulator is given a goal to complete, and it is expected to demonstrate coherent but diverse user behavior. The dialog agent attempts to estimate the user's goal and fulfill his request by conducting meaningful conversations. Both the two agents aim to learn to collaborate with each other to complete the task but without exploiting the game.",
"Our contribution in this work is two-fold. Firstly, we propose an iterative dialog policy learning method that jointly optimizes the dialog agent and the user simulator in end-to-end trainable neural dialog systems. Secondly, we design a novel neural network based user simulator for task-oriented dialogs that can be trained in a data-driven manner without requiring the design of complex rules.",
"The remainder of the paper is organized as follows. In section 2, we discuss related work on end-to-end trainable task-oriented dialog systems and RL policy learning methods. In section 3, we describe the proposed framework and learning methods in detail. In Section 4, we discuss the experiment setup and analyze the results. Section 5 gives the conclusions."
],
[
"Popular approaches for developing task-oriented dialog systems include treating the problem as a partially observable Markov Decision Process (POMDP) BIBREF6 . RL methods BIBREF23 can be applied to optimize the dialog policy online with the feedback collected via interacting with users. In order to make the RL policy learning tractable, dialog state and system actions have to be carefully designed.",
"Recently, people have proposed neural network based methods for task-oriented dialogs, motivated by their superior performance in modeling chit-chat type of conversations BIBREF24 , BIBREF1 , BIBREF2 , BIBREF25 . Bordes and Weston BIBREF26 proposed modeling task-oriented dialogs with a reasoning approach using end-to-end memory networks. Their model skips the belief tracking stage and selects the final system response directly from a list of response candidates. Comparing to this approach, our model explicitly tracks dialog belief state over the sequence of turns, as robust dialog state tracking has been shown BIBREF27 to boost the success rate in task completion. Wen et al. BIBREF16 proposed an end-to-end trainable neural network model with modularity connected system components. This system is trained in supervised manner, and thus may not be robust enough to handle diverse dialog situations due to the limited varieties in dialog corpus. Our system is trained by a combination of SL and deep RL methods, as it is shown that RL training may effectively improved the system robustness and dialog success rate BIBREF28 , BIBREF19 , BIBREF29 . Moreover, other than having separated dialog components as in BIBREF16 , we use a unified network for belief tracking, knowledge base (KB) operation, and dialog management, to fully explore the knowledge that can be shared among different tasks.",
"In many of the recent work on using RL for dialog policy learning BIBREF18 , BIBREF30 , BIBREF19 , hand-designed user simulators are used to interact with the dialog agent. Designing a good performing user simulator is not easy. A too basic user simulator as in BIBREF18 may only be able to produce short and simple utterances with limited variety, making the final system lack of robustness against noise in real world user inputs. Advanced user simulators BIBREF31 , BIBREF22 may demonstrate coherent user behavior, but they typically require designing complex rules with domain expertise. We address this challenge using a hybrid learning method, where we firstly bootstrapping a basic functioning user simulator with SL on human annotated corpora, and continuously improving it together with the dialog agent during dialog simulations with deep RL.",
"Jointly optimizing policies for dialog agent and user simulator with RL has also been studied in literature. Chandramohan et al. BIBREF32 proposed a co-adaptation framework for dialog systems by jointly optimizing the policies for multiple agents. Georgila et al. BIBREF33 discussed applying multi-agent RL for policy learning in a resource allocation negotiation scenario. Barlier et al. BIBREF34 modeled non-cooperative task dialog as a stochastic game and learned jointly the strategies of both agents. Comparing to these previous work, our proposed framework focuses on task-oriented dialogs where the user and the agent positively collaborate to achieve the user's goal. More importantly, we work towards building end-to-end models for task-oriented dialogs that can handle noises and ambiguities in natural language understanding and belief tracking, which is not taken into account in previous work."
],
[
"In this section, we first provide a high level description of our proposed framework. We then discuss each module component and the training methods in detail.",
"In the supervised pre-training stage, we train the dialog agent and the user simulator separately using task-oriented dialog corpora. In the RL training stage, we simulate dialogs between the two agents. The user simulator starts the conversation based on a sampled user goal. The dialog agent attempts to estimate the user's goal and complete the task with the user simulator by conducting multi-turn conversation. At the end of each simulated dialog, a reward is generated based on the level of task completion. This reward is used to further optimize the dialog policies of the two agents with RL."
],
[
"Figure 1 illustrates the design of the dialog agent. The dialog agent is capable of tracking dialog state, issuing API calls to knowledge bases (KB), and producing corresponding system actions and responses by incorporating the query results, which are key skill sets BIBREF26 in conducting task-oriented dialogs. State of the dialog agent is maintained in the LSTM BIBREF35 state and being updated after the processing of each turn. At the INLINEFORM0 th turn of a dialog, the dialog agent takes in (1) the previous agent output encoding INLINEFORM1 , (2) the user input encoding INLINEFORM2 , (3) the retrieved KB result encoding INLINEFORM3 , and updates its internal state conditioning on the previous agent state INLINEFORM4 . With the updated agent state INLINEFORM5 , the dialog agent emits (1) a system action INLINEFORM6 , (2) an estimation of the belief state, and (3) a pointer to an entity in the retrieved query results. These outputs are then passed to an NLG module to generate the final agent response.",
"Utterance Encoding For natural language format inputs at turn INLINEFORM0 , we use bi-directional LSTM to encode the utterance to a continuous vector INLINEFORM1 . With INLINEFORM2 representing the utterance at the INLINEFORM3 th turn with INLINEFORM4 words, the utterance vector INLINEFORM5 is produced by concatenating the last forward and backward LSTM states: INLINEFORM6 .",
"Action Modeling We use dialog acts as system actions, which can be seen as shallow representations of the utterance semantics. We treat system action modeling as a multi-class classification problem, where the agent select an appropriate action from a predefined list of system actions based on current dialog state INLINEFORM0 : DISPLAYFORM0 ",
" where INLINEFORM0 in the agent's network is a multilayer perceptron (MLP) with a single hidden layer and a INLINEFORM1 activation function over all possible system actions.",
"Belief Tracking Belief tracker, or dialog state tracker BIBREF36 , BIBREF37 , continuously tracks the user's goal by accumulating evidence in the conversation. We represent the user's goal using a list of slot values. The belief tracker maintains and updates a probability distribution INLINEFORM0 over candidate values for each slot type INLINEFORM1 at each turn INLINEFORM2 : DISPLAYFORM0 ",
" where INLINEFORM0 is an MLP with a single hidden layer and a INLINEFORM1 activation function for slot type INLINEFORM2 .",
"KB Operation The proposed dialog agent is able to access external information by interfacing with a KB or a database by issuing API calls. Making API call is one of the dialog acts that can be emitted by the agent, conditioning on the state of the conversation. An API call command template is firstly generated with slot type tokens. The final API call command is produced by replacing slot type tokens with the corresponding slot values from the belief tracker outputs.",
"At the INLINEFORM0 th turn of a dialog, the KB input encoding INLINEFORM1 is a binary value informing the availability of the entities that match the KB query. Corresponding output is the probability distribution of the entity pointer. We treat the KB results as a list of structured entities and let the model to maintain an entity pointer. The agent learns to adjust the entity pointer when user requests for alternative options.",
"Response Generation We use a template-based NLG module to convert the agent outputs (system action, slot values, and KB entity values) to natural language format."
],
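The action, belief-tracking, and entity-pointer heads described above can be sketched with PyTorch-style modules as follows; the hidden activation, layer sizes, and slot inventory are illustrative choices, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class AgentOutputHeads(nn.Module):
    """Output heads on top of the dialog-level LSTM state s_t."""

    def __init__(self, state_dim, n_actions, slot_value_sizes, n_kb_entities):
        super().__init__()
        # Single-hidden-layer MLP + softmax over system actions.
        self.action = nn.Sequential(nn.Linear(state_dim, state_dim), nn.Tanh(),
                                    nn.Linear(state_dim, n_actions))
        # One belief head per slot type, each a softmax over candidate values.
        self.belief = nn.ModuleDict({
            slot: nn.Sequential(nn.Linear(state_dim, state_dim), nn.Tanh(),
                                nn.Linear(state_dim, n_vals))
            for slot, n_vals in slot_value_sizes.items()})
        # Pointer over the retrieved KB entities.
        self.entity_ptr = nn.Linear(state_dim, n_kb_entities)

    def forward(self, s_t):
        action_probs = torch.softmax(self.action(s_t), dim=-1)
        beliefs = {slot: torch.softmax(head(s_t), dim=-1)
                   for slot, head in self.belief.items()}
        pointer = torch.softmax(self.entity_ptr(s_t), dim=-1)
        return action_probs, beliefs, pointer
```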
[
"Figure 2 shows the design of the user simulator. User simulator is given a randomly sampled goal at the beginning of the conversation. Similar to the design of the dialog agent, state of the user simulator is maintained in the state of an LSTM. At the INLINEFORM0 th turn of a dialog, the user simulator takes in (1) the goal encoding INLINEFORM1 , (2) the previous user output encoding INLINEFORM2 , (3) the current turn agent input encoding INLINEFORM3 , and updates its internal state conditioning on the previous user state INLINEFORM4 . On the output side, the user simulator firstly emits a user action INLINEFORM5 based on the updated state INLINEFORM6 . Conditioning on this emitted user action and the user dialog state INLINEFORM7 , a set of slot values are emitted. The user action and slot values are then passed to an NLG module to generate the final user utterance.",
"User Goal We define a user's goal INLINEFORM0 using a list of informable and requestable slots BIBREF38 . Informable slots are the slots that users can provide a value for to describe their goal (e.g. slots for food type, area, etc.). Requestable slots are the slots that users want to request the value for, such as requesting the address of a restaurant. We treat informable slots as discrete type of inputs that can take multiple values, and treat requestable slots as inputs that take binary values (i.e. a slot is either requested or not). In this work, once the a goal is sampled at the beginning of the conversation, we fix the user goal and do not change it during the conversation.",
"Action Selection Similar to the action modeling in dialog agent, we treat user action modeling as a multi-class classification problem conditioning on the dialog context encoded in the dialog-level LSTM state INLINEFORM0 on the user simulator side: DISPLAYFORM0 ",
" Once user action is generated at turn INLINEFORM0 , it is used together with the current user dialog state INLINEFORM1 to generate value for each informable slot: DISPLAYFORM0 ",
" Similar to the design of the dialog agent, INLINEFORM0 and INLINEFORM1 are MLPs with a single hidden layer and use INLINEFORM2 activation over their corresponding outputs.",
"Utterance Generation We use a template-based NLG module to convert the user simulator's outputs (action and slot values) to the natural language surface form."
],
[
"RL policy optimization is performed on top of the supervised pre-trained networks. The system architecture is shown in Figure FIGREF10 . We defines the state, action, and reward in our RL training setting and present the training details.",
"State For RL policy learning, states of the dialog agent and the user simulator at the INLINEFORM0 th turn are the dialog-level LSTM state INLINEFORM1 and INLINEFORM2 respectively. Both INLINEFORM3 and INLINEFORM4 captures the dialog history up till current turn. The user state INLINEFORM5 also encodes the user's goal.",
"Action Actions of the dialog agent and user simulator are the system action outputs INLINEFORM0 and INLINEFORM1 . An action is sampled by the agent based on a stochastic representation of the policy, which produces a probability distribution over actions given a dialog state. Action space is finite and discrete for both the dialog agent and the user simulator.",
"Reward Reward is calculated based on the level of task completion. A turn level reward INLINEFORM0 is applied based on the progress that the agent and user made in completing the predefined task over the past turn. At the end of each turn, a score INLINEFORM1 is calculated indicating to what extend the agent has fulfilled the user's request so far. The turn level reward INLINEFORM2 is then calculated by the difference of the scores received in two consecutive turns: DISPLAYFORM0 ",
" where INLINEFORM0 is the scoring function. INLINEFORM1 and INLINEFORM2 are the true user's goal and the agent's estimation of the user's goal, both are represented by slot-value pairs. Alternatively, the turn level reward INLINEFORM3 can be obtained by using the discounted reward received at the end of the dialog (positive reward for task success, and negative reward for task failure).",
"Policy Gradient RL For policy optimization with RL, policy gradient method is preferred over Q-learning in our system as the policy network parameters can be initialized with the INLINEFORM0 parameters learnied during supervised pre-training stage. With REINFORCE BIBREF39 , the objective function can be written as INLINEFORM1 , with INLINEFORM2 being the discount factor. We optimize parameter sets INLINEFORM3 and INLINEFORM4 for the dialog agent and the user simulator to maximize INLINEFORM5 . For the dialog agent, with likelihood ratio gradient estimator, gradient of INLINEFORM6 can be derived as: DISPLAYFORM0 ",
" This last expression above gives us an unbiased gradient estimator. We sample agent action and user action at each dialog turn and compute the policy gradient. Similarly, gradient on the user simulator side can be derived as: DISPLAYFORM0 ",
" A potential drawback of using REINFORCE is that the policy gradient might have high variance, since the agent may take many steps over the course of a dialog episode. We also explore using Advantage Actor-Critic (A2C) in our study, in which we approximate a state-value function using a feed forward neural network.",
"During model training, we use softmax policy for both the dialog agent and the user simulator to encourage exploration. Softmax policy samples action from the action probability distribution calculated by the INLINEFORM0 in the system action output. During evaluation, we apply greedy policy to the dialog agent, and still apply softmax policy to the user simulator. This is to increase randomness and diversity in the user simulator behavior, which is closer to the realistic dialog system evaluation settings with human users. This also prevents the two agents from fully cooperating with each other and exploiting the game."
],
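A minimal sketch of the resulting per-dialog update is given below for either agent. It uses the return-to-go form of the REINFORCE estimator rather than the exact expression above, and all names are illustrative; the A2C variant would additionally subtract a learned state-value baseline from the returns.

```python
import torch

def reinforce_loss(log_probs, rewards, gamma=0.99):
    """Policy-gradient loss for one simulated dialog.

    log_probs: list of scalar tensors log pi(a_k | s_k) for the sampled actions
    rewards:   list of turn-level rewards (score differences between turns)
    """
    returns, g = [], 0.0
    for r in reversed(rewards):          # discounted return-to-go
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    # Minimizing this loss ascends the policy-gradient objective.
    return -(torch.stack(log_probs) * returns).sum()
```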
[
"We prepare the data in our study based the corpus from the second Dialog State Tracking Challenge (DSTC2) BIBREF40 . We converte this corpus to our required format by adding API call commands and the corresponding KB query results. The dialogs simulation is based on real KB search results, which makes the dialog agent evaluation closer to real cases. Different from DSTC2, agent and user actions in our system are generated by concatenating the act and slot names in the original dialog act output (e.g. “ INLINEFORM0 ” maps to “ INLINEFORM1 ”). Slot values are captured in the belief tracking outputs. Table 1 shows the statistics of the dataset used in our experiments."
],
[
"In supervised pre-training, the dialog agent and the user simulator are trained separately against dialog corpus. We use the same set of neural network model configurations for both agents. Hidden layer sizes of the dialog-level LSTM for dialog modeling and utterance-level LSTM for utterance encoding are both set as 150. We perform mini-batch training using Adam optimization method BIBREF41 . Initial learning rate is set as 1e-3. Dropout BIBREF42 ( INLINEFORM0 ) is applied during model training to prevent to model from over-fitting.",
"In deep RL training stage, the policy network parameters are initialized with INLINEFORM0 parameters from the SL training. State-value function network parameters in A2C are initialized randomly. To ameliorate the non-stationarity problem when jointly training the two agents, we update the two agents iteratively during RL training. We take 100 episodes as a RL training cycle, in which we fix one agent and only update the other, and switch the training agent in the next cycle until convergence. In dialog simulation, we end the dialog if the dialog length exceeds the maximum turn size (20 in our experiment) or the user simulator emits the end of dialog action."
],
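The alternating update schedule just described can be summarized in a short sketch; run_episode and update_policy are placeholders for the dialog rollout and the policy-gradient step from the previous section, and the number of cycles is illustrative.

```python
def joint_rl_training(agent, user, run_episode, update_policy,
                      n_cycles=50, cycle_episodes=100):
    """Iterative policy optimization: within each 100-episode cycle only one
    of the two agents is updated, then the roles swap, which eases the
    non-stationarity of training both policies at once.

    run_episode(agent, user)   -> (agent_trajectory, user_trajectory)
    update_policy(model, traj) -> applies one REINFORCE/A2C step to model
    """
    for cycle in range(n_cycles):
        learner = agent if cycle % 2 == 0 else user   # the other agent is fixed
        for _ in range(cycle_episodes):
            agent_traj, user_traj = run_episode(agent, user)
            update_policy(learner, agent_traj if learner is agent else user_traj)
```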
[
"We evaluate the system on task success rate, average task reward, and average dialog length on simulated dialogs. A dialog is considered successful if the agent's belief tracking outputs match the informable user goal slot values completely, and all user requested slots are fulfilled. Note that the results on task success rate in this work should not be directly compared to the numbers in BIBREF16 , BIBREF43 , as both the dialog agent and the user simulator in our study are end-to-end models that take noisy natural language utterance as input and directly generate the final dialog act output. Moreover, instead of using greedy policy on user simulator, we sample user actions based on the action probability distribution from the user policy network to encourage diversity and variety in user behavior.",
"Table 2 shows the evaluation results. The baseline model uses the SL trained agents. REINFORCE-agent and A2C-agent apply RL training on the dialog agent only, without updating the user simulator. REINFORCE-joint and A2C-joint apply RL on both the dialog agent and user simulator over the SL pre-trained models. Figure 4, 5, and 6 show the learning curves of these five models during RL training on dialog success rate, average reward, and average success dialog length.",
"Success Rate As shown in Table 2, the SL model achieves the lowest task success rate. Model trained with SL on dialog corpus has limited capabilities in capturing the change in state, and thus may not be able to generalize well to unseen dialog situations during simulation. RL efficiently improves the dialog task success rate, as it enables the dialog agent to explore strategies that are not in the training corpus. The agent-update-only models using REINFORCE and A2C achieve similar results, outperforming the baseline model by 14.9% and 15.3% respectively. The jointly optimized models improved the performance further over the agent-update-only models. Model using A2C for joint policy optimization achieves the best task success rate.",
"Average Reward RL curves on average dialog reward show similar trends as above. One difference is that the joint training model using REINFORCE achieves the highest average reward, outperforming that using A2C by a small margin. This is likely due to the better performance of our REINFORCE models in earning reward in the failed dialogs. We find that our user simulator trained with A2C tends to have sharper action distribution from the softmax policy, making it easier to get stuck when it falls into an unfavorable state. We are interested in exploring fine-grained control strategies in joint RL policy optimization framework in our future work.",
"Average Success Turn Size The average turn size of success dialogs tends to decrease along the episode of RL policy learning. This is in line with our expectation as both the dialog agent and the user simulator improve their policies for more efficient and coherent strategies with the RL training."
],
[
"In this work, we propose a reinforcement learning framework for dialog policy optimization in end-to-end task-oriented dialog systems. The proposed method addresses the challenge of lacking a reliable user simulator for policy learning in task-oriented dialog systems. We present an iterative policy learning method that jointly optimizes the dialog agent and the user simulator with deep RL by simulating dialogs between the two agents. Both the dialog agent and the user simulator are designed with neural network models that can be trained end-to-end. Experiment results show that our proposed method leads to promising improvements on task success rate and task reward over the supervised training and single-agent RL training baseline models."
]
],
"section_name": [
"Introduction",
"Related Work",
"Proposed Framework",
"Dialog Agent",
"User Simulator",
"Deep RL Policy Optimization",
"Dataset",
"Training Procedure",
"Results and Analysis",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"2d409b8b808ba940c1f31a01c6d38f8376c3faab",
"f9e43046a247c474c55b566ea6319bfcca32762a"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 2. Evaluation results on the converted DSTC2 dataset."
],
"extractive_spans": [],
"free_form_answer": "A2C and REINFORCE-joint for joint policy optimization achieve the improvement over SL baseline of 29.4% and 25.7 susses rate, 1.21 AND 1.28 AvgRevard and 0.25 and -1.34 AvgSucccess Turn Size, respectively.",
"highlighted_evidence": [
"FLOAT SELECTED: Table 2. Evaluation results on the converted DSTC2 dataset."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Success Rate As shown in Table 2, the SL model achieves the lowest task success rate. Model trained with SL on dialog corpus has limited capabilities in capturing the change in state, and thus may not be able to generalize well to unseen dialog situations during simulation. RL efficiently improves the dialog task success rate, as it enables the dialog agent to explore strategies that are not in the training corpus. The agent-update-only models using REINFORCE and A2C achieve similar results, outperforming the baseline model by 14.9% and 15.3% respectively. The jointly optimized models improved the performance further over the agent-update-only models. Model using A2C for joint policy optimization achieves the best task success rate."
],
"extractive_spans": [
"agent-update-only models using REINFORCE and A2C achieve similar results, outperforming the baseline model by 14.9% and 15.3% respectively",
" jointly optimized models improved the performance further"
],
"free_form_answer": "",
"highlighted_evidence": [
" The agent-update-only models using REINFORCE and A2C achieve similar results, outperforming the baseline model by 14.9% and 15.3% respectively. The jointly optimized models improved the performance further over the agent-update-only models. Model using A2C for joint policy optimization achieves the best task success rate."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"95c2ed0cfeca03f944e91ac6a031a5c7315a8f80",
"f992ef9e3cb436927fe70c98b66aa5f8bc7d067e"
],
"answer": [
{
"evidence": [
"To address the challenge of lacking a reliable user simulator for dialog agent policy learning, we propose a method in jointly optimizing the dialog agent policy and the user simulator policy with deep RL. We first bootstrap a basic neural dialog agent and a basic neural user simulator by learning directly from dialog corpora with supervised training. We then improve them further by simulating task-oriented dialogs between the two agents and iteratively optimizing their dialog policies with deep RL. The intuition is that we model task-oriented dialog as a goal fulfilling task, in which we let the dialog agent and the user simulator to positively collaborate to achieve the goal. The user simulator is given a goal to complete, and it is expected to demonstrate coherent but diverse user behavior. The dialog agent attempts to estimate the user's goal and fulfill his request by conducting meaningful conversations. Both the two agents aim to learn to collaborate with each other to complete the task but without exploiting the game."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"To address the challenge of lacking a reliable user simulator for dialog agent policy learning, we propose a method in jointly optimizing the dialog agent policy and the user simulator policy with deep RL."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"To address the challenge of lacking a reliable user simulator for dialog agent policy learning, we propose a method in jointly optimizing the dialog agent policy and the user simulator policy with deep RL. We first bootstrap a basic neural dialog agent and a basic neural user simulator by learning directly from dialog corpora with supervised training. We then improve them further by simulating task-oriented dialogs between the two agents and iteratively optimizing their dialog policies with deep RL. The intuition is that we model task-oriented dialog as a goal fulfilling task, in which we let the dialog agent and the user simulator to positively collaborate to achieve the goal. The user simulator is given a goal to complete, and it is expected to demonstrate coherent but diverse user behavior. The dialog agent attempts to estimate the user's goal and fulfill his request by conducting meaningful conversations. Both the two agents aim to learn to collaborate with each other to complete the task but without exploiting the game."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"To address the challenge of lacking a reliable user simulator for dialog agent policy learning, we propose a method in jointly optimizing the dialog agent policy and the user simulator policy with deep RL. "
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"annotation_id": [
"4373eafba8a67c91b572ddd1fd914c4dd01e80d5",
"e59e09860afa6000eda81d5b71c940944440981e"
],
"answer": [
{
"evidence": [
"Figure 1 illustrates the design of the dialog agent. The dialog agent is capable of tracking dialog state, issuing API calls to knowledge bases (KB), and producing corresponding system actions and responses by incorporating the query results, which are key skill sets BIBREF26 in conducting task-oriented dialogs. State of the dialog agent is maintained in the LSTM BIBREF35 state and being updated after the processing of each turn. At the INLINEFORM0 th turn of a dialog, the dialog agent takes in (1) the previous agent output encoding INLINEFORM1 , (2) the user input encoding INLINEFORM2 , (3) the retrieved KB result encoding INLINEFORM3 , and updates its internal state conditioning on the previous agent state INLINEFORM4 . With the updated agent state INLINEFORM5 , the dialog agent emits (1) a system action INLINEFORM6 , (2) an estimation of the belief state, and (3) a pointer to an entity in the retrieved query results. These outputs are then passed to an NLG module to generate the final agent response.",
"Figure 2 shows the design of the user simulator. User simulator is given a randomly sampled goal at the beginning of the conversation. Similar to the design of the dialog agent, state of the user simulator is maintained in the state of an LSTM. At the INLINEFORM0 th turn of a dialog, the user simulator takes in (1) the goal encoding INLINEFORM1 , (2) the previous user output encoding INLINEFORM2 , (3) the current turn agent input encoding INLINEFORM3 , and updates its internal state conditioning on the previous user state INLINEFORM4 . On the output side, the user simulator firstly emits a user action INLINEFORM5 based on the updated state INLINEFORM6 . Conditioning on this emitted user action and the user dialog state INLINEFORM7 , a set of slot values are emitted. The user action and slot values are then passed to an NLG module to generate the final user utterance."
],
"extractive_spans": [
"LSTM"
],
"free_form_answer": "",
"highlighted_evidence": [
"Figure 1 illustrates the design of the dialog agent. The dialog agent is capable of tracking dialog state, issuing API calls to knowledge bases (KB), and producing corresponding system actions and responses by incorporating the query results, which are key skill sets BIBREF26 in conducting task-oriented dialogs. State of the dialog agent is maintained in the LSTM BIBREF35 state and being updated after the processing of each turn.",
"Figure 2 shows the design of the user simulator. User simulator is given a randomly sampled goal at the beginning of the conversation. Similar to the design of the dialog agent, state of the user simulator is maintained in the state of an LSTM. "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Figure 1 illustrates the design of the dialog agent. The dialog agent is capable of tracking dialog state, issuing API calls to knowledge bases (KB), and producing corresponding system actions and responses by incorporating the query results, which are key skill sets BIBREF26 in conducting task-oriented dialogs. State of the dialog agent is maintained in the LSTM BIBREF35 state and being updated after the processing of each turn. At the INLINEFORM0 th turn of a dialog, the dialog agent takes in (1) the previous agent output encoding INLINEFORM1 , (2) the user input encoding INLINEFORM2 , (3) the retrieved KB result encoding INLINEFORM3 , and updates its internal state conditioning on the previous agent state INLINEFORM4 . With the updated agent state INLINEFORM5 , the dialog agent emits (1) a system action INLINEFORM6 , (2) an estimation of the belief state, and (3) a pointer to an entity in the retrieved query results. These outputs are then passed to an NLG module to generate the final agent response.",
"Figure 2 shows the design of the user simulator. User simulator is given a randomly sampled goal at the beginning of the conversation. Similar to the design of the dialog agent, state of the user simulator is maintained in the state of an LSTM. At the INLINEFORM0 th turn of a dialog, the user simulator takes in (1) the goal encoding INLINEFORM1 , (2) the previous user output encoding INLINEFORM2 , (3) the current turn agent input encoding INLINEFORM3 , and updates its internal state conditioning on the previous user state INLINEFORM4 . On the output side, the user simulator firstly emits a user action INLINEFORM5 based on the updated state INLINEFORM6 . Conditioning on this emitted user action and the user dialog state INLINEFORM7 , a set of slot values are emitted. The user action and slot values are then passed to an NLG module to generate the final user utterance."
],
"extractive_spans": [
"Similar to the design of the dialog agent, state of the user simulator is maintained in the state of an LSTM.",
"State of the dialog agent is maintained in the LSTM BIBREF35"
],
"free_form_answer": "",
"highlighted_evidence": [
"The dialog agent is capable of tracking dialog state, issuing API calls to knowledge bases (KB), and producing corresponding system actions and responses by incorporating the query results, which are key skill sets BIBREF26 in conducting task-oriented dialogs. State of the dialog agent is maintained in the LSTM BIBREF35 state and being updated after the processing of each turn. At the INLINEFORM0 th turn of a dialog, the dialog agent takes in (1) the previous agent output encoding INLINEFORM1 , (2) the user input encoding INLINEFORM2 , (3) the retrieved KB result encoding INLINEFORM3 , and updates its internal state conditioning on the previous agent state INLINEFORM4 . With the updated agent state INLINEFORM5 , the dialog agent emits (1) a system action INLINEFORM6 , (2) an estimation of the belief state, and (3) a pointer to an entity in the retrieved query results. These outputs are then passed to an NLG module to generate the final agent response.",
"User simulator is given a randomly sampled goal at the beginning of the conversation. Similar to the design of the dialog agent, state of the user simulator is maintained in the state of an LSTM. At the INLINEFORM0 th turn of a dialog, the user simulator takes in (1) the goal encoding INLINEFORM1 , (2) the previous user output encoding INLINEFORM2 , (3) the current turn agent input encoding INLINEFORM3 , and updates its internal state conditioning on the previous user state INLINEFORM4 . On the output side, the user simulator firstly emits a user action INLINEFORM5 based on the updated state INLINEFORM6 . Conditioning on this emitted user action and the user dialog state INLINEFORM7 , a set of slot values are emitted. The user action and slot values are then passed to an NLG module to generate the final user utterance."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"27ad5173a576770fedd16193e3e445e00fbecd62",
"7c84044f4ad18ee7da90e00e04c6335675bc54d0"
],
"answer": [
{
"evidence": [
"In the supervised pre-training stage, we train the dialog agent and the user simulator separately using task-oriented dialog corpora. In the RL training stage, we simulate dialogs between the two agents. The user simulator starts the conversation based on a sampled user goal. The dialog agent attempts to estimate the user's goal and complete the task with the user simulator by conducting multi-turn conversation. At the end of each simulated dialog, a reward is generated based on the level of task completion. This reward is used to further optimize the dialog policies of the two agents with RL."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"In the supervised pre-training stage, we train the dialog agent and the user simulator separately using task-oriented dialog corpora."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"In the supervised pre-training stage, we train the dialog agent and the user simulator separately using task-oriented dialog corpora. In the RL training stage, we simulate dialogs between the two agents. The user simulator starts the conversation based on a sampled user goal. The dialog agent attempts to estimate the user's goal and complete the task with the user simulator by conducting multi-turn conversation. At the end of each simulated dialog, a reward is generated based on the level of task completion. This reward is used to further optimize the dialog policies of the two agents with RL.",
"In many of the recent work on using RL for dialog policy learning BIBREF18 , BIBREF30 , BIBREF19 , hand-designed user simulators are used to interact with the dialog agent. Designing a good performing user simulator is not easy. A too basic user simulator as in BIBREF18 may only be able to produce short and simple utterances with limited variety, making the final system lack of robustness against noise in real world user inputs. Advanced user simulators BIBREF31 , BIBREF22 may demonstrate coherent user behavior, but they typically require designing complex rules with domain expertise. We address this challenge using a hybrid learning method, where we firstly bootstrapping a basic functioning user simulator with SL on human annotated corpora, and continuously improving it together with the dialog agent during dialog simulations with deep RL."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"In the supervised pre-training stage, we train the dialog agent and the user simulator separately using task-oriented dialog corpora. ",
" We address this challenge using a hybrid learning method, where we firstly bootstrapping a basic functioning user simulator with SL on human annotated corpora, and continuously improving it together with the dialog agent during dialog simulations with deep RL."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity"
],
"paper_read": [
"no",
"no",
"no",
"no"
],
"question": [
"By how much do they improve upon supervised traning methods?",
"Do they jointly optimize both agents?",
"Which neural network architecture do they use for the dialog agent and user simulator?",
"Do they create the basic dialog agent and basic user simulator separately?"
],
"question_id": [
"55e3daecaf8030ed627e037992402dd0a7dd67ff",
"5522a9eeb06221722052e3c38f9b0d0dbe7c13e6",
"30870a962cf88ac8c8e6b7b795936fd62214f507",
"7ece07a84635269bb19796497847e4517d1e3e61"
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"search_query": [
"",
"",
"",
""
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Fig. 1. Dialog agent network architecture.",
"Fig. 2. User simulator network architecture.",
"Fig. 3. System architecture for joint dialog agent and user simulator policy optimization with deep RL",
"Table 1. Statistics of the dataset",
"Fig. 5. Learning curve for average reward",
"Table 2. Evaluation results on the converted DSTC2 dataset.",
"Fig. 4. Learning curve for task success rate.",
"Fig. 6. Learning curve for average turn size"
],
"file": [
"3-Figure1-1.png",
"4-Figure2-1.png",
"5-Figure3-1.png",
"5-Table1-1.png",
"6-Figure5-1.png",
"6-Table2-1.png",
"6-Figure4-1.png",
"6-Figure6-1.png"
]
} | [
"By how much do they improve upon supervised traning methods?"
] | [
[
"1709.06136-Results and Analysis-2",
"1709.06136-6-Table2-1.png"
]
] | [
"A2C and REINFORCE-joint for joint policy optimization achieve the improvement over SL baseline of 29.4% and 25.7 susses rate, 1.21 AND 1.28 AvgRevard and 0.25 and -1.34 AvgSucccess Turn Size, respectively."
] | 411 |
2003.08769 | Personalized Taste and Cuisine Preference Modeling via Images | With the exponential growth in the usage of social media to share live updates about life, taking pictures has become an unavoidable phenomenon. Individuals unknowingly create a unique knowledge base with these images. The food images, in particular, are of interest as they contain a plethora of information. From the image metadata and using computer vision tools, we can extract distinct insights for each user to build a personal profile. Using the underlying connection between cuisines and their inherent tastes, we attempt to develop such a profile for an individual based solely on the images of his food. Our study provides insights about an individual's inclination towards particular cuisines. Interpreting these insights can lead to the development of a more precise recommendation system. Such a system would avoid the generic approach in favor of a personalized recommendation system. | {
"paragraphs": [
[
"A picture is worth a thousand words. Complex ideas can easily be depicted via an image. An image is a mine of data in the 21st century. With each person taking an average of 20 photographs every day, the number of photographs taken around the world each year is astounding. According to a Statista report on Photographs, an estimated 1.2 trillion photographs were taken in 2017 and 85% of those images were of food. Youngsters can't resist taking drool-worthy pictures of their food before tucking in. Food and photography have been amalgamated into a creative art form where even the humble home cooked meal must be captured in the perfect lighting and in the right angle before digging in. According to a YouGov poll, half of Americans take pictures of their food.",
"The sophistication of smart-phone cameras allows users to capture high quality images on their hand held device. Paired with the increasing popularity of social media platforms such as Facebook and Instagram, it makes sharing of photographs much easier than with the use of a standalone camera. Thus, each individual knowingly or unknowingly creates a food log.",
"A number of applications such as MyFitnessPal, help keep track of a user's food consumption. These applications are heavily dependent on user input after every meal or snack. They often include several data fields that have to be manually filled by the user. This tedious process discourages most users, resulting in a sparse record of their food intake over time. Eventually, this data is not usable. On the other hand, taking a picture of your meal or snack is an effortless exercise.",
"Food images may not give us an insight into the quantity or quality of food consumed by the individual but it can tell us what he/she prefers to eat or likes to eat. We try to tackle the following research question with our work: Can we predict the cuisine of a food item based on just it's picture, with no additional text input from the user?"
],
[
"The work in this field has not delved into extracting any information from food pictures. The starting point for most of the research is a knowledge base of recipes (which detail the ingredients) mapped to a particular cuisine.",
"Han Su et. al.BIBREF0 have worked on investigating if the recipe cuisines can be predicted from the ingredients of recipes. They treat ingredients as features and provide insights on which cuisines are most similar to each other. Finding common ingredients for each cuisine is also an important aspect. Ueda et al. BIBREF1 BIBREF2 proposed a personalized recipe recommendation method based on users' food preferences. This is derived from his/her recipe browsing activities and cooking history.",
"Yang et al BIBREF3 believed the key to recognizing food is exploiting the spatial relationships between different ingredients (such as meat and bread in a sandwich). They propose a new representation for food items that calculates pairwise statistics between local features computed over a soft pixel-level segmentation of the image into eight ingredient types. Then they accumulate these statistics in a multi-dimensional histogram, which is then used as a feature vector for a discriminative classifier.",
"Existence of huge cultural diffusion among cuisines is shown by the work carried out by S Jayaraman et al in BIBREF4. They explore the performance of each classifier for a given type of dataset under unsupervised learning methods(Linear support Vector Classifier (SVC), Logistic Regression, Random Forest Classifier and Naive Bayes).",
"H Holste et al's work BIBREF5 predicts the cuisine of a recipe given the list of ingredients. They eliminate distribution of ingredients per recipe as a weak feature. They focus on showing the difference in performance of models with and without tf-idf scoring. Their custom tf-idf scoring model performs well on the Yummly Dataset but is considerably naive.",
"R M Kumar et al BIBREF6 use Tree Boosting algorithms(Extreme Boost and Random Forest) to predict cuisine based on ingredients. It is seen from their work that Extreme Boost performs better than Random Forest.",
"Teng et al BIBREF7 have studied substitutable ingredients using recipe reviews by creating substitute ingredient graphs and forming clusters of such ingredients."
],
[
"The YummlyBIBREF8 dataset is used to understand how ingredients can be used to determine the cuisine. The dataset consists of 39,774 recipes. Each recipe is associated with a particular cuisine and a particular set of ingredients. Initial analysis of the data-set revealed a total of 20 different cuisines and 6714 different ingredients. Italian cuisine, with 7383 recipes, overshadows the dataset.",
"The numbers of recipes for the 19 cuisines is quite imbalanced.BIBREF9 The following graph shows the count of recipes per cuisine.",
"User specific data is collected from social media platforms such as Facebook and Instagram with the users permission. These images are then undergo a series of pre processing tasks. This helps in cleaning the data."
],
[
"The real task lies in converting the image into interpretable data that can be parsed and used. To help with this, a data processing pipeline is built. The details of the pipeline are discussed below. The data pipeline extensively uses the ClarifaiBIBREF8 image recognition model. The 3 models used extensively are:",
"The General Model : It recognizes over 11,000 different concepts and is a great all purpose solution. We have used this model to distinguish between Food images and Non-Food images.",
"The Food Model : It recognizes more than 1,000 food items in images down to the ingredient level. This model is used to identify the ingredients in a food image.",
"The General Embedding Model : It analyzes images and returns numerical vectors that represent the input images in a 1024-dimensional space. The vector representation is computed by using Clarifai’s ‘General’ model. The vectors of visually similar images will be close to each other in the 1024-dimensional space. This is used to eliminate multiple similar images of the same food item."
],
[
"A cuisine can often be identified by some distinctive ingredientsBIBREF10. Therefore, we performed a frequency test to find the most occurring ingredients in each cuisine. Ingredients such as salt and water tend to show up at the top of these lists quite often but they are not distinctive ingredients. Hence, identification of unique ingredients is an issue that is overcome by individual inspection. For example:"
],
[
"A dataset of 275 images of different food items from different cuisines was compiled. These images were used as input to the Clarifai Food Model. The returned tags were used to create a knowledge database. When the general model labels for an image with high probability were a part of this database, the image was classified as a food image. The most commonly occurring food labels are visualized in Fig 3.",
""
],
[
"To build a clean database for the user, images with people are excluded. This includes images with people holding or eating food. This is again done with the help of the descriptive labels returned by the Clarifai General Model. Labels such as \"people\" or \"man/woman\" indicate the presence of a person and such images are discarded.",
""
],
[
"Duplicate images are removed by accessing the EXIF data of each image. Images with the same DateTime field are considered as duplicates and one copy is removed from the database.",
""
],
[
"NLTK tools were used to remove low content adjectives from the labels/concepts returned from the Clarifai Models. This ensures that specific ingredient names are extracted without their unnecessary description. The Porter Stemmer Algorithm is used for removing the commoner morphological and inflectional endings from words."
],
[
"From the food images(specific to each user), each image's descriptive labels are obtained from the Food Model. The Clarifai Food Model returns a list of concepts/labels/tags with corresponding probability scores on the likelihood that these concepts are contained within the image. The sum of the probabilities of each of these labels occurring in each image is plotted against the label in Fig 4.",
"The count of each of the labels occurring in each image is also plotted against each of the labels in Fig 5."
],
[
"Sometimes Clarifai returns the name of the dish itself. For example: \"Tacos\" which can be immediately classified as Mexican. There is no necessity to now map the ingredients to find the cuisine. Therefore, it is now necessary to maintain another database of native dishes from each cuisine. This database was built using the most popular or most frequently occurring dishes from each of the cuisines.",
"When no particular dish name was returned by the API, the ingredients with a probability of greater than 0.75 are selected from the output of the API. These ingredients are then mapped to the unique and frequently occurring ingredients from each cuisine. If more than 10 ingredients occur from a particular cuisine, the dish is classified into that cuisine. A radar map is plotted to understand the preference of the user. In this case, we considered only 10 cuisines."
],
[
"A more sophisticated approach to classify based on the ingredients was adopted by using the K Nearest Neighbors Model. The Yummly dataset from Kaggle is used to train the model. The ingredients extracted from the images are used as a test set. The model was run successfully for k-values ranging from 1-25. The radar charts for some of the k values are shown in Fig 7, 8 and 9.",
"Thus from these charts, we see that the user likes to eat Italian and Mexican food on most occasions. This is also in sync with the rudimentary method that we had used earlier."
],
[
"In this paper, we present an effortless method to build a personal cuisine preference model. From images of food taken by each user, the data pipeline takes over, resulting in a visual representation of the user's preference. With more focus on preprocessing and natural text processing, it becomes important to realize the difficulty presented by the problem. We present a simple process to extract maximum useful information from the image. We observe that there is significant overlap between the ingredients from different cuisines and the identified unique ingredients might not always be picked up from the image. Although, this similarity is what helps when classifying using the KNN model. For the single user data used, we see that the 338 images are classified as food images. It is observed that Italian and Mexican are the most preferred cuisines. It is also seen that as K value increases, the number of food images classified into Italian increases significantly. Classification into cuisines like Filipino, Vietnamese and Cajun_Creole decreases. This may be attributed to the imbalanced Yummly Dataset that is overshadowed by a high number of Italian recipes.",
"Limitations : The quality of the image and presentation of food can drastically affect the system. Items which look similar in shape and colour can throw the system off track. However, with a large database this should not matter much.",
"Future Directions : The cuisine preferences determined for a user can be combined with the weather and physical activity of the user to build a more specific suggestive model. For example, if the meta data of the image were to be extracted and combined with the weather conditions for that date and time then we would be able to predict the type of food the user prefers during a particular weather. This would lead to a sophisticated recommendation system."
]
],
"section_name": [
"INTRODUCTION",
"RELATED WORK",
"DATASET",
"METHODOLOGY",
"METHODOLOGY ::: DATA PRE PROCESSING ::: Distinctive Ingredients",
"METHODOLOGY ::: DATA PRE PROCESSING ::: To Classify Images as Food Images",
"METHODOLOGY ::: DATA PRE PROCESSING ::: To Remove Images with People",
"METHODOLOGY ::: DATA PRE PROCESSING ::: To Remove Duplicate Images",
"METHODOLOGY ::: DATA PRE PROCESSING ::: Natural Language Processing",
"METHODOLOGY ::: Basic Observations",
"METHODOLOGY ::: Rudimentary Method of Classification",
"METHODOLOGY ::: KNN Model for Classification",
"CONCLUSIONS"
]
} | {
"answers": [
{
"annotation_id": [
"282696575ffc20f26d205ec77f649ce5c76a8111",
"d799bb0f4cb189a6a45b542646ccd04441840d85"
],
"answer": [
{
"evidence": [
"METHODOLOGY",
"The real task lies in converting the image into interpretable data that can be parsed and used. To help with this, a data processing pipeline is built. The details of the pipeline are discussed below. The data pipeline extensively uses the ClarifaiBIBREF8 image recognition model. The 3 models used extensively are:",
"The General Model : It recognizes over 11,000 different concepts and is a great all purpose solution. We have used this model to distinguish between Food images and Non-Food images.",
"The Food Model : It recognizes more than 1,000 food items in images down to the ingredient level. This model is used to identify the ingredients in a food image.",
"The General Embedding Model : It analyzes images and returns numerical vectors that represent the input images in a 1024-dimensional space. The vector representation is computed by using Clarifai’s ‘General’ model. The vectors of visually similar images will be close to each other in the 1024-dimensional space. This is used to eliminate multiple similar images of the same food item.",
"A dataset of 275 images of different food items from different cuisines was compiled. These images were used as input to the Clarifai Food Model. The returned tags were used to create a knowledge database. When the general model labels for an image with high probability were a part of this database, the image was classified as a food image. The most commonly occurring food labels are visualized in Fig 3.",
"To build a clean database for the user, images with people are excluded. This includes images with people holding or eating food. This is again done with the help of the descriptive labels returned by the Clarifai General Model. Labels such as \"people\" or \"man/woman\" indicate the presence of a person and such images are discarded.",
"From the food images(specific to each user), each image's descriptive labels are obtained from the Food Model. The Clarifai Food Model returns a list of concepts/labels/tags with corresponding probability scores on the likelihood that these concepts are contained within the image. The sum of the probabilities of each of these labels occurring in each image is plotted against the label in Fig 4.",
"A more sophisticated approach to classify based on the ingredients was adopted by using the K Nearest Neighbors Model. The Yummly dataset from Kaggle is used to train the model. The ingredients extracted from the images are used as a test set. The model was run successfully for k-values ranging from 1-25. The radar charts for some of the k values are shown in Fig 7, 8 and 9."
],
"extractive_spans": [],
"free_form_answer": "Supervised methods are used to identify the dish and ingredients in the image, and an unsupervised method (KNN) is used to create the food profile.",
"highlighted_evidence": [
"METHODOLOGY\nThe real task lies in converting the image into interpretable data that can be parsed and used. To help with this, a data processing pipeline is built. The details of the pipeline are discussed below. The data pipeline extensively uses the ClarifaiBIBREF8 image recognition model. The 3 models used extensively are:\n\nThe General Model : It recognizes over 11,000 different concepts and is a great all purpose solution. We have used this model to distinguish between Food images and Non-Food images.\n\nThe Food Model : It recognizes more than 1,000 food items in images down to the ingredient level. This model is used to identify the ingredients in a food image.\n\nThe General Embedding Model : It analyzes images and returns numerical vectors that represent the input images in a 1024-dimensional space. The vector representation is computed by using Clarifai’s ‘General’ model. The vectors of visually similar images will be close to each other in the 1024-dimensional space. This is used to eliminate multiple similar images of the same food item.",
"A dataset of 275 images of different food items from different cuisines was compiled. These images were used as input to the Clarifai Food Model. The returned tags were used to create a knowledge database. When the general model labels for an image with high probability were a part of this database, the image was classified as a food image. ",
"To build a clean database for the user, images with people are excluded. This includes images with people holding or eating food. This is again done with the help of the descriptive labels returned by the Clarifai General Model. Labels such as \"people\" or \"man/woman\" indicate the presence of a person and such images are discarded.",
"From the food images(specific to each user), each image's descriptive labels are obtained from the Food Model. The Clarifai Food Model returns a list of concepts/labels/tags with corresponding probability scores on the likelihood that these concepts are contained within the image. ",
"A more sophisticated approach to classify based on the ingredients was adopted by using the K Nearest Neighbors Model. The Yummly dataset from Kaggle is used to train the model. The ingredients extracted from the images are used as a test set. The model was run successfully for k-values ranging from 1-25."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"From the food images(specific to each user), each image's descriptive labels are obtained from the Food Model. The Clarifai Food Model returns a list of concepts/labels/tags with corresponding probability scores on the likelihood that these concepts are contained within the image. The sum of the probabilities of each of these labels occurring in each image is plotted against the label in Fig 4.",
"A more sophisticated approach to classify based on the ingredients was adopted by using the K Nearest Neighbors Model. The Yummly dataset from Kaggle is used to train the model. The ingredients extracted from the images are used as a test set. The model was run successfully for k-values ranging from 1-25. The radar charts for some of the k values are shown in Fig 7, 8 and 9."
],
"extractive_spans": [],
"free_form_answer": "Unsupervised",
"highlighted_evidence": [
"The Clarifai Food Model returns a list of concepts/labels/tags with corresponding probability scores on the likelihood that these concepts are contained within the image.",
"A more sophisticated approach to classify based on the ingredients was adopted by using the K Nearest Neighbors Model. The Yummly dataset from Kaggle is used to train the model. The ingredients extracted from the images are used as a test set."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"5d0eb078a7e7b18161a784184cd373b2f2c9cbf2",
"a27fd9cba407b9b0f5560c2ae703e9d4b694c7f5"
],
"answer": [
{
"evidence": [
"A more sophisticated approach to classify based on the ingredients was adopted by using the K Nearest Neighbors Model. The Yummly dataset from Kaggle is used to train the model. The ingredients extracted from the images are used as a test set. The model was run successfully for k-values ranging from 1-25. The radar charts for some of the k values are shown in Fig 7, 8 and 9.",
"Thus from these charts, we see that the user likes to eat Italian and Mexican food on most occasions. This is also in sync with the rudimentary method that we had used earlier."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"The ingredients extracted from the images are used as a test set. The model was run successfully for k-values ranging from 1-25. The radar charts for some of the k values are shown in Fig 7, 8 and 9.\n\nThus from these charts, we see that the user likes to eat Italian and Mexican food on most occasions."
],
"unanswerable": false,
"yes_no": true
},
{
"evidence": [
"A more sophisticated approach to classify based on the ingredients was adopted by using the K Nearest Neighbors Model. The Yummly dataset from Kaggle is used to train the model. The ingredients extracted from the images are used as a test set. The model was run successfully for k-values ranging from 1-25. The radar charts for some of the k values are shown in Fig 7, 8 and 9.",
"Thus from these charts, we see that the user likes to eat Italian and Mexican food on most occasions. This is also in sync with the rudimentary method that we had used earlier."
],
"extractive_spans": [],
"free_form_answer": "The study features a radar chart describing inclinations toward particular cuisines, but they do not perform any experiments",
"highlighted_evidence": [
"The model was run successfully for k-values ranging from 1-25. The radar charts for some of the k values are shown in Fig 7, 8 and 9.\n\nThus from these charts, we see that the user likes to eat Italian and Mexican food on most occasions. This is also in sync with the rudimentary method that we had used earlier."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
]
}
],
"nlp_background": [
"zero",
"zero"
],
"paper_read": [
"no",
"no"
],
"question": [
"Is this done in form of unsupervised (clustering) or suppervised learning?",
"Does this study perform experiments to prove their claim that indeed personalized profiles will have inclination towards particular cuisines?"
],
"question_id": [
"f94cea545f745994800c1fb4654d64d1384f2c26",
"f3b851c9063192c86a3cc33b2328c02efa41b668"
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"search_query": [
"computer vision",
"computer vision"
],
"topic_background": [
"research",
"research"
]
} | {
"caption": [
"TABLE I UNIQUE INGREDIENTS",
"Fig. 1. Count of Recipes per Cuisine",
"Fig. 2. The above diagram represents the flow of the data pipeline along with the Models used.",
"Fig. 3. The top 20 most frequently occurring food labels",
"Fig. 4. The sum of the probabilities of each label occurring in the images",
"Fig. 5. Count of each of the labels occurring in each image",
"Fig. 7. Radar chart indicating the frequency of cuisines in the prediction for K value = 2 i.e 2 nearest neighbors",
"Fig. 6. Radar chart depicting the frequency of predicted cuisines via the Rudimentary Method",
"Fig. 8. Radar chart indicating the frequency of cuisines in the prediction for K value = 10 i.e 10 nearest neighbors",
"Fig. 9. Radar chart indicating the frequency of cuisines in the prediction for K value = 20 i.e 20 nearest neighbors"
],
"file": [
"2-TableI-1.png",
"2-Figure1-1.png",
"3-Figure2-1.png",
"3-Figure3-1.png",
"3-Figure4-1.png",
"4-Figure5-1.png",
"4-Figure7-1.png",
"4-Figure6-1.png",
"4-Figure8-1.png",
"4-Figure9-1.png"
]
} | [
"Is this done in form of unsupervised (clustering) or suppervised learning?",
"Does this study perform experiments to prove their claim that indeed personalized profiles will have inclination towards particular cuisines?"
] | [
[
"2003.08769-METHODOLOGY-0",
"2003.08769-METHODOLOGY ::: DATA PRE PROCESSING ::: To Classify Images as Food Images-0",
"2003.08769-METHODOLOGY-1",
"2003.08769-METHODOLOGY-3",
"2003.08769-METHODOLOGY ::: DATA PRE PROCESSING ::: To Remove Images with People-0",
"2003.08769-METHODOLOGY-2",
"2003.08769-METHODOLOGY ::: Basic Observations-0",
"2003.08769-METHODOLOGY ::: KNN Model for Classification-0"
],
[
"2003.08769-METHODOLOGY ::: KNN Model for Classification-1",
"2003.08769-METHODOLOGY ::: KNN Model for Classification-0"
]
] | [
"Unsupervised",
"The study features a radar chart describing inclinations toward particular cuisines, but they do not perform any experiments"
] | 412 |
1909.13466 | Regressing Word and Sentence Embeddings for Regularization of Neural Machine Translation | In recent years, neural machine translation (NMT) has become the dominant approach in automated translation. However, like many other deep learning approaches, NMT suffers from overfitting when the amount of training data is limited. This is a serious issue for low-resource language pairs and many specialized translation domains that are inherently limited in the amount of available supervised data. For this reason, in this paper we propose regressing word (ReWE) and sentence (ReSE) embeddings at training time as a way to regularize NMT models and improve their generalization. During training, our models are trained to jointly predict categorical (words in the vocabulary) and continuous (word and sentence embeddings) outputs. An extensive set of experiments over four language pairs of variable training set size has shown that ReWE and ReSE can outperform strong state-of-the-art baseline models, with an improvement that is larger for smaller training sets (e.g., up to +5.15 BLEU points in Basque-English translation). Visualizations of the decoder's output space show that the proposed regularizers improve the clustering of unique words, facilitating correct predictions. In a final experiment on unsupervised NMT, we show that ReWE and ReSE are also able to improve the quality of machine translation when no parallel data are available. | {
"paragraphs": [
[
"Machine translation (MT) is a field of natural language processing (NLP) focussing on the automatic translation of sentences from a source language to a target language. In recent years, the field has been progressing quickly mainly thanks to the advances in deep learning and the advent of neural machine translation (NMT). The first NMT model was presented in 2014 by Sutskever et al. BIBREF0 and consisted of a plain encoder-decoder architecture based on recurrent neural networks (RNNs). In the following years, a series of improvements has led to major performance increases, including the attention mechanism (a word-aligment model between words in the source and target sentences) BIBREF1, BIBREF2 and the transformer (a non-recurrent neural network that offers an alternative to RNNs and makes NMT highly parallelizable) BIBREF3. As a result, NMT models have rapidly outperformed traditional approaches such as phrase-based statistical machine translation (PBSMT) BIBREF4 in challenging translation contexts (e.g., the WMT conference series). Nowadays, the majority of commercial MT systems utilise NMT in some form.",
"However, NMT systems are not exempt from limitations. The main is their tendence to overfit the training set due to their large number of parameters. This issue is common to many other tasks that use deep learning models and it is caused to a large extent by the way these models are trained: maximum likelihood estimation (MLE). As pointed out by Elbayad et al. BIBREF5, in the case of machine translation, MLE has two clear shortcomings that contribute to overfitting:",
"Single ground-truth reference: Usually, NMT models are trained with translation examples that have a single reference translation in the target language. MLE tries to give all the probability to the words of the ground-truth reference and zero to all others. Nevertheless, a translation that uses different words from the reference (e.g. paraphrase sentences, synonyms) can be equally correct. Standard MLE training is not able to leverage this type of information since it treats every word other than the ground truth as completely incorrect.",
"Exposure biasBIBREF6: NMT models are trained with “teacher forcing”, which means that the previous word from the reference sentence is given as input to the decoder for the prediction of the next. This is done to speed up training convergence and avoid prediction drift. However, at test time, due to the fact that the reference is not available, the model has to rely on its own predictions and the performance can be drastically lower.",
"Both these limitations can be mitigated with sufficient training data. In theory, MLE could achieve optimal performance with infinite training data, but in practice this is impossible as the available resources are always limited. In particular, when the training data are scarce such as in low-resource language pairs or specific translation domains, NMT models display a modest performance, and other traditional approaches (e.g., PBSMT)BIBREF7 often obtain better accuracies. As such, generalization of NMT systems still calls for significant improvement.",
"In our recent work BIBREF8, we have proposed a novel regularization technique that is based on co-predicting words and their embeddings (“regressing word embeddings”, or ReWE for short). ReWE is a module added to the decoder of a sequence-to-sequence model so that, during training, the model is trained to jointly predict the next word in the translation (categorical value) and its pre-trained word embedding (continuous value). This approach can leverage the contextual information embedded in pre-trained word vectors to achieve more accurate translations at test time. ReWE has been showed to be very effective over low/medium size training sets BIBREF8. In this paper, we extend this idea to its natural counterpart: sentence embedding. We propose regressing sentence embeddings (ReSE) as an additional regularization method to further improve the accuracy of the translations. ReSE uses a self-attention mechanism to infer a fixed-dimensional sentence vector for the target sentence. During training, the model is trained to regress this inferred vector towards the pre-trained sentence embedding of the ground-truth sentence. The main contributions of this paper are:",
"The proposal of a new regularization technique for NMT based on sentence embeddings (ReSE).",
"Extensive experimentation over four language pairs of different dataset sizes (from small to large) with both word and sentence regularization. We show that using both ReWE and ReSE can outperform strong state-of-the-art baselines based on long short-term memory networks (LSTMs) and transformers.",
"Insights on how ReWE and ReSE help to improve NMT models. Our analysis shows that these regularizers improve the organization of the decoder's output vector space, likely facilitating correct word classification.",
"Further experimentation of the regularizer on unsupervised machine translation, showing that it can improve the quality of the translations even in the absence of parallel training data.",
"The rest of this paper is organized as follows. Section SECREF2 presents and discusses the related work. Section SECREF3 describes the model used as baseline while Section SECREF4 presents the proposed regularization techniques, ReWE and ReSE. Section SECREF5 describes the experiments and analyzes the experimental results. Finally, Section SECREF6 concludes the paper."
],
[
"The related work is organized over the three main research subareas that have motivated this work: regularization techniques, word and sentence embeddings and unsupervised NMT."
],
[
"In recent years, the research community has dedicated much attention to the problem of overfitting in deep neural models. Several regularization approaches have been proposed in turn such as dropout BIBREF9, BIBREF10, data augmentation BIBREF11 and multi-task learning BIBREF12, BIBREF13. Their common aim is to encourage the model to learn parameters that allow for better generalization.",
"In NMT, too, mitigating overfitting has been the focus of much research. As mentioned above, the two, main acknowledged problems are the single ground-truth reference and the exposure bias. For the former, Fadee et al. BIBREF11 have proposed augmenting the training data with synthetically-generated sentence pairs containing rare words. The intuition is that the model will be able to see the vocabulary's words in more varied contexts during training. Kudo BIBREF14 has proposed using variable word segmentations to improve the model's robustness, achieving notable improvements in low-resource languages and out-of-domain settings. Another line of work has focused on “smoothing” the output probability distribution over the target vocabulary BIBREF5, BIBREF15. These approaches use token-level and sentence-level reward functions that push the model to distribute the output probability mass over words other than the ground-truth reference. Similarly, Ma et al. BIBREF16 have added a bag-of-words term to the training objective, assuming that the set of correct translations share similar bag-of-word vectors.",
"There has also been extensive work on addressing the exposure bias problem. An approach that has proved effective is the incorporation of predictions in the training, via either imitation learning BIBREF17, BIBREF18, BIBREF19 or reinforcement learning BIBREF20, BIBREF21. Another approach, that is computationally more efficient, leverages scheduled sampling to obtain a stochastic mixture of words from the reference and the predictions BIBREF6. In turn, Wu et al. BIBREF22 have proposed a soft alignment algorithm to alleviate the missmatches between the reference translations and the predictions obtained with scheduled sampling; and Zhang et al.BIBREF23 have introduced two regularization terms based on the Kullback-Leibler (KL) divergence to improve the agreement of sentences predicted from left-to-right and right-to-left."
],
[
"Word vectors or word embeddings BIBREF24, BIBREF25, BIBREF26 are ubiquitous in NLP since they provide effective input features for deep learning models. Recently, contextual word vectors such as ELMo BIBREF27, BERT BIBREF28 and the OpenAI transformer BIBREF29 have led to remarkable performance improvements in several language understanding tasks. Additionally, researchers have focused on developing embeddings for entire sentences and documents as they may facilitate several textual classification tasks BIBREF30, BIBREF31, BIBREF32, BIBREF33.",
"In NMT models, word embeddings play an important role as input of both the encoder and the decoder. A recent paper has shown that contextual word embeddings provide effective input features for both stages BIBREF34. However, very little research has been devoted to using word embeddings as targets. Kumar and Tsvetkov BIBREF35 have removed the typical output softmax layer, forcing the decoder to generate continuous outputs. At inference time, they use a nearest-neighbour search in the word embedding space to select the word to predict. Their model allows for significantly faster training while performing on par with state-of-the-art models. Our approach differs from BIBREF35 in that our decoder generates continuous outputs in parallel with the standard softmax layer, and only during training to provide regularization. At inference time, the continuous output is ignored and prediction operates as in a standard NMT model. To the best of our knowledge, our model is the first to use embeddings as targets for regularization, and at both word and sentence level."
],
[
"The amount of available parallel, human-annotated corpora for training NMT systems is at times very scarce. This is the case of many low-resource languages and specialized translation domains (e.g., health care). Consequently, there has been a growing interest in developing unsupervised NMT models BIBREF36, BIBREF37, BIBREF38 which do not require annotated data for training. Such models learn to translate by only using monolingual corpora, and even though their accuracy is still well below that of their supervised counterparts, they have started to reach interesting levels. The architecture of unsupervised NMT systems differs from that of supervised systems in that it combines translation in both directions (source-to-target and target-to-source). Typically, a single encoder is used to encode sentences from both languages, and a separate decoder generates the translations in each language. The training of such systems follows three stages: 1) building a bilingual dictionary and word embedding space, 2) training two monolingual language models as denoising autoencoders BIBREF39, and 3) converting the unsupervised problem into a weakly-supervised one by use of back-translations BIBREF40. For more details on unsupervised NMT systems, we refer the reader to the original papers BIBREF36, BIBREF37, BIBREF38.",
"In this paper, we explore using the proposed regularization approach also for unsupervised NMT. Unsupervised NMT models still require very large amounts of monolingual data for training, and often such amounts are not available. Therefore, these models, too, are expected to benefit from improved regularization."
],
[
"In this section, we describe the NMT model that has been used as the basis for the proposed regularizer. It is a neural encoder-decoder architecture with attention BIBREF1 that can be regarded as a strong baseline as it incorporates both LSTMs and transformers as modules. Let us assume that $\\textbf {x}:\\lbrace x_1 \\dots x_n\\rbrace $ is the source sentence with $n$ tokens and $\\textbf {y}:\\lbrace y_1 \\dots y_m\\rbrace $ is the target translated sentence with $m$ tokens. First, the words in the source sentence are encoded into their word embeddings by an embedding layer:",
"and then the source sentence is encoded by a sequential module into its hidden vectors, ${\\textbf {h}_1 \\dots \\textbf {h}_n}$:",
"Next, for each decoding step $j=1 \\ldots m$, an attention network provides a context vector $\\textbf {c}_j$ as a weighted average of all the encoded vectors, $\\textbf {h}_1 \\dots \\textbf {h}_n$, conditional on the decoder output at the previous step, $\\textbf {s}_{j-1}$ (Eq. DISPLAY_FORM17). For this network, we have used the attention mechanism of Badhdanau et al.BIBREF1.",
"Given the context vector, $\\textbf {c}_j$, the decoder output at the previous step, $\\textbf {s}_{j-1}$, and the word embedding of the previous word in the target sentence, $\\textbf {y}^{e}_{j}$ (Eq. DISPLAY_FORM18), the decoder generates vector $\\textbf {s}_j$ (Eq. DISPLAY_FORM19). This vector is later transformed into a larger vector of the same size as the target vocabulary via learned parameters $\\textbf {W}$, $\\textbf {b}$ and a softmax layer (Eq. DISPLAY_FORM20). The resulting vector, $\\textbf {p}_j$, is the inferred probability distribution over the target vocabulary at decoding step $j$. Fig. FIGREF12 depicts the full architecture of the baseline model.",
"The model is trained by minimizing the negative log-likelihood (NLL) which can be expressed as:",
"where the probability of ground-truth word ${y}_j$ has been noted as $\\textbf {p}_{j}({y}_{j})$. Minimizing the NLL is equivalent to MLE and results in assigning maximum probability to the words in the reference translation, $y_j, j=1 \\ldots m$. The training objective is minimized with standard backpropagation over the training data, and at inference time the model uses beam search for decoding."
],
[
"As mentioned in the introduction, MLE suffers from some limitations when training a neural machine translation system. To alleviate these shortcomings, in our recent paper BIBREF8 we have proposed a new regularization method based on regressing word embeddings. In this paper, we extend this idea to sentence embeddings."
],
[
"Pre-trained word embeddings are trained on large monolingual corpora by measuring the co-occurences of words in text windows (“contexts”). Words that occur in similar contexts are assumed to have similar meaning, and hence, similar vectors in the embedding space. Our goal with ReWE is to incorporate the information embedded in the word vector in the loss function to encourage model regularization.",
"In order to generate continuous vector representations as outputs, we have added a ReWE block to the NMT baseline (Fig. FIGREF14). At each decoding step, the ReWE block receives the hidden vector from the decoder, $\\textbf {s}_j$, as input and outputs another vector, $\\textbf {e}_j$, of the same size of the pre-trained word embeddings:",
"where $\\textbf {W}_1$, $\\textbf {W}_2$, $\\textbf {b}_1$ and $\\textbf {b}_2$ are the learnable parameters of a two-layer feed-forward network with a Rectified Linear Unit (ReLU) as activation function between the layers. Vector $\\textbf {e}_j$ aims to reproduce the word embedding of the target word, and thus the distributional properties (or co-occurrences) of its contexts. During training, the model is guided to regress the predicted vector, $\\textbf {e}_j$, towards the word embedding of the ground-truth word, $\\textbf {y}^{e}_j$. This is achieved by using a loss function that computes the distance between $\\textbf {e}_j$ and $\\textbf {y}^{e}_j$ (Eq. DISPLAY_FORM24). Previous work BIBREF8 has showed that the cosine distance is empirically an effective distance between word embeddings and has thus been adopted as loss. This loss and the original NLL loss are combined together with a tunable hyper-parameter, $\\lambda $ (Eq. DISPLAY_FORM25). Therefore, the model is trained to jointly predict both a categorical and a continuous representation of the words. Even though the system is performing a single task, this setting could also be interpreted as a form of multi-task learning with different representations of the same targets.",
"The word vectors of both the source ($\\textbf {x}^{e}$) and target ($\\textbf {y}^{e}$) vocabularies are initialized with pre-trained embeddings, but updated during training. At inference time, we ignore the outputs of the ReWE block and we perform translation using only the categorical prediction."
],
[
"Sentence vectors, too, have been extensively used as input representations in many NLP tasks such as text classification, paraphrase detection, natural language inference and question answering. The intuition behind them is very similar to that of word embeddings: sentences with similar meanings are expected to be close to each other in vector space. Many off-the-shelf sentence embedders are currently available and they can be easily integrated in deep learning models. Based on similar assumptions to the case of word embeddings, we have hypothesized that an NMT model could also benefit from a regularization term based on regressing sentence embeddings (the ReSE block in Fig. FIGREF14).",
"The main difference of ReSE compared to ReWE is that there has to be a single regressed vector per sentence rather than one per word. Thus, ReSE first uses a self-attention mechanism to learn a weighted average of the decoder's hidden vectors, $\\textbf {s}_1 \\dots \\textbf {s}_m$:",
"where the $\\alpha _j$ attention weights are obtained from Eqs. DISPLAY_FORM28 and DISPLAY_FORM29, and $\\textbf {U}_1$ and $\\textbf {U}_2$ are learnable parameters. Then, a two-layered neural network similar to ReWE's predicts the sentence vector, $\\textbf {r}$ (Eq. DISPLAY_FORM30). Parameters $\\textbf {W}_3$, $\\textbf {W}_4$, $\\textbf {b}_3$ and $\\textbf {b}_4$ are also learned during training.",
"Similarly to ReWE, a loss function computes the cosine distance between the predicted sentence vector, $\\textbf {r}$, and the sentence vector inferred with the off-the-shelf sentence embedder, $\\textbf {y}^r$ (Eq. DISPLAY_FORM31). This loss is added to the previous objective as an extra term with an additional, tunable hyper-parameter, $\\beta $:",
"Since the number of sentences is significantly lower than that of the words, $\\beta $ typically needs to be higher than $\\lambda $. Nevertheless, we tune it blindly using the validation set. The reference sentence embedding, $\\textbf {y}^{r}$, can be inferred with any off-the-shelf pre-trained embedder. At inference time, the model solely relies on the categorical prediction and ignores the predicted word and sentence vectors."
],
[
"We have carried out an ample range of experiments to probe the performance of the proposed regularization approaches. This section describes the datasets, the models and the hyper-parameters used, and presents and discusses all results."
],
[
"Four different language pairs have been selected for the experiments. The datasets' size varies from tens of thousands to millions of sentences to test the regularizers' ability to improve translation over a range of low-resource and high-resource language pairs.",
"De-En: The German-English dataset (de-en) has been taken from the WMT18 news translation shared task. The training set contains over 5M sentence pairs collected from the Europarl, CommonCrawl and Newscommentary parallel corpora. As validation and test sets, we have used the newstest2017 and the newstest2018 datasets, respectively. We consider this dataset as a high-resource case.",
"En-Fr: The English-French dataset (en-fr) has been sourced from the IWSLT 2016 translation shared task. This corpus contains translations of TED talks of very diverse topics. The training data provided by the organizers consist of $219,777$ translations which allow us to categorize this dataset as low/medium-resource. Following Denkowski and Neubig BIBREF41, the validation set has been formed by merging the 2013 and 2014 test sets from the same shared task, and the test set has been formed with the 2015 and 2016 test sets.",
"Cs-En: The Czech-English dataset (cs-en) is also from the IWSLT 2016 TED talks translation task. However, this dataset is approximately half the size of en-fr as its training set consists of $114,243$ sentence pairs. Again following Denkowski and Neubig BIBREF41), the validation set has been formed by merging the 2012 and 2013 test sets, and the test set by merging the 2015 and 2016 test sets. We regard this dataset as low-resource.",
"Eu-En: The Basque-English dataset (eu-en) has been collected from the WMT16 IT-domain translation shared task. This is the smallest dataset, with only $89,413$ sentence pairs in the training set. However, only $2,000$ sentences in the training set have been translated by human annotators. The remaining sentence pairs are translations of IT-domain short phrases and Wikipedia titles. Therefore, we consider this dataset as extremely low-resource. It must be said that translations in the IT domain are somehow easier than in the news domain, as this domain is very specific and the wording of the sentences are less varied. For this dataset, we have used the validation and test sets ($1,000$ sentences each) provided in the shared task.",
"All the datasets have been pre-processed with moses-tokenizer. Additionally, words have been split into subword units using byte pair encoding (BPE) BIBREF42. For the BPE merge operations parameter, we have used $32,000$ (the default value) for all the datasets, except for eu-en where we have set it to $8,000$ since this dataset is much smaller. Experiments have been performed at both word and subword level since morphologically-rich languages such as German, Czech and Basque can benefit greatly from operating the NMT model at subword level."
],
[
"To implement ReWE and ReSE, we have modified the popular OpenNMT open-source toolkit BIBREF43. Two variants of the standard OpenNMT model have been used as baselines: the LSTM and the transformer, described hereafter.",
"LSTM: A strong NMT baseline was prepared by following the indications given by Denkowski and Neubig BIBREF41. The model uses a bidirectional LSTM BIBREF44 for the encoder and a unidirectional LSTM for the decoder, with two layers each. The size of the word embeddings was set to 300d and that of the sentence embeddings to 512d. The sizes of the hidden vectors of both LSTMs and of the attention network were set to 1024d. In turn, the LSTM's dropout rate was set to $0.2$ and the training batch size was set to 40 sentences. As optimizer, we have used Adam BIBREF45 with a learning rate of $0.001$. During training, the learning rate was halved with simulated annealing upon convergence of the perplexity over the validation set, which was evaluated every $25,000$ training sentences. Training was stopped after halving the learning rate 5 times.",
"Transformer: The transformer network BIBREF3 has somehow become the de-facto neural network for the encoder and decoder of NMT pipelines thanks to its strong empirical accuracy and highly-parallelizable training. For this reason, we have used it as another baseline for our model. For its hyper-parameters, we have used the default values set by the developers of OpenNMT. Both the encoder and the decoder are formed by a 6-layer network. The sizes of the word embeddings, the hidden vectors and the attention network have all been set to either 300d or 512d, depending on the best results over the validation set. The head count has been set correspondingly to either 6 or 8, and the dropout rate to $0.2$ as for the LSTM. The model was also optimized using Adam, but with a much higher learning rate of 1 (OpenAI default). For this model, we have not used simulated annealing since some preliminary experiments showed that it did penalize performance. The batch size used was $4,096$ and $1,024$ words, again selected based on the accuracy over the validation set. Training was stopped upon convergence in perplexity over the validation set, which was evaluated at every epoch.",
"In addition, the word embeddings for both models were initialized with pre-trained fastText embeddings BIBREF26. For the 300d word embeddings, we have used the word embeddings available on the official fastText website. For the 512d embeddings and the subword units, we have trained our own pre-trained vectors using the fastText embedder with a large monolingual corpora from Wikipedia and the training data. Both models have used the same sentence embeddings which have been computed with the Universal Sentence Encoder (USE). However, the USE is only available for English, so we have only been able to use ReSE with the datasets where English is the target language (i.e., de-en, cs-en and eu-en). When using BPE, the subwords of every sentence have been merged back into words before passing them to the USE. The BLEU score for the BPE models has also been computed after post-processing the subwords back into words. Finally, hyper-parameters $\\lambda $ and $\\beta $ have been tuned only once for all datasets by using the en-fr validation set. This was done in order to save the significant computational time that would have been required by further hyper-parameter exploration. However, in the de-en case the initial results were far from the state of the art and we therefore repeated the selection with its own validation set. For all experiments, we have used an Intel Xeon E5-2680 v4 with an NVidia GPU card Quadro P5000. On this machine, the training time of the transformer has been approximately an order of magnitude larger than that of the LSTM."
],
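As a rough illustration of the embedding initialization described above, the following sketch loads pre-trained fastText word vectors with gensim and obtains 512d sentence embeddings from the Universal Sentence Encoder via tensorflow_hub. The file name, the vocabulary handling and the fallback initialization are placeholders rather than the authors' actual code.

```python
# Hypothetical sketch of the embedding setup: fastText vectors for the (sub)word embedding
# layers, and USE vectors as the sentence-level targets that ReSE regresses towards.
import numpy as np
import torch
from gensim.models import KeyedVectors
import tensorflow_hub as hub

# Load 300d fastText vectors in word2vec text format (file name is a placeholder).
ft = KeyedVectors.load_word2vec_format("cc.en.300.vec", binary=False)

def init_embedding_matrix(vocab, dim=300):
    """Build an embedding matrix for a vocabulary, falling back to random
    initialization for words missing from the pre-trained vectors."""
    matrix = np.random.normal(scale=0.1, size=(len(vocab), dim)).astype("float32")
    for idx, word in enumerate(vocab):
        if word in ft:
            matrix[idx] = ft[word]
    return torch.tensor(matrix)

# Universal Sentence Encoder (English only), producing 512d sentence embeddings.
use = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")
sentence_embeddings = use(["this is a target sentence", "and another one"]).numpy()
print(sentence_embeddings.shape)  # (2, 512)
```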
[
"We have carried out a number of experiments with both baselines. The scores reported are an average of the BLEU scores (in percentage points, or pp) BIBREF46 over the test sets of 5 independently trained models. Table TABREF44 shows the results over the en-fr dataset. In this case, the models with ReWE have outperformed the LSTM and transformer baselines consistently. The LSTM did not benefit from using BPE, but the transformer+ReWE with BPE reached $36.30$ BLEU pp (a $+0.99$ pp improvement over the best model without ReWE). For this dataset we did not use ReSE because French was the target language.",
"Table TABREF45 reports the results over the cs-en dataset. Also in this case, all the models with ReWE have improved over the corresponding baselines. The LSTM+ReWE has achieved the best results ($23.72$ BLEU pp; an improvement of $+1.16$ pp over the best model without ReWE). This language pair has also benefited more from the BPE pre-processing, likely because Czech is a morphologically-rich language. For this dataset, it was possible to use ReSE in combination with ReWE, with an improvement for the LSTM at word level ($+0.14$ BLEU pp), but not for the remaining cases. We had also initially tried to use ReSE without ReWE (i.e., $\\lambda =0$), but the results were not encouraging and we did not continue with this line of experiments.",
"For the eu-en dataset (Table TABREF46), the results show that, again, ReWE outperforms the baselines by a large margin. Moreover, ReWE+ReSE has been able to improve the results even further ($+3.15$ BLEU pp when using BPE and $+5.15$ BLEU pp at word level over the corresponding baselines). Basque is, too, a morphologically-rich language and using BPE has proved very beneficial ($+4.27$ BLEU pp over the best word-level model). As noted before, the eu-en dataset is very low-resource (less than $100,000$ sentence pairs) and it is more likely that the baseline models generalize poorly. Consequently, regularizers such as ReWE and ReSE are more helpful, with larger margins of improvement with respect to the baselines. On a separate note, the transformer has unexpectedly performed well below the LSTM on this dataset, and especially so with BPE. We speculate that it may be more sensitive than the LSTM to the dataset's much smaller size, or in need of more refined hyper-parameter tuning.",
"Finally, Table TABREF47 shows the results over the de-en dataset that we categorize as high-resource (5M+ sentence pairs). For this dataset, we have only been able to perform experiments with the LSTM due to the exceedingly long training times of the transformer. At word level, both ReWE and ReWE+ReSE have been able to outperform the baseline, although the margins of improvement have been smaller than for the other language pairs ($+0.42$ and $+0.48$ BLEU pp, respectively). However, when using BPE both ReWE and ReWE+ReSE have performed slightly below the baseline ($-0.37$ and $-0.12$ points BLEU pp, respectively). This shows that when the training data are abundant, ReWE or ReSE may not be beneficial. To probe this further, we have repeated these experiments by training the models over subsets of the training set of increasing size (200K, 500K, 1M, and 2M sentence pairs). Fig. FIGREF57 shows the BLEU scores achieved by the baseline and the regularized models for the different training data sizes. The plot clearly shows that the performance margin increases as the training data size decreases, as expected from a regularized model.",
"Table TABREF54 shows two examples of the translations made by the different LSTM models for eu-en and cs-en. A qualitative analysis of these examples shows that both ReWE and ReWE+ReSE have improved the quality of these translations. In the eu-en example, ReWE has correctly translated “File tab”; and ReSE has correctly added “click Create”. In the cs-en example, the model with ReWE has picked the correct subject “they”, and only the model with ReWE and ReSE has correctly translated “students” and captured the opening phrase “What was...about this...”."
],
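A small sketch of how the reported scores (corpus-level BLEU averaged over five independently trained models) could be computed with sacrebleu follows; the file names are placeholders and the authors' actual scoring script may differ.

```python
# Hypothetical scoring sketch: corpus BLEU per run, averaged over 5 runs, reported in
# percentage points as in the tables above.
import sacrebleu

def corpus_bleu_score(hyp_path, ref_path):
    with open(hyp_path, encoding="utf-8") as f:
        hyps = [line.strip() for line in f]
    with open(ref_path, encoding="utf-8") as f:
        refs = [line.strip() for line in f]
    return sacrebleu.corpus_bleu(hyps, [refs]).score

run_files = [f"test.hyp.run{i}.txt" for i in range(1, 6)]  # placeholder file names
scores = [corpus_bleu_score(path, "test.ref.txt") for path in run_files]
print(f"mean BLEU over {len(scores)} runs: {sum(scores) / len(scores):.2f}")
```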
[
"The quantitative experiments have proven that ReWE and ReSE can act as effective regularizers for low- and medium-resource NMT. Yet, it would be very interesting to understand how do they influence the training to achieve improved models. For that purpose, we have conducted an exploration of the values of the hidden vectors on the decoder end ($\\textbf {s}_j$, Eq. DISPLAY_FORM19). These values are the “feature space” used by the final classification block (a linear transformation and a softmax) to generate the class probabilities and can provide insights on the model. For this reason, we have considered the cs-en test set and stored all the $\\textbf {s}_j$ vectors with their respective word predictions. Then, we have used t-SNE BIBREF47 to reduce the dimensionality of the $\\textbf {s}_j$ vectors to a visualizable 2d. Finally, we have chosen a particular word (architecture) as the center of the visualization, and plotted all the vectors within a chosen neighborhood of this center word (Fig. FIGREF58). To avoid cluttering the figure, we have not superimposed the predicted words to the vectors, but only used a different color for each distinct word. The center word in the two subfigures (a: baseline; b: baseline+ReWE) is the same (architecture) and from the same source sentence, so the visualized regions are comparable. The visualizations also display all other predicted instances of word architecture in the neighborhood.",
"These visualizations show two interesting behaviors: 1) from eye judgment, the points predicted by the ReWE model seem more uniformly spread out; 2) instances of the same words have $\\textbf {s}_j$ vectors that are close to each other. For instance, several instances of word architecture are close to each other in Fig. FIGREF58 while a single instance appears in Fig. FIGREF58. The overall observation is that the ReWE regularizer leads to a vector space that is easier to discriminate, i.e. find class boundaries for, facilitating the final word prediction. In order to confirm this observation, we have computed various clustering indexes over the clusters formed by the vectors with identical predicted word. As indexes, we have used the silhouette and the Davies-Bouldin indexes that are two well-known unsupervised metrics for clustering. The silhouette index ranges from -1 to +1, where values closer to 1 mean that the clusters are compact and well separated. The Davies-Bouldin index is an unbounded nonnegative value, with values closer to 0 meaning better clustering. Table TABREF62 shows the values of these clustering indexes over the entire cs-en test set for the LSTM models. As the table shows, the models with ReWE and ReWE+ReSE have reported the best values. This confirms that applying ReWE and ReSE has a positive impact on the decoder's hidden space, ultimately justifying the increase in word classification accuracy.",
"For further exploration, we have created another visualization of the $\\textbf {s}$ vectors and their predictions over a smaller neighborhood (Fig. FIGREF63). The same word (architecture) has been used as the center word of the plot. Then, we have “vibrated” each of the $\\textbf {s}_j$ vector by small increments (between 0.05 and 8 units) in each of their dimensions, creating several new synthetic instances of $\\textbf {s}$ vectors which are very close to the original ones. These synthetic vectors have then been decoded with the trained NMT model to obtain their predicted words. Finally, we have used t-SNE to reduce the dimensionality to 2d, and visualized all the vectors and their predictions in a small neighborhood ($\\pm 10$ units) around the center word. Fig. FIGREF63 shows that, with the ReWE model, all the $\\textbf {s}$ vectors surrounding the center word predict the same word (architecture). Conversely, with the baseline, the surrounding points predict different words (power, force, world). This is additional evidence that the $\\textbf {s}$ space is evened out by the use of the proposed regularizer."
],
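The analysis just described (a t-SNE projection of the decoder states plus silhouette and Davies-Bouldin indexes over clusters defined by the predicted word) can be sketched with scikit-learn as below; the arrays are random placeholders standing in for the actual decoder states, and the use of scikit-learn itself is an assumption.

```python
# Hypothetical analysis sketch: project decoder hidden states s_j to 2d with t-SNE and
# compute clustering indexes, grouping states by their predicted word.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.metrics import silhouette_score, davies_bouldin_score

# Placeholder data: in the real analysis these come from the trained decoder on the
# cs-en test set (one 1024d state per decoding step, plus the predicted word id).
rng = np.random.default_rng(0)
hidden_states = rng.normal(size=(500, 1024))
predicted_words = rng.integers(0, 50, size=500)

# 2d projection used for the visualizations.
points_2d = TSNE(n_components=2, perplexity=30, init="random").fit_transform(hidden_states)
print(points_2d.shape)

# Clustering quality of the hidden space (higher silhouette and lower Davies-Bouldin
# indicate more compact, better-separated clusters).
print("silhouette:     ", silhouette_score(hidden_states, predicted_words))
print("davies-bouldin: ", davies_bouldin_score(hidden_states, predicted_words))
```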
[
"Finally, we have also experimented with the use of ReWE and ReWE+ReSE for an unsupervised NMT task. For this experiment, we have used the open-source model provided by Lample et al. BIBREF36 which is currently the state of the art for unsupervised NMT, and also adopted its default hyper-parameters and pre-processing steps which include 4-layer transformers for the encoder and both decoders, and BPE subword learning. The experiments have been performed using the WMT14 English-French test set for testing in both language directions (en-fr and fr-en), and the monolingual data from that year's shared task for training.",
"As described in Section SECREF13, an unsupervised NMT model contains two decoders to be able to translate into both languages. The model is trained by iterating over two alternate steps: 1) training using the decoders as monolingual, de-noising language models (e.g., en-en, fr-fr), and 2) training using back-translations (e.g., en-fr-en, fr-en-fr). Each step requires an objective function, which is usually an NLL loss. Moreover, each step is performed in both directions (en$\\rightarrow $fr and fr$\\rightarrow $en), which means that an unsupervised NMT model uses a total of four different objective functions. Potentially, the regularizers could be applied to each of them. However, the pre-trained USE sentence embeddings are only available in English, not in French, and for this reason we have limited our experiments to ReWE alone. In addition, the initial results have showed that ReWE is actually detrimental in the de-noising language model step, so we have limited its use to both language directions in the back-translation step, with the hyper-parameter, $\\lambda $, tuned over the validation set ($\\lambda =0.2$).",
"To probe the effectiveness of the regularized model, Fig. FIGREF67 shows the results over the test set from the different models trained with increasing amounts of monolingual data (50K, 500K, 1M, 2M, 5M and 10M sentences in each language). The model trained using ReWE has been able to consistently outperform the baseline in both language directions. The trend we had observed in the supervised case has applied to these experiments, too: the performance margin has been larger for smaller training data sizes. For example, in the en-fr direction the margin has been $+1.74$ BLEU points with 50K training sentences, but it has reduced to $+0.44$ BLEU points when training with 10M sentences. Again, this behavior is in line with the regularizing nature of the proposed regressive objectives."
],
[
"In this paper, we have proposed regressing continuous representations of words and sentences (ReWE and ReSE, respectively) as novel regularization techniques for improving the generalization of NMT models. Extensive experiments over four different language pairs of different training data size (from 89K to 5M sentence pairs) have shown that both ReWE and ReWE+ReSE have improved the performance of NMT models, particularly in low- and medium-resource cases, for increases in BLEU score up to $5.15$ percentage points. In addition, we have presented a detailed analysis showing how the proposed regularization modifies the decoder's output space, enhancing the clustering of the vectors associated with unique words. Finally, we have showed that the regularized models have also outperformed the baselines in experiments on unsupervised NMT. As future work, we plan to explore how the categorical and continuous predictions from our model could be jointly utilized to further improve the quality of the translations."
],
[
"The authors would like to thank the RoZetta Institute (formerly CMCRC) for providing financial support to this research.",
"[]Inigo Jauregi Unanue received the BEng degree in telecommunication systems from University of Navarra, Donostia-San Sebastian, Spain, in 2016. From 2014 to 2016, he was a research assistant at Centro de Estudio e Investigaciones Tecnicas (CEIT). Since 2016, he is a natural language processing and machine learning researcher at the RoZetta Institute (former CMCRC) in Sydney, Australia. Additionally, he is in the last year of his PhD at University of Technology Sydney, Australia. His research interests are machine learning, natural language processing and information theory.",
"",
"[]Ehsan Zare Borzeshi received the PhD degree from University of Technology Sydney, Australia, in 2013. He is currently a Senior Data & Applied Scientist with Microsoft CSE (Commercial Software Engineering). He has previously held appointments as a senior researcher at the University of Newcastle, University of Technology Sydney, and the RoZetta Institute (formerly CMCRC) in Sydney. He has also been a Visiting Scholar with the University of Central Florida, Orlando, FL, USA. His current research interests include big data, deep learning and natural language processing where he has many publications.",
"",
"[]Massimo Piccardi (SM'05) received the MEng and PhD degrees from the University of Bologna, Bologna, Italy, in 1991 and 1995, respectively. He is currently a Full Professor of computer systems with University of Technology Sydney, Australia. His research interests include natural language processing, computer vision and pattern recognition and he has co-authored over 150 papers in these areas. Prof. Piccardi is a Senior Member of the IEEE, a member of its Computer and Systems, Man, and Cybernetics Societies, and a member of the International Association for Pattern Recognition. He presently serves as an Associate Editor for the IEEE Transactions on Big Data."
]
],
"section_name": [
"Introduction",
"Related Work",
"Related Work ::: Regularization Techniques",
"Related Work ::: Word and Sentence Embeddings",
"Related Work ::: Unsupervised NMT",
"The Baseline NMT model",
"Regressing word and sentence embeddings",
"Regressing word and sentence embeddings ::: ReWE",
"Regressing word and sentence embeddings ::: ReSE",
"Experiments",
"Experiments ::: Datasets",
"Experiments ::: Model Training and Hyper-Parameter Selection",
"Experiments ::: Results",
"Experiments ::: Understanding ReWE and ReSE",
"Experiments ::: Unsupervised NMT",
"Conclusion",
"Acknowledgment"
]
} | {
"answers": [
{
"annotation_id": [
"4fd61e9949de2140a958d03822e29fe311887ec8",
"b880ae3014d9f608854bc7fbfa57026fe4b23ace"
],
"answer": [
{
"evidence": [
"Extensive experimentation over four language pairs of different dataset sizes (from small to large) with both word and sentence regularization. We show that using both ReWE and ReSE can outperform strong state-of-the-art baselines based on long short-term memory networks (LSTMs) and transformers.",
"In this section, we describe the NMT model that has been used as the basis for the proposed regularizer. It is a neural encoder-decoder architecture with attention BIBREF1 that can be regarded as a strong baseline as it incorporates both LSTMs and transformers as modules. Let us assume that $\\textbf {x}:\\lbrace x_1 \\dots x_n\\rbrace $ is the source sentence with $n$ tokens and $\\textbf {y}:\\lbrace y_1 \\dots y_m\\rbrace $ is the target translated sentence with $m$ tokens. First, the words in the source sentence are encoded into their word embeddings by an embedding layer:"
],
"extractive_spans": [],
"free_form_answer": "a encoder-decoder architecture with attention incorporating LSTMs and transformers",
"highlighted_evidence": [
"We show that using both ReWE and ReSE can outperform strong state-of-the-art baselines based on long short-term memory networks (LSTMs) and transformers.",
"In this section, we describe the NMT model that has been used as the basis for the proposed regularizer. It is a neural encoder-decoder architecture with attention BIBREF1 that can be regarded as a strong baseline as it incorporates both LSTMs and transformers as modules."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"FLOAT SELECTED: TABLE I: BLEU scores over the En-Fr test set. The reported results are the average of 5 independent runs.",
"FLOAT SELECTED: TABLE II: BLEU scores over the Cs-En test set. The reported results are the average of 5 independent runs.",
"FLOAT SELECTED: TABLE III: BLEU scores over the Eu-En test set. The reported results are the average of 5 independent runs.",
"FLOAT SELECTED: TABLE IV: BLEU scores over the De-En test set. The reported results are the average of 5 independent runs.",
"In this section, we describe the NMT model that has been used as the basis for the proposed regularizer. It is a neural encoder-decoder architecture with attention BIBREF1 that can be regarded as a strong baseline as it incorporates both LSTMs and transformers as modules. Let us assume that $\\textbf {x}:\\lbrace x_1 \\dots x_n\\rbrace $ is the source sentence with $n$ tokens and $\\textbf {y}:\\lbrace y_1 \\dots y_m\\rbrace $ is the target translated sentence with $m$ tokens. First, the words in the source sentence are encoded into their word embeddings by an embedding layer:"
],
"extractive_spans": [],
"free_form_answer": "A neural encoder-decoder architecture with attention using LSTMs or Transformers",
"highlighted_evidence": [
"FLOAT SELECTED: TABLE I: BLEU scores over the En-Fr test set. The reported results are the average of 5 independent runs.",
"FLOAT SELECTED: TABLE II: BLEU scores over the Cs-En test set. The reported results are the average of 5 independent runs.",
"FLOAT SELECTED: TABLE III: BLEU scores over the Eu-En test set. The reported results are the average of 5 independent runs.",
"FLOAT SELECTED: TABLE IV: BLEU scores over the De-En test set. The reported results are the average of 5 independent runs.",
"In this section, we describe the NMT model that has been used as the basis for the proposed regularizer. It is a neural encoder-decoder architecture with attention BIBREF1 that can be regarded as a strong baseline as it incorporates both LSTMs and transformers as modules"
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"289275f8e2fec74f7759a034b021db5661418d2c",
"65d132c5d922c2b17eae797131a5d7a4ba0932e7"
],
"answer": [
{
"evidence": [
"Four different language pairs have been selected for the experiments. The datasets' size varies from tens of thousands to millions of sentences to test the regularizers' ability to improve translation over a range of low-resource and high-resource language pairs.",
"De-En: The German-English dataset (de-en) has been taken from the WMT18 news translation shared task. The training set contains over 5M sentence pairs collected from the Europarl, CommonCrawl and Newscommentary parallel corpora. As validation and test sets, we have used the newstest2017 and the newstest2018 datasets, respectively. We consider this dataset as a high-resource case.",
"En-Fr: The English-French dataset (en-fr) has been sourced from the IWSLT 2016 translation shared task. This corpus contains translations of TED talks of very diverse topics. The training data provided by the organizers consist of $219,777$ translations which allow us to categorize this dataset as low/medium-resource. Following Denkowski and Neubig BIBREF41, the validation set has been formed by merging the 2013 and 2014 test sets from the same shared task, and the test set has been formed with the 2015 and 2016 test sets.",
"Cs-En: The Czech-English dataset (cs-en) is also from the IWSLT 2016 TED talks translation task. However, this dataset is approximately half the size of en-fr as its training set consists of $114,243$ sentence pairs. Again following Denkowski and Neubig BIBREF41), the validation set has been formed by merging the 2012 and 2013 test sets, and the test set by merging the 2015 and 2016 test sets. We regard this dataset as low-resource.",
"Eu-En: The Basque-English dataset (eu-en) has been collected from the WMT16 IT-domain translation shared task. This is the smallest dataset, with only $89,413$ sentence pairs in the training set. However, only $2,000$ sentences in the training set have been translated by human annotators. The remaining sentence pairs are translations of IT-domain short phrases and Wikipedia titles. Therefore, we consider this dataset as extremely low-resource. It must be said that translations in the IT domain are somehow easier than in the news domain, as this domain is very specific and the wording of the sentences are less varied. For this dataset, we have used the validation and test sets ($1,000$ sentences each) provided in the shared task."
],
"extractive_spans": [
"219,777",
"114,243",
"89,413",
"over 5M "
],
"free_form_answer": "",
"highlighted_evidence": [
"Four different language pairs have been selected for the experiments. The datasets' size varies from tens of thousands to millions of sentences to test the regularizers' ability to improve translation over a range of low-resource and high-resource language pairs.",
"De-En: The German-English dataset (de-en) has been taken from the WMT18 news translation shared task. The training set contains over 5M sentence pairs collected from the Europarl, CommonCrawl and Newscommentary parallel corpora. ",
"En-Fr: The English-French dataset (en-fr) has been sourced from the IWSLT 2016 translation shared task. This corpus contains translations of TED talks of very diverse topics. The training data provided by the organizers consist of $219,777$ translations which allow us to categorize this dataset as low/medium-resource.",
"Cs-En: The Czech-English dataset (cs-en) is also from the IWSLT 2016 TED talks translation task. However, this dataset is approximately half the size of en-fr as its training set consists of $114,243$ sentence pairs",
"Eu-En: The Basque-English dataset (eu-en) has been collected from the WMT16 IT-domain translation shared task. This is the smallest dataset, with only $89,413$ sentence pairs in the training set."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Four different language pairs have been selected for the experiments. The datasets' size varies from tens of thousands to millions of sentences to test the regularizers' ability to improve translation over a range of low-resource and high-resource language pairs.",
"De-En: The German-English dataset (de-en) has been taken from the WMT18 news translation shared task. The training set contains over 5M sentence pairs collected from the Europarl, CommonCrawl and Newscommentary parallel corpora. As validation and test sets, we have used the newstest2017 and the newstest2018 datasets, respectively. We consider this dataset as a high-resource case.",
"En-Fr: The English-French dataset (en-fr) has been sourced from the IWSLT 2016 translation shared task. This corpus contains translations of TED talks of very diverse topics. The training data provided by the organizers consist of $219,777$ translations which allow us to categorize this dataset as low/medium-resource. Following Denkowski and Neubig BIBREF41, the validation set has been formed by merging the 2013 and 2014 test sets from the same shared task, and the test set has been formed with the 2015 and 2016 test sets.",
"Cs-En: The Czech-English dataset (cs-en) is also from the IWSLT 2016 TED talks translation task. However, this dataset is approximately half the size of en-fr as its training set consists of $114,243$ sentence pairs. Again following Denkowski and Neubig BIBREF41), the validation set has been formed by merging the 2012 and 2013 test sets, and the test set by merging the 2015 and 2016 test sets. We regard this dataset as low-resource.",
"Eu-En: The Basque-English dataset (eu-en) has been collected from the WMT16 IT-domain translation shared task. This is the smallest dataset, with only $89,413$ sentence pairs in the training set. However, only $2,000$ sentences in the training set have been translated by human annotators. The remaining sentence pairs are translations of IT-domain short phrases and Wikipedia titles. Therefore, we consider this dataset as extremely low-resource. It must be said that translations in the IT domain are somehow easier than in the news domain, as this domain is very specific and the wording of the sentences are less varied. For this dataset, we have used the validation and test sets ($1,000$ sentences each) provided in the shared task."
],
"extractive_spans": [],
"free_form_answer": "89k, 114k, 291k, 5M",
"highlighted_evidence": [
"Four different language pairs have been selected for the experiments. The datasets' size varies from tens of thousands to millions of sentences to test the regularizers' ability to improve translation over a range of low-resource and high-resource language pairs.",
"The German-English dataset (de-en) has been taken from the WMT18 news translation shared task. The training set contains over 5M sentence pairs collected from the Europarl, CommonCrawl and Newscommentary parallel corpora.",
"The English-French dataset (en-fr) has been sourced from the IWSLT 2016 translation shared task. This corpus contains translations of TED talks of very diverse topics. The training data provided by the organizers consist of $219,777$ translations which allow us to categorize this dataset as low/medium-resource.",
"The Czech-English dataset (cs-en) is also from the IWSLT 2016 TED talks translation task. However, this dataset is approximately half the size of en-fr as its training set consists of $114,243$ sentence pairs.",
"The Basque-English dataset (eu-en) has been collected from the WMT16 IT-domain translation shared task. This is the smallest dataset, with only $89,413$ sentence pairs in the training set."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"annotation_id": [
"435cb53361eafb94d0c31678bd9cf377beaa9a91",
"e42b74a8e1819e992793c8dc8c2c8dbaaca11448"
],
"answer": [
{
"evidence": [
"Experiments ::: Datasets",
"Four different language pairs have been selected for the experiments. The datasets' size varies from tens of thousands to millions of sentences to test the regularizers' ability to improve translation over a range of low-resource and high-resource language pairs.",
"De-En: The German-English dataset (de-en) has been taken from the WMT18 news translation shared task. The training set contains over 5M sentence pairs collected from the Europarl, CommonCrawl and Newscommentary parallel corpora. As validation and test sets, we have used the newstest2017 and the newstest2018 datasets, respectively. We consider this dataset as a high-resource case.",
"En-Fr: The English-French dataset (en-fr) has been sourced from the IWSLT 2016 translation shared task. This corpus contains translations of TED talks of very diverse topics. The training data provided by the organizers consist of $219,777$ translations which allow us to categorize this dataset as low/medium-resource. Following Denkowski and Neubig BIBREF41, the validation set has been formed by merging the 2013 and 2014 test sets from the same shared task, and the test set has been formed with the 2015 and 2016 test sets.",
"Cs-En: The Czech-English dataset (cs-en) is also from the IWSLT 2016 TED talks translation task. However, this dataset is approximately half the size of en-fr as its training set consists of $114,243$ sentence pairs. Again following Denkowski and Neubig BIBREF41), the validation set has been formed by merging the 2012 and 2013 test sets, and the test set by merging the 2015 and 2016 test sets. We regard this dataset as low-resource.",
"Eu-En: The Basque-English dataset (eu-en) has been collected from the WMT16 IT-domain translation shared task. This is the smallest dataset, with only $89,413$ sentence pairs in the training set. However, only $2,000$ sentences in the training set have been translated by human annotators. The remaining sentence pairs are translations of IT-domain short phrases and Wikipedia titles. Therefore, we consider this dataset as extremely low-resource. It must be said that translations in the IT domain are somehow easier than in the news domain, as this domain is very specific and the wording of the sentences are less varied. For this dataset, we have used the validation and test sets ($1,000$ sentences each) provided in the shared task."
],
"extractive_spans": [
"German",
"English",
"French",
"Czech",
"Basque"
],
"free_form_answer": "",
"highlighted_evidence": [
"Experiments ::: Datasets\nFour different language pairs have been selected for the experiments",
"De-En: The German-English dataset (de-en) has been taken from the WMT18 news translation shared task",
"En-Fr: The English-French dataset (en-fr) has been sourced from the IWSLT 2016 translation shared task",
"Cs-En: The Czech-English dataset (cs-en) is also from the IWSLT 2016 TED talks translation task",
"Eu-En: The Basque-English dataset (eu-en) has been collected from the WMT16 IT-domain translation shared task"
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"De-En: The German-English dataset (de-en) has been taken from the WMT18 news translation shared task. The training set contains over 5M sentence pairs collected from the Europarl, CommonCrawl and Newscommentary parallel corpora. As validation and test sets, we have used the newstest2017 and the newstest2018 datasets, respectively. We consider this dataset as a high-resource case.",
"En-Fr: The English-French dataset (en-fr) has been sourced from the IWSLT 2016 translation shared task. This corpus contains translations of TED talks of very diverse topics. The training data provided by the organizers consist of $219,777$ translations which allow us to categorize this dataset as low/medium-resource. Following Denkowski and Neubig BIBREF41, the validation set has been formed by merging the 2013 and 2014 test sets from the same shared task, and the test set has been formed with the 2015 and 2016 test sets.",
"Cs-En: The Czech-English dataset (cs-en) is also from the IWSLT 2016 TED talks translation task. However, this dataset is approximately half the size of en-fr as its training set consists of $114,243$ sentence pairs. Again following Denkowski and Neubig BIBREF41), the validation set has been formed by merging the 2012 and 2013 test sets, and the test set by merging the 2015 and 2016 test sets. We regard this dataset as low-resource.",
"Eu-En: The Basque-English dataset (eu-en) has been collected from the WMT16 IT-domain translation shared task. This is the smallest dataset, with only $89,413$ sentence pairs in the training set. However, only $2,000$ sentences in the training set have been translated by human annotators. The remaining sentence pairs are translations of IT-domain short phrases and Wikipedia titles. Therefore, we consider this dataset as extremely low-resource. It must be said that translations in the IT domain are somehow easier than in the news domain, as this domain is very specific and the wording of the sentences are less varied. For this dataset, we have used the validation and test sets ($1,000$ sentences each) provided in the shared task."
],
"extractive_spans": [],
"free_form_answer": "German-English, English-French, Czech-English, Basque-English pairs",
"highlighted_evidence": [
"De-En: The German-English dataset (de-en) has been taken from the WMT18 news translation shared task.",
"En-Fr: The English-French dataset (en-fr) has been sourced from the IWSLT 2016 translation shared task.",
"Cs-En: The Czech-English dataset (cs-en) is also from the IWSLT 2016 TED talks translation task.",
"Eu-En: The Basque-English dataset (eu-en) has been collected from the WMT16 IT-domain translation shared task."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
],
"nlp_background": [
"two",
"two",
"two"
],
"paper_read": [
"",
"",
""
],
"question": [
"What baselines do they compare to?",
"What training set sizes do they use?",
"What languages do they experiment with?"
],
"question_id": [
"54b25223ab32bf8d9205eaa8a570e99c683f0077",
"e5be900e70ea86c019efb06438ba200e11773a7c",
"b36a8a73b3457a94203eed43f063cb684a8366b7"
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"",
"",
""
]
} | {
"caption": [
"Fig. 1: Baseline NMT model. (Left) The encoder receives the input sentence and generates a context vector cj for each decoding step using an attention mechanism. (Right) The decoder generates one-by-one the output vectors pj , which represent the probability distribution over the target vocabulary. During training yj is a token from the ground truth sentence, but during inference the model uses its own predictions.",
"Fig. 2: Full model: Baseline + ReWE + ReSE. (Left) The encoder with the attention mechanism generates vectos cj in the same way as the baseline system. (Right) The decoder generates one-by-one the output vectors pj , which represent the probability distribution over the target vocabulary, and ej , which is a continuous word vector. Additionally, the model can also generate another continuous vector, r, which represents the sentence embedding.",
"TABLE I: BLEU scores over the En-Fr test set. The reported results are the average of 5 independent runs.",
"TABLE III: BLEU scores over the Eu-En test set. The reported results are the average of 5 independent runs.",
"TABLE II: BLEU scores over the Cs-En test set. The reported results are the average of 5 independent runs.",
"TABLE IV: BLEU scores over the De-En test set. The reported results are the average of 5 independent runs.",
"TABLE V: Translation examples. Example 1: Eu-En and Example 2: Cs-En.",
"Fig. 3: BLEU scores over the De-En test set for models trained with training sets of different size.",
"Fig. 4: Visualization of the sj vectors from the decoder for a subset of the cs-en test set. Please refer to Section V-D for explanations. This figure should be viewed in color.",
"TABLE VI: Clustering indexes of the LSTM models over the cs-en test set. The reported results are the average of 5 independent runs.",
"Fig. 5: Visualization of the sj vectors in a smaller neighborhood of the center word.",
"Fig. 6: BLEU scores over the test set. The reported results are the average of 5 independent runs.. The red line represents the baseline model and the blue line is the baseline + ReWE."
],
"file": [
"3-Figure1-1.png",
"4-Figure2-1.png",
"6-TableI-1.png",
"6-TableIII-1.png",
"6-TableII-1.png",
"6-TableIV-1.png",
"7-TableV-1.png",
"7-Figure3-1.png",
"8-Figure4-1.png",
"9-TableVI-1.png",
"10-Figure5-1.png",
"11-Figure6-1.png"
]
} | [
"What baselines do they compare to?",
"What training set sizes do they use?",
"What languages do they experiment with?"
] | [
[
"1909.13466-6-TableIII-1.png",
"1909.13466-6-TableII-1.png",
"1909.13466-6-TableI-1.png",
"1909.13466-6-TableIV-1.png",
"1909.13466-The Baseline NMT model-0",
"1909.13466-Introduction-7"
],
[
"1909.13466-Experiments ::: Datasets-1",
"1909.13466-Experiments ::: Datasets-0",
"1909.13466-Experiments ::: Datasets-2",
"1909.13466-Experiments ::: Datasets-4",
"1909.13466-Experiments ::: Datasets-3"
],
[
"1909.13466-Experiments ::: Datasets-1",
"1909.13466-Experiments ::: Datasets-0",
"1909.13466-Experiments ::: Datasets-2",
"1909.13466-Experiments ::: Datasets-4",
"1909.13466-Experiments ::: Datasets-3"
]
] | [
"A neural encoder-decoder architecture with attention using LSTMs or Transformers",
"89k, 114k, 291k, 5M",
"German-English, English-French, Czech-English, Basque-English pairs"
] | 413 |
1910.09916 | Automatic Extraction of Personality from Text: Challenges and Opportunities | In this study we examined the possibility to extract personality traits from a text. We created an extensive dataset by having experts annotate personality traits in a large number of texts from multiple online sources. From these annotated texts we selected a sample and made further annotations ending up with a large low-reliability dataset and a small high-reliability dataset. We then used the two datasets to train and test several machine learning models to extract personality from text, including a language model. Finally, we evaluated our best models in the wild, on datasets from different domains. Our results show that the models based on the small high-reliability dataset performed better (in terms of R2) than models based on large low-reliability dataset. Also, the language model based on the small high-reliability dataset performed better than the random baseline. Finally, and more importantly, the results showed our best model did not perform better than the random baseline when tested in the wild. Taken together, our results show that determining personality traits from a text remains a challenge and that no firm conclusions can be made on model performance before testing in the wild. | {
"paragraphs": [
[
"Since the introduction of the personality concept, psychologists have worked to formulate theories and create models describing human personality and reliable measure to accordingly. The filed has been successful to bring forth a number of robust models with corresponding measures. One of the most widely accepted and used is the Five Factor Model BIBREF0. The model describes human personality by five traits/factors, popularly referred to as the Big Five or OCEAN: Openness to experience, Conscientiousness, Extraversion, Agreeableness, and emotional stability (henceforth Stability). There is now an extensive body of research showing that these factors matter in a large number of domains of people’s life. Specifically, the Big Five factors have been found to predict life outcomes such as health, longevity, work performance, interpersonal relations, migration and social attitudes, just to mention some domains (e.g. BIBREF1, BIBREF2, BIBREF3, BIBREF4).To date, the most common assessment of personality is by self-report questionnaires BIBREF5.",
"In the past decade however, personality psychologist, together with computer scientist, have worked hard to solve the puzzle of extracting a personality profile (e.g., the Big Five factors) of an individual based on a combination of social media activities BIBREF6. However, in the aftermath of Cambridge Analytica scandal, where the privacy of millions of Facebook users was violated, this line of research has been met with skepticism and suspicion. More recent research focuses on text from a variety of sources, including twitter data (e.g. BIBREF7, BIBREF8). Recent development in text analysis, machine learning, and natural language models, have move the field into an era of optimism, like never before. Importantly, the basic idea in this research is that personality is reflected in the way people write and that written communication includes information about the author’s personality characteristics BIBREF9. Nevertheless, while a number of attempts has been made to extract personality from text (see below), the research is standing remarkably far from reality. There are, to our knowledge, very few attempts to test machine learning models “in the wild”. The present paper aims to deal with this concern. Specifically, we aim to (A) create a model which is able to extract Big Five personality from a text using machine learning techniques, (B) investigate whether a model trained on a large amount of solo-annotated data performs better than a model trained on a smaller amount of high quality data, and, (C) measure the performance of our models on data from another two domains that differ from the training data."
],
[
"In BIBREF10 the authors trained a combination of logistic and linear regression models on data from 58,466 volunteers, including their demographic profiles, Facebook data and psychometric test results, such as their Big Five traits. This data, the myPersonality dataset BIBREF11, was available for academic research until 2018, although this access has since been closed down. A demonstration version of the trained system is available to the public in form of the ApplyMagicSauce web application of Cambridge University.",
"In 2018 media exposed the unrelated (and now defunct) company Cambridge Analytica to considerable public attention for having violated the privacy and data of millions of Facebook users and for having meddled in elections, with some of these operations misusing the aforementioned research results. This scandal demonstrates the commercial and political interest in this type of research, and it also emphasizes that the field has significant ethical aspects.",
"Several attempts have been made to automatically determining the Big Five personality traits using only text written by the test person. A common simplification in such approaches is to model each trait as binary (high or low) rather than on a more realistic granular spectrum.",
"The authors of BIBREF12 trained a Bayesian Multinomial Regression model on stylistic and content features of a collection of student-written stream-of-consciousness essays with associated Big Five questionnaire results of each respective student. The researchers focused on the classifier for stability. The original representation of the numerical factor was simplified to a dichotomy between positive and negative, denoting essay authors with values in the upper or lower third respectively, and discarding texts from authors in the more ambiguous middle third. The resulting classifier then achieved an accuracy of 65.7 percent. Similar performance for the other factors was claimed as well, but not published.",
"A number of regression models were trained and tested for Big Five analysis on texts in BIBREF13. To obtain training data the authors carried out a personality survey on a microblog site, which yielded the texts and the personality data from 444 users. This work is a rare example of the Big Five being represented an actual spectrum instead of a dichotomy, using an interval $[-1, 1]$. The performance of the systems was therefore measured as the deviation from the expected trait values. The best variant achieved an average Mean Absolute Percentage Error (i.e. MAPE over all five traits) of 14 percent.",
"In BIBREF14 the authors used neural networks to analyze the Big Five personality traits of Twitter users based on their tweets. The system had no fine-grained scoring, instead classifying each trait only as either yes (high) or no (low). The authors did not provide any details about their training data, and the rudimentary evaluation allows no conclusions regarding the actual performance of the system.",
"Deep convolutional neural networks were used in BIBREF8 as classifiers on the Pennebaker & King dataset of 2,469 Big Five annotated stream-of-consciousness essays BIBREF9. The authors filtered the essays, discarding all sentences that did not contain any words from a list of emotionally charged words. One classifier was then trained for each trait, with each trait classified only as either yes (high) or no (low). The trait classifiers achieved their respective best accuracies using different configurations. Averaging these best results yielded an overall best accuracy of 58.83 percent.",
"The authors of BIBREF15 trained and evaluated an assortment of Deep Learning networks on two datasets: a subset of the Big Five-annotated myPersonality dataset with 10,000 posts from 250 Facebook users, and another 150 Facebook users whose posts the authors collected manually and had annotated using the ApplyMagicSauce tool mentioned above. The traits were represented in their simplified binary form. Their best system achieved an average accuracy of 74.17 percent.",
"In BIBREF7 the accuracy of works on Big Five personality inference as a function of the size of the input text was studied. The authors showed that using Word Embedding with Gaussian Processes provided the best results when building a classifier for predicting the personality from tweets. The data consisted of self-reported personality ratings as well as tweets from a set of 1,323 participants.",
"In BIBREF16 a set of 694 blogs with corresponding self-reported personality ratings was collected. The Linguistic Inquiry and Word Count (LIWC) 2001 program was used to analyze the blogs. A total of 66 LIWC categories was used for each personality trait. The results revealed robust correlations between the Big Five traits and the frequency with which bloggers used different word categories."
],
[
"We employed machine learning for our text-based analysis of the Big Five personality traits. Applying machine learning presupposes large sets of annotated training data, and our case is no exception. Since we are working with Swedish language, we could not fall back on any existing large datasets like the ones available for more widespread languages such as English. Instead our work presented here encompassed the full process from the initial gathering of data over data annotation and feature extraction to training and testing of the detection models. To get an overview of the process, the workflow is shown in Figure FIGREF4.",
"Data annotation is time intensive work. Nevertheless, we decided to assemble two datasets, one prioritizing quantity over quality and one vice versa. The two sets are:",
"$D_\\textrm {\\textit {LR}}$: a large dataset with lower reliability (most text samples annotated by a single annotator),",
"$D_\\textrm {\\textit {HR}}$: a smaller dataset with higher reliability (each text sample annotated by multiple annotators).",
"By evaluating both directions we hoped to gain insights into the best allocation of annotation resources for future work. Regarding the choice of machine learning methods we also decided to test two approaches:",
"support vector regression (SVR): a well-understood method for the prediction of continuous values,",
"pre-trained language model (LM) with transfer learning: an LM is first trained on large amounts of non-annotated text, learning the relations between words of a given language; it is then fine-tuned for classification with annotated samples, utilizing its language representation to learn better classification with less training data. LM methods currently dominate the state-of-the-art in NLP tasks BIBREF17.",
"Each method was used to train a model on each dataset, resulting in a total of four models: $\\textrm {\\textit {SVR}}(D_\\textrm {\\textit {LR}})$ and $\\textrm {\\textit {LM}}(D_\\textrm {\\textit {LR}})$ denoting the SVR and the language model trained on the larger dataset, and $\\textrm {\\textit {SVR}}(D_\\textrm {\\textit {HR}})$ and $\\textrm {\\textit {LM}}(D_\\textrm {\\textit {HR}})$ based on the smaller set with more reliable annotations.",
"Technically, each of these four models consists of five subvariants, one for each Big Five personality trait, though for the sake of simplicity we will keep referring to the four main models only. Furthermore, to enhance legibility we will omit the dataset denotation in the model name when it is clear from the context which version is meant (e.g. in result tables)."
],
[
"As we intended our models to predict the Big Five personality traits on a scale from -3 to 3, rather than binary classification, we required training data that contained samples representing the whole data range for each trait. Given that no such dataset was available for the Swedish language, we set up our own large-scale collection and annotation operation.",
"The data was retrieved from four different Swedish discussion forums and news sites. These sources were selected such as to increase the chances of finding texts from authors with a variety of different personalities. Specifically, the four sources are:",
"Avpixlat: a migration critical news site with an extensive comment section for each editorial article. The debate climate in the comment section commonly expresses disappointment towards the society, immigrants, minority groups and the government.",
"Familjeliv: a discussion forum with the main focus on family life, relationships, pregnancy, children etc.",
"Flashback: an online forum with the tagline “freedom of speech - for real”, and in 2018 the fourth most visited social media in SwedenBIBREF18. The discussions on Flashback cover virtually any topic, from computer and relationship problems to sex, drugs and ongoing crimes.",
"Nordfront: the Swedish news site of the Nordic Resistance Movement (NMR - Nordiska motståndsrörelsen). NMR is a nordic national socialist party. The site features editorial articles, each with a section for reader comments.",
"Web spiders were used to download the texts from these sources. In total this process yielded over 70 million texts, but due to time constraints only a small fraction could be annotated and thus form our training datasets $D_\\textrm {\\textit {LR}}$ and $D_\\textrm {\\textit {HR}}$. Table TABREF19 details the size of the datasets, and how many annotated texts from each source contributed to each dataset. $D_\\textrm {\\textit {HR}}$ also contains 59 additional student texts created by the annotators themselves, an option offered to them during the annotation process (described in the following section)."
],
[
"The texts were annotated by 18 psychology students, each of whom had studied at least 15 credits of personality psychology. The annotation was carried out using a web-based tool. A student working with this tool would be shown a text randomly picked from one of the sources, as well as instructions to annotate one of the Big Five traits by selecting a number from the discrete integer interval -3 to 3. Initially the students were allowed to choose which of the five traits to annotate, but at times they would be instructed to annotate a specific trait, to ensure a more even distribution of annotations. The tool kept the samples at a sufficiently meaningful yet comfortable size by picking only texts with at least two sentences, and truncating them if they exceeded five sentences or 160 words.",
"The large dataset $D_\\textrm {\\textit {LR}}$ was produced in this manner, with 39,370 annotated texts. Due to the random text selection for each annotator, the average sample received 1.02 annotations - i.e. almost every sample was annotated by only one student and for only one Big Five trait. The distribution of annotations for the different factors is shown in Figure FIGREF23. We considered the notable prevalence of -1 and 1 to be symptomatic of a potential problem: random short texts like in our experiment, often without context, are likely not to contain any definitive personality related hints at all, and thus we would have expected results closer to a normal distribution. The students preferring -1 and 1 over the neutral zero might have been influenced by their desire to glean some psychological interpretation even from unsuitable texts.",
"For $D_\\textrm {\\textit {HR}}$, the smaller set with higher annotation reliability, we therefore modified the process. Texts were now randomly selected from the subset of $D_\\textrm {\\textit {LR}}$ containing texts which had been annotated with -3 or 3. We reasoned that these annotations at the ends of the spectrum were indicative of texts where the authors had expressed their personalities more clearly. Thus this subset would be easier to annotate, and each text was potentially more suitable for the annotation of multiple factors.",
"Eventually this process resulted in 2,774 texts with on average 4.5 annotations each. The distribution for the different factors is shown in Table FIGREF24, where multiple annotations of the same factor for one text were compounded into a single average value.",
"The intra-annotator reliability of both datasets $D_\\textrm {\\textit {LR}}$ and $D_\\textrm {\\textit {HR}}$ is shown in Table TABREF21. The reliability was calculated using the Krippendorff's alpha coefficient. Krippendorff's alpha can handle missing values, which in this case was necessary since many of the texts were annotated by only a few annotators.",
"Table TABREF22 shows how many texts were annotated for each factor, and Figure FIGREF25 shows how the different sources span over the factor values.",
"Avpixlat and Nordfront have a larger proportion of annotated texts with factors below zero, while Flashback and especially Familjeliv have a larger proportion in the positive interval. The annotators had no information about the source of the data while they were annotating."
],
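The reliability computation described above can be sketched with the krippendorff package as follows; the annotator-by-text matrix below is a placeholder, and the choice of the interval level of measurement is an assumption based on the numeric [-3, 3] trait scores.

```python
# Hypothetical sketch of the reliability computation: Krippendorff's alpha over an
# annotator-by-text matrix with missing values marked as np.nan.
import numpy as np
import krippendorff

# Rows = annotators, columns = texts; np.nan marks texts an annotator did not rate.
ratings = np.array([
    [2,      np.nan, -1, 3, np.nan],
    [1,      0,      -2, 3, 2],
    [np.nan, 1,      -1, 2, 2],
], dtype=float)

alpha = krippendorff.alpha(reliability_data=ratings, level_of_measurement="interval")
print(f"Krippendorff's alpha: {alpha:.3f}")
```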
[
"To extract information from the annotated text data and make it manageable for the regression algorithm, we used Term Frequency-Inverse Document Frequency (TF-IDF) to construct features from our labeled data. TF-IDF is a measurement of the importance of continuous series of words or characters (so called n-grams) in a document, where n-grams appearing more often in documents are weighted as less important. TF-IDF is further explained in BIBREF19. In this paper, TF-IDF was used on both word and character level with bi-gram for words and quad-grams for characters."
],
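A minimal sketch of this feature-extraction step with scikit-learn is shown below; the Swedish example texts are placeholders, and whether uni-grams were kept alongside the word bi-grams is an assumption of the sketch.

```python
# Hypothetical feature-extraction sketch: word-level TF-IDF up to bi-grams combined with
# character-level TF-IDF quad-grams.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import FeatureUnion

texts = [
    "Jag tycker verkligen om att diskutera med andra människor.",
    "Det här är helt oacceptabelt och gör mig väldigt arg.",
]  # placeholder Swedish samples

features = FeatureUnion([
    ("word", TfidfVectorizer(analyzer="word", ngram_range=(1, 2))),
    ("char", TfidfVectorizer(analyzer="char", ngram_range=(4, 4))),
])

X = features.fit_transform(texts)
print(X.shape)  # (n_texts, n_word_features + n_char_features)
```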
[
"Several regression models were tested from the scikit-learn framework BIBREF20, such as RandomForestRegressor, LinearSVR, and KNeighborsRegressor. The Support Vector Machine Regression yielded the lowest MAE and MSE while performing a cross validated grid search for all the models and a range of hyperparameters."
],
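A small sketch of this model-selection step is given below. The texts, scores and hyper-parameter grid are placeholders (the grid actually searched in the paper is not specified), and a single trait score per text is assumed.

```python
# Hypothetical model-selection sketch: cross-validated grid search over LinearSVR
# hyper-parameters on TF-IDF features, one regressor per Big Five trait.
from sklearn.svm import LinearSVR
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer

# Placeholder annotated samples: texts and one trait score per text in [-3, 3].
texts = ["sample text one", "sample text two", "sample text three",
         "sample text four", "sample text five", "sample text six"]
scores = [1.0, -2.0, 0.5, 3.0, -1.0, 2.0]

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(analyzer="word", ngram_range=(1, 2))),
    ("svr", LinearSVR(max_iter=10000)),
])

# Illustrative hyper-parameter grid only.
param_grid = {"svr__C": [0.1, 1.0, 10.0], "svr__epsilon": [0.0, 0.1, 0.5]}
search = GridSearchCV(pipeline, param_grid, cv=3, scoring="neg_mean_absolute_error")
search.fit(texts, scores)
print(search.best_params_, -search.best_score_)
```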
[
"As our language model we used ULMFiT BIBREF21. ULMFiT is an NLP transfer learning algorithm that we picked due to its straightforward implementation in the fast.ai library, and its promising results on small datasets. As the basis of our ULMFiT model we built a Swedish language model on a large corpus of Swedish text retrieved from the Swedish Wikipedia and the aforementioned forums Flashback and Familjeliv. We then used our annotated samples to fine-tune the language model, resulting in a classifier for the Big Five factors."
],
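A rough ULMFiT-style sketch with the fastai library follows. It is only an approximation of the setup described above: the fastai version, file and column names, hyper-parameters, and the binarized high/low labels are assumptions (the paper's main models predict trait values on the [-3, 3] scale rather than classes).

```python
# Hypothetical ULMFiT-style sketch: (1) train a language model on a large unlabeled Swedish
# corpus, (2) reuse its encoder in a downstream learner fine-tuned on the annotated texts.
import pandas as pd
from fastai.text.all import (TextDataLoaders, language_model_learner,
                             text_classifier_learner, AWD_LSTM, accuracy)

corpus_df = pd.read_csv("swedish_corpus.csv")        # placeholder; column: "text"
annotated_df = pd.read_csv("annotated_texts.csv")    # placeholder; columns: "text", "label"

# 1) Language model trained from scratch on the unlabeled corpus (pretrained=False,
#    since no pre-trained Swedish AWD-LSTM weights are assumed to be available).
dls_lm = TextDataLoaders.from_df(corpus_df, text_col="text", is_lm=True, valid_pct=0.1)
lm_learn = language_model_learner(dls_lm, AWD_LSTM, pretrained=False)
lm_learn.fit_one_cycle(1, 1e-2)
lm_learn.save_encoder("swedish_encoder")

# 2) Downstream learner on the annotated data, reusing the language-model encoder.
dls_clf = TextDataLoaders.from_df(annotated_df, text_col="text", label_col="label",
                                  text_vocab=dls_lm.vocab)
clf_learn = text_classifier_learner(dls_clf, AWD_LSTM, pretrained=False, metrics=accuracy)
clf_learn.load_encoder("swedish_encoder")
clf_learn.fit_one_cycle(1, 1e-2)
```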
[
"The performance of the models was evaluated with cross validation measuring MAE, MSE and $\\textrm {R}^2$. We also introduced a dummy regressor. The dummy regressor is trained to always predict the mean value of the training data. In this way it was possible to see whether the trained models predicted better than just always guessing the mean value of the test data. To calculate the $\\text{R}^2$ score we use the following measurement:",
"where $y$ is the actual annotated score, $\\bar{y}$ is the sample mean, and $e$ is the residual."
],
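The displayed equation referenced just above does not survive in this extraction. Under the stated definitions of $y$, $\bar{y}$ and $e$, the standard coefficient of determination consistent with the text would be, as a reconstruction under that assumption:

```latex
% Reconstruction under the assumption that the omitted display is the standard
% coefficient of determination, consistent with the definitions of y, \bar{y} and e:
\mathrm{R}^2 = 1 - \frac{\sum_{i} e_i^{2}}{\sum_{i} \left( y_i - \bar{y} \right)^{2}}
```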
[
"The models were evaluated using 5-fold cross validation. The results for the cross validation is shown in table TABREF33 and TABREF34.",
"For both datasets $D_\\textrm {\\textit {LR}}$ and $D_\\textrm {\\textit {HR}}$, the trained models predict the Big Five traits better than the dummy regressor. This means that the trained models were able to catch signals of personality from the annotated data. Extraversion and agreeableness were easiest to estimate. The smallest differences in MAE between the trained models and the dummy regressor are for extraversion and conscientiousness, for models trained on the lower reliability dataset $D_\\textrm {\\textit {LR}}$. The explanation for this might be that both of the factors are quite complicated to detect in texts and therefore hard to annotate. For the models based on $D_\\textrm {\\textit {HR}}$, we can find a large difference between the MAE for both stability and agreeableness. Agreeableness measures for example how kind and sympathetic a person is, which appears much more naturally in text compared to extraversion and conscientiousness. Stability, in particular low stability, can be displayed in writing as expressions of emotions like anger or fear, and these are often easy to identify."
],
[
"As set out in Section SECREF2, earlier attempts at automatic analysis of the Big Five traits have often avoided modelling the factors on a spectrum, instead opting to simplify the task to a binary classification of high or low. We consider our $[-3, 3]$ interval-based representation to be preferable, as it is sufficiently granular to express realistic nuances while remaining simple enough not to overtax annotators with too many choices. Nevertheless, to gain some understanding of how our approach would compare to the state of the art, we modified our methods to train binary classifiers on the large and small datasets. For the purposes of this training a factor value below zero was regarded as low and values above as high, and the classifiers learnt to distinguish only these two classes. The accuracy during cross validation was calculated and is presented in Table TABREF36. Note that a direct comparison with earlier systems is problematic due to the differences in datasets. This test merely serves to ensure that our approach is not out of line with the general performance in the field.",
"We conducted a head-to-head test (paired sample t-test) to compare the trained language model against the corresponding dummy regressor and found that the mean absolute error was significantly lower for the language model $\\textrm {\\textit {LM}}(D_\\textrm {\\textit {HR}})$, t(4) = 4.32, p = .02, as well as the $\\textrm {\\textit {LM}}(D_\\textrm {\\textit {LR}})$, t(4) = 4.47, p = .02. Thus, the trained language models performed significantly better than a dummy regressor. In light of these differences and the slightly lower mean absolute error $\\textrm {\\textit {LM}}(D_\\textrm {\\textit {HR}})$ compared to the $\\textrm {\\textit {LM}}(D_\\textrm {\\textit {LR}})$ [t(4) = 2.73, p = .05] and considering that $\\textrm {\\textit {LM}}(D_\\textrm {\\textit {HR}})$ is the best model in terms of $\\textrm {R}^2$ we take it for testing in the wild."
],
[
"Textual domain differences may affect the performance of a trained model more than expected. In the literature systems are often only evaluated on texts from their training domain. However, in our experience this is insufficient to assess the fragility of a system towards the data, and thus its limitations with respect to an actual application and generalizability across different domains. It is critical to go beyond an evaluation of trained models on the initial training data domain, and to test the systems “in the wild”, on texts coming from other sources, possibly written with a different purpose. Most of the texts in our training data have a conversational nature, given their origin in online forums, or occasionally in opinionated editorial articles. Ideally a Big Five classifier should be able to measure personality traits in any human-authored text of a reasonable length. In practice though it seems likely that the subtleties involved in personality detection could be severely affected by superficial differences in language and form. To gain some understanding on how our method would perform outside the training domain, we selected our best model $\\textrm {\\textit {LM}}(D_\\textrm {\\textit {HR}})$ and evaluated it on texts from two other domains."
],
[
"The cover letters dataset was created during a master thesis project at Uppsala University. The aim of the thesis project was to investigate the relationship between self-reported personality and personality traits extracted from texts. In the course of the thesis, 200 study participants each wrote a cover letter and answered a personality form BIBREF22. 186 of the participants had complete answers and therefore the final dataset contained 186 texts and the associated Big Five personality scores.",
"We applied $\\textrm {\\textit {LM}}(D_\\textrm {\\textit {HR}})$ to the cover letters to produce Big Five trait analyses, and we compared the results to the scores from the personality questionnaire. This comparison, measured in the form of the evaluation metrics MAE, MSE and $\\textrm {R}^2$, is shown in Table TABREF39. As it can be seen in the table, model performance is poor and $\\textrm {R}^2$ was not above zero for any of the factors."
],
[
"The self-descriptions dataset is the result of an earlier study conducted at Uppsala University. The participants, 68 psychology students (on average 7.7 semester), were instructed to describe themselves in text, yielding 68 texts with an average of approximately 450 words. The descriptions were made on one (randomly chosen) of nine themes like politics and social issues, film and music, food and drinks, and family and children. Each student also responded to a Big Five personality questionnaires consisting of 120 items. The distribution of the Big Five traits for the dataset is shown in figure FIGREF42.",
"Given this data, we applied $\\textrm {\\textit {LM}}(D_\\textrm {\\textit {HR}})$ to the self-description texts to compute the Big Five personality trait values. We then compared the results to the existing survey assessment using the evaluation metrics MAE, MSE and $\\textrm {R}^2$, as shown in Table TABREF40. As it can be seen in the table, model performance was poor and $\\textrm {R}^2$, like the results for the cover letters dataset, was not above zero for any of the Big Five factors."
],
[
"In this paper, we aimed to create a model that is able to extract Big Five personality traits from a text using machine learning techniques. We also aimed to investigate whether a model trained on a large amount of solo-annotated data performs better than a model trained on a smaller amount of high-quality data. Finally, we aimed to measure model performance in the wild, on data from two domains that differ from the training data. The results of our experiments showed that we were able to create models with reasonable performance (compared to a dummy classifier). These models exhibit a mean absolute error and accuracy in line with state-or-the-art models presented in previous research, with the caveat that comparisons over different datasets are fraught with difficulties. We also found that using a smaller amount of high-quality training data with multi-annotator assessments resulted in models that outperformed models based on a large amount of solo-annotated data. Finally, testing our best model ($\\textrm {\\textit {LM}}(D_\\textrm {\\textit {HR}})$) in the wild and found that the model could not, reliably, extract people’s personality from their text. These findings reveal the importance of the quality of the data, but most importantly, the necessity of examining models in the wild. Taken together, our results show that extracting personality traits from a text remains a challenge and that no firm conclusions can be made on model performance before testing in the wild. We hope that the findings will be guiding for future research."
]
],
"section_name": [
"Introduction",
"Related Work",
"Model Training",
"Model Training ::: Data",
"Model Training ::: Annotation",
"Model Training ::: Feature Extraction",
"Model Training ::: Regression Model",
"Model Training ::: Language Model",
"Model Training ::: Model Performance",
"Model Training ::: Model Performance ::: Cross Validation Test",
"Model Training ::: Model Performance ::: Binary Classification Test",
"Personality Detection in the Wild",
"Personality Detection in the Wild ::: Cover Letters Dataset",
"Personality Detection in the Wild ::: Self-Descriptions Dataset",
"Conclusions"
]
} | {
"answers": [
{
"annotation_id": [
"28bc97f9eb3f6b365ba9add8d1a37d5159ba085f",
"8d30bad02fb2f8911b066f27ced0c7452b6aa0ac"
],
"answer": [
{
"evidence": [
"As our language model we used ULMFiT BIBREF21. ULMFiT is an NLP transfer learning algorithm that we picked due to its straightforward implementation in the fast.ai library, and its promising results on small datasets. As the basis of our ULMFiT model we built a Swedish language model on a large corpus of Swedish text retrieved from the Swedish Wikipedia and the aforementioned forums Flashback and Familjeliv. We then used our annotated samples to fine-tune the language model, resulting in a classifier for the Big Five factors."
],
"extractive_spans": [
"ULMFiT"
],
"free_form_answer": "",
"highlighted_evidence": [
"As our language model we used ULMFiT BIBREF21."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"As our language model we used ULMFiT BIBREF21. ULMFiT is an NLP transfer learning algorithm that we picked due to its straightforward implementation in the fast.ai library, and its promising results on small datasets. As the basis of our ULMFiT model we built a Swedish language model on a large corpus of Swedish text retrieved from the Swedish Wikipedia and the aforementioned forums Flashback and Familjeliv. We then used our annotated samples to fine-tune the language model, resulting in a classifier for the Big Five factors."
],
"extractive_spans": [
"ULMFiT BIBREF21"
],
"free_form_answer": "",
"highlighted_evidence": [
"As our language model we used ULMFiT BIBREF21."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"annotation_id": [
"7d0d7cc7499c33836b86ff1c81c2c98fe30ee689",
"ff63ea71df9e5da8af6d16956b8707538dc8a4a4"
],
"answer": [
{
"evidence": [
"Several regression models were tested from the scikit-learn framework BIBREF20, such as RandomForestRegressor, LinearSVR, and KNeighborsRegressor. The Support Vector Machine Regression yielded the lowest MAE and MSE while performing a cross validated grid search for all the models and a range of hyperparameters."
],
"extractive_spans": [
"RandomForestRegressor",
"LinearSVR",
"KNeighborsRegressor",
"Support Vector Machine Regression"
],
"free_form_answer": "",
"highlighted_evidence": [
"Several regression models were tested from the scikit-learn framework BIBREF20, such as RandomForestRegressor, LinearSVR, and KNeighborsRegressor. The Support Vector Machine Regression yielded the lowest MAE and MSE while performing a cross validated grid search for all the models and a range of hyperparameters."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Several regression models were tested from the scikit-learn framework BIBREF20, such as RandomForestRegressor, LinearSVR, and KNeighborsRegressor. The Support Vector Machine Regression yielded the lowest MAE and MSE while performing a cross validated grid search for all the models and a range of hyperparameters."
],
"extractive_spans": [
"RandomForestRegressor",
"LinearSVR",
"KNeighborsRegressor"
],
"free_form_answer": "",
"highlighted_evidence": [
"Several regression models were tested from the scikit-learn framework BIBREF20, such as RandomForestRegressor, LinearSVR, and KNeighborsRegressor."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"annotation_id": [
"4310e5a0f0921e21f238143654cb5437334e7354",
"e37787151e579575d9ef88f5dc810a8593682bb2"
],
"answer": [
{
"evidence": [
"The intra-annotator reliability of both datasets $D_\\textrm {\\textit {LR}}$ and $D_\\textrm {\\textit {HR}}$ is shown in Table TABREF21. The reliability was calculated using the Krippendorff's alpha coefficient. Krippendorff's alpha can handle missing values, which in this case was necessary since many of the texts were annotated by only a few annotators."
],
"extractive_spans": [],
"free_form_answer": "Answer with content missing: (Table 2): Krippednorff's alpha coefficient for dataset is: Stability -0.26, Extraversion 0.07, Openness 0.36, Agreeableness 0.51, Conscientiousness 0.31",
"highlighted_evidence": [
"The intra-annotator reliability of both datasets $D_\\textrm {\\textit {LR}}$ and $D_\\textrm {\\textit {HR}}$ is shown in Table TABREF21."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [],
"unanswerable": true,
"yes_no": null
}
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
}
],
"nlp_background": [
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no"
],
"question": [
"What language model is trained?",
"What machine learning models are considered?",
"What is the agreement of the dataset?"
],
"question_id": [
"3d73cb92d866448ec72a571331967da5d34dfbb1",
"708f5f83a3c356b23b27a9175f5c35ac00cdf5db",
"9240ee584d4354349601aeca333f1bc92de2165e"
],
"question_writer": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
],
"search_query": [
"",
"",
""
],
"topic_background": [
"familiar",
"familiar",
"familiar"
]
} | {
"caption": [
"Figure 1. Workflow for building the models",
"Table III NUMBER OF TRAINING SAMPLES FOR EACH OF THE PERSONALITY FACTORS",
"Figure 2. Distribution of labeled samples for each of the factors of the large dataset.",
"Figure 3. Distribution of labeled samples for each of the factors of the small dataset",
"Figure 4. Distribution of Big Five factor values for the different sources from the large dataset",
"Table IV CROSS VALIDATED MODEL PERFORMANCE OF MODELS TRAINED ON DHR",
"Table VI CROSS VALIDATED ACCURACY FOR ALL THE BINARY MODELS ON DHR AND DLR",
"Figure 5. Distribution of labeled samples for each of the factors of the cover letters dataset",
"Table VII PERFORMANCE OF THE LANGUAGE MODEL LM(DHR) TESTED ON COVER LETTERS",
"Figure 6. Distribution of labeled samples for each of the factors of the self-descriptions letters dataset"
],
"file": [
"3-Figure1-1.png",
"4-TableIII-1.png",
"5-Figure2-1.png",
"5-Figure3-1.png",
"6-Figure4-1.png",
"6-TableIV-1.png",
"7-TableVI-1.png",
"7-Figure5-1.png",
"7-TableVII-1.png",
"8-Figure6-1.png"
]
} | [
"What is the agreement of the dataset?"
] | [
[
"1910.09916-Model Training ::: Annotation-4"
]
] | [
"Answer with content missing: (Table 2): Krippednorff's alpha coefficient for dataset is: Stability -0.26, Extraversion 0.07, Openness 0.36, Agreeableness 0.51, Conscientiousness 0.31"
] | 414 |
1803.05160 | How to evaluate sentiment classifiers for Twitter time-ordered data? | Social media are becoming an increasingly important source of information about the public mood regarding issues such as elections, Brexit, stock market, etc. In this paper we focus on sentiment classification of Twitter data. Construction of sentiment classifiers is a standard text mining task, but here we address the question of how to properly evaluate them as there is no settled way to do so. Sentiment classes are ordered and unbalanced, and Twitter produces a stream of time-ordered data. The problem we address concerns the procedures used to obtain reliable estimates of performance measures, and whether the temporal ordering of the training and test data matters. We collected a large set of 1.5 million tweets in 13 European languages. We created 138 sentiment models and out-of-sample datasets, which are used as a gold standard for evaluations. The corresponding 138 in-sample datasets are used to empirically compare six different estimation procedures: three variants of cross-validation, and three variants of sequential validation (where test set always follows the training set). We find no significant difference between the best cross-validation and sequential validation. However, we observe that all cross-validation variants tend to overestimate the performance, while the sequential methods tend to underestimate it. Standard cross-validation with random selection of examples is significantly worse than the blocked cross-validation, and should not be used to evaluate classifiers in time-ordered data scenarios. | {
"paragraphs": [
[
"Social media are becoming an increasingly important source of information about the public mood regarding issues such as elections, Brexit, stock market, etc. In this paper we focus on sentiment classification of Twitter data. Construction of sentiment classifiers is a standard text mining task, but here we address the question of how to properly evaluate them as there is no settled way to do so. Sentiment classes are ordered and unbalanced, and Twitter produces a stream of time-ordered data. The problem we address concerns the procedures used to obtain reliable estimates of performance measures, and whether the temporal ordering of the training and test data matters. We collected a large set of 1.5 million tweets in 13 European languages. We created 138 sentiment models and out-of-sample datasets, which are used as a gold standard for evaluations. The corresponding 138 in-sample datasets are used to empirically compare six different estimation procedures: three variants of cross-validation, and three variants of sequential validation (where test set always follows the training set). We find no significant difference between the best cross-validation and sequential validation. However, we observe that all cross-validation variants tend to overestimate the performance, while the sequential methods tend to underestimate it. Standard cross-validation with random selection of examples is significantly worse than the blocked cross-validation, and should not be used to evaluate classifiers in time-ordered data scenarios."
],
[
"Online social media are becoming increasingly important in our society. Platforms such as Twitter and Facebook influence the daily lives of people around the world. Their users create and exchange a wide variety of contents on social media, which presents a valuable source of information about public sentiment regarding social, economic or political issues. In this context, it is important to develop automatic methods to retrieve and analyze information from social media.",
"In the paper we address the task of sentiment analysis of Twitter data. The task encompasses identification and categorization of opinions (e.g., negative, neutral, or positive) written in quasi-natural language used in Twitter posts. We focus on estimation procedures of the predictive performance of machine learning models used to address this task. Performance estimation procedures are key to understand the generalization ability of the models since they present approximations of how these models will behave on unseen data. In the particular case of sentiment analysis of Twitter data, high volumes of content are continuously being generated and there is no immediate feedback about the true class of instances. In this context, it is fundamental to adopt appropriate estimation procedures in order to get reliable estimates about the performance of the models.",
"The complexity of Twitter data raises some challenges on how to perform such estimations, as, to the best of our knowledge, there is currently no settled approach to this. Sentiment classes are typically ordered and unbalanced, and the data itself is time-ordered. Taking these properties into account is important for the selection of appropriate estimation procedures.",
"The Twitter data shares some characteristics of time series and some of static data. A time series is an array of observations at regular or equidistant time points, and the observations are in general dependent on previous observations BIBREF0 . On the other hand, Twitter data is time-ordered, but the observations are short texts posted by Twitter users at any time and frequency. It can be assumed that original Twitter posts are not directly dependent on previous posts. However, there is a potential indirect dependence, demonstrated in important trends and events, through influential users and communities, or individual user's habits. These long-term topic drifts are typically not taken into account by the sentiment analysis models.",
"We study different performance estimation procedures for sentiment analysis in Twitter data. These estimation procedures are based on (i) cross-validation and (ii) sequential approaches typically adopted for time series data. On one hand, cross-validations explore all the available data, which is important for the robustness of estimates. On the other hand, sequential approaches are more realistic in the sense that estimates are computed on a subset of data always subsequent to the data used for training, which means that they take time-order into account.",
"Our experimental study is performed on a large collection of nearly 1.5 million Twitter posts, which are domain-free and in 13 different languages. A realistic scenario is emulated by partitioning the data into 138 datasets by language and time window. Each dataset is split into an in-sample (a training plus test set), where estimation procedures are applied to approximate the performance of a model, and an out-of-sample used to compute the gold standard. Our goal is to understand the ability of each estimation procedure to approximate the true error incurred by a given model on the out-of-sample data.",
"The paper is structured as follows. sec:relatedWork provides an overview of the state-of-the-art in estimation methods. In section sec:methods we describe the experimental setting for an empirical comparison of estimation procedures for sentiment classification of time-ordered Twitter data. We describe the Twitter sentiment datasets, a machine learning algorithm we employ, performance measures, and how the gold standard and estimation results are produced. In section sec:results we present and discuss the results of comparisons of the estimation procedures along several dimensions. sec-conclusions provide the limitations of our work and give directions for the future."
],
[
"In this section we briefly review typical estimation methods used in sentiment classification of Twitter data. In general, for time-ordered data, the estimation methods used are variants of cross-validation, or are derived from the methods used to analyze time series data. We examine the state-of-the-art of these estimation methods, pointing out their advantages and drawbacks.",
"Several works in the literature on sentiment classification of Twitter data employ standard cross-validation procedures to estimate the performance of sentiment classifiers. For example, Agarwal et al. BIBREF1 and Mohammad et al. BIBREF2 propose different methods for sentiment analysis of Twitter data and estimate their performance using 5-fold and 10-fold cross-validation, respectively. Bermingham and Smeaton BIBREF3 produce a comparative study of sentiment analysis between blogs and Twitter posts, where models are compared using 10-fold cross-validation. Saif et al. BIBREF4 asses binary classification performance of nine Twitter sentiment datasets by 10-fold cross validation. Other, similar applications of cross-validation are given in BIBREF5 , BIBREF6 .",
"On the other hand, there are also approaches that use methods typical for time series data. For example, Bifet and Frank BIBREF7 use the prequential (predictive sequential) method to evaluate a sentiment classifier on a stream of Twitter posts. Moniz et al. BIBREF8 present a method for predicting the popularity of news from Twitter data and sentiment scores, and estimate its performance using a sequential approach in multiple testing periods.",
"The idea behind the INLINEFORM0 -fold cross-validation is to randomly shuffle the data and split it in INLINEFORM1 equally-sized folds. Each fold is a subset of the data randomly picked for testing. Models are trained on the INLINEFORM2 folds and their performance is estimated on the left-out fold. INLINEFORM3 -fold cross-validation has several practical advantages, such as an efficient use of all the data. However, it is also based on an assumption that the data is independent and identically distributed BIBREF9 which is often not true. For example, in time-ordered data, such as Twitter posts, the data are to some extent dependent due to the underlying temporal order of tweets. Therefore, using INLINEFORM4 -fold cross-validation means that one uses future information to predict past events, which might hinder the generalization ability of models.",
"There are several methods in the literature designed to cope with dependence between observations. The most common are sequential approaches typically used in time series forecasting tasks. Some variants of INLINEFORM0 -fold cross-validation which relax the independence assumption were also proposed. For time-ordered data, an estimation procedure is sequential when testing is always performed on the data subsequent to the training set. Typically, the data is split into two parts, where the first is used to train the model and the second is held out for testing. These approaches are also known in the literature as the out-of-sample methods BIBREF10 , BIBREF11 .",
"Within sequential estimation methods one can adopt different strategies regarding train/test splitting, growing or sliding window setting, and eventual update of the models. In order to produce reliable estimates and test for robustness, Tashman BIBREF10 recommends employing these strategies in multiple testing periods. One should either create groups of data series according to, for example, different business cycles BIBREF12 , or adopt a randomized approach, such as in BIBREF13 . A more complete overview of these approaches is given by Tashman BIBREF10 .",
"In stream mining, where a model is continuously updated, the most commonly used estimation methods are holdout and prequential BIBREF14 , BIBREF15 . The prequential strategy uses an incoming observation to first test the model and then to train it.",
"Besides sequential estimation methods, some variants of INLINEFORM0 -fold cross-validation were proposed in the literature that are specially designed to cope with dependency in the data and enable the application of cross-validation to time-ordered data. For example, blocked cross-validation (the name is adopted from Bergmeir BIBREF11 ) was proposed by Snijders BIBREF16 . The method derives from a standard INLINEFORM1 -fold cross-validation, but there is no initial random shuffling of observations. This renders INLINEFORM2 blocks of contiguous observations.",
"The problem of data dependency for cross-validation is addressed by McQuarrie and Tsai BIBREF17 . The modified cross-validation removes observations from the training set that are dependent with the test observations. The main limitation of this method is its inefficient use of the available data since many observations are removed, as pointed out in BIBREF18 . The method is also known as non-dependent cross-validation BIBREF11 .",
"The applicability of variants of cross-validation methods in time series data, and their advantages over traditional sequential validations are corroborated by Bergmeir et al. BIBREF19 , BIBREF11 , BIBREF20 . The authors conclude that in time series forecasting tasks, the blocked cross-validations yield better error estimates because of their more efficient use of the available data. Cerqueira et al. BIBREF21 compare performance estimation of various cross-validation and out-of-sample approaches on real-world and synthetic time series data. The results indicate that cross-validation is appropriate for the stationary synthetic time series data, while the out-of-sample approaches yield better estimates for real-world data.",
"Our contribution to the state-of-the-art is a large scale empirical comparison of several estimation procedures on Twitter sentiment data. We focus on the differences between the cross-validation and sequential validation methods, to see how important is the violation of data independence in the case of Twitter posts. We consider longer-term time-dependence between the training and test sets, and completely ignore finer-scale dependence at the level of individual tweets (e.g., retweets and replies). To the best of our knowledge, there is no settled approach yet regarding proper validation of models for Twitter time-ordered data. This work provides some results which contribute to bridging that gap."
],
[
"The goal of this study is to recommend appropriate estimation procedures for sentiment classification of Twitter time-ordered data. We assume a static sentiment classification model applied to a stream of Twitter posts. In a real-case scenario, the model is trained on historical, labeled tweets, and applied to the current, incoming tweets. We emulate this scenario by exploring a large collection of nearly 1.5 million manually labeled tweets in 13 European languages (see subsection sec:data). Each language dataset is split into pairs of the in-sample data, on which a model is trained, and the out-of-sample data, on which the model is validated. The performance of the model on the out-of-sample data gives an estimate of its performance on the future, unseen data. Therefore, we first compute a set of 138 out-of-sample performance results, to be used as a gold standard (subsection sec:gold). In effect, our goal is to find the estimation procedure that best approximates this out-of-sample performance.",
"Throughout our experiments we use only one training algorithm (subsection sec:data), and two performance measures (subsection sec:measures). During training, the performance of the trained model can be estimated only on the in-sample data. However, there are different estimation procedures which yield these approximations. In machine learning, a standard procedure is cross-validation, while for time-ordered data, sequential validation is typically used. In this study, we compare three variants of cross-validation and three variants of sequential validation (subsection sec:eval-proc). The goal is to find the in-sample estimation procedure that best approximates the out-of-sample gold standard. The error an estimation procedure makes is defined as the difference to the gold standard."
],
[
"We collected a large corpus of nearly 1.5 million Twitter posts written in 13 European languages. This is, to the best of our knowledge, by far the largest set of sentiment labeled tweets publicly available. We engaged native speakers to label the tweets based on the sentiment expressed in them. The sentiment label has three possible values: negative, neutral or positive. It turned out that the human annotators perceived the values as ordered. The quality of annotations varies though, and is estimated from the self- and inter-annotator agreements. All the details about the datasets, the annotator agreements, and the ordering of sentiment values are in our previous study BIBREF22 . The sentiment distribution and quality of individual language datasets is in Table TABREF2 . The tweets in the datasets are ordered by tweet ids, which corresponds to ordering by the time of posting.",
"There are many supervised machine learning algorithms suitable for training sentiment classification models from labeled tweets. In this study we use a variant of Support Vector Machine (SVM) BIBREF23 . The basic SVM is a two-class, binary classifier. In the training phase, SVM constructs a hyperplane in a high-dimensional vector space that separates one class from the other. In the classification phase, the side of the hyperplane determines the class. A two-class SVM can be extended into a multi-class classifier which takes the ordering of sentiment values into account, and implements ordinal classification BIBREF24 . Such an extension consists of two SVM classifiers: one classifier is trained to separate the negative examples from the neutral-or-positives; the other separates the negative-or-neutrals from the positives. The result is a classifier with two hyperplanes, which partitions the vector space into three subspaces: negative, neutral, and positive. During classification, the distances from both hyperplanes determine the predicted class. A further refinement is a TwoPlaneSVMbin classifier. It partitions the space around both hyperplanes into bins, and computes the distribution of the training examples in individual bins. During classification, the distances from both hyperplanes determine the appropriate bin, but the class is determined as the majority class in the bin.",
"The vector space is defined by the features extracted from the Twitter posts. The posts are first pre-processed by standard text processing methods, i.e., tokenization, stemming/lemmatization (if available for a specific language), unigram and bigram construction, and elimination of terms that do not appear at least 5 times in a dataset. The Twitter specific pre-processing is then applied, i.e, replacing URLs, Twitter usernames and hashtags with common tokens, adding emoticon features for different types of emoticons in tweets, handling of repetitive letters, etc. The feature vectors are then constructed by the Delta TF-IDF weighting scheme BIBREF25 .",
"In our previous study BIBREF22 we compared five variants of the SVM classifiers and Naive Bayes on the Twitter sentiment classification task. TwoPlaneSVMbin was always between the top, but statistically indistinguishable, best performing classifiers. It turned out that monitoring the quality of the annotation process has much larger impact on the performance than the type of the classifier used. In this study we fix the classifier, and use TwoPlaneSVMbin in all the experiments."
],
[
"Sentiment values are ordered, and distribution of tweets between the three sentiment classes is often unbalanced. In such cases, accuracy is not the most appropriate performance measure BIBREF7 , BIBREF22 . In this context, we evaluate performance with the following two metrics: Krippendorff's INLINEFORM0 BIBREF26 , and INLINEFORM1 BIBREF27 .",
" INLINEFORM0 was developed to measure the agreement between human annotators, but can also be used to measure the agreement between classification models and a gold standard. It generalizes several specialized agreement measures, takes ordering of classes into account, and accounts for the agreement by chance. INLINEFORM1 is defined as follows: DISPLAYFORM0 ",
"where INLINEFORM0 is the observed disagreement between models, and INLINEFORM1 is a disagreement, expected by chance. When models agree perfectly, INLINEFORM2 INLINEFORM3 , and when the level of agreement equals the agreement by chance, INLINEFORM4 INLINEFORM5 . Note that INLINEFORM6 can also be negative. The two disagreement measures are defined as: DISPLAYFORM0 ",
" DISPLAYFORM0 ",
"The arguments, INLINEFORM0 , and INLINEFORM1 , refer to the frequencies in a coincidence matrix, defined below. INLINEFORM2 (and INLINEFORM3 ) is a discrete sentiment variable with three possible values: negative ( INLINEFORM4 ), neutral (0), or positive ( INLINEFORM5 ). INLINEFORM6 is a difference function between the values of INLINEFORM7 and INLINEFORM8 , for ordered variables defined as: DISPLAYFORM0 ",
"Note that disagreements INLINEFORM0 and INLINEFORM1 between the extreme classes (negative and positive) are four times larger than between the neighbouring classes.",
"A coincidence matrix tabulates all pairable values of INLINEFORM0 from two models. In our case, we have a 3-by-3 coincidence matrix, and compare a model to the gold standard. The coincidence matrix is then the sum of the confusion matrix and its transpose. Each labeled tweet is entered twice, once as a INLINEFORM1 pair, and once as a INLINEFORM2 pair. INLINEFORM3 is the number of tweets labeled by the values INLINEFORM4 and INLINEFORM5 by different models, INLINEFORM6 and INLINEFORM7 are the totals for each value, and INLINEFORM8 is the grand total.",
" INLINEFORM0 is an instance of the INLINEFORM1 score, a well-known performance measure in information retrieval BIBREF28 and machine learning. We use an instance specifically designed to evaluate the 3-class sentiment models BIBREF27 . INLINEFORM2 is defined as follows: DISPLAYFORM0 ",
" INLINEFORM0 implicitly takes into account the ordering of sentiment values, by considering only the extreme labels, negative INLINEFORM1 and positive INLINEFORM2 . The middle, neutral, is taken into account only indirectly. INLINEFORM3 is the harmonic mean of precision and recall for class INLINEFORM4 , INLINEFORM5 . INLINEFORM6 INLINEFORM7 implies that all negative and positive tweets were correctly classified, and as a consequence, all neutrals as well. INLINEFORM8 INLINEFORM9 indicates that all negative and positive tweets were incorrectly classified. INLINEFORM10 does not account for correct classification by chance."
],
[
"We create the gold standard results by splitting the data into the in-sample datasets (abbreviated as in-set), and out-of-sample datasets (abbreviated as out-set). The terminology of the in- and out-set is adopted from Bergmeir et al. BIBREF11 . Tweets are ordered by the time of posting. To emulate a realistic scenario, an out-set always follows the in-set. From each language dataset (Table TABREF2 ) we create INLINEFORM0 in-sets of varying length in multiples of 10,000 consecutive tweets, where INLINEFORM1 . The out-set is the subsequent 10,000 consecutive tweets, or the remainder at the end of each language dataset. This is illustrated in Figure FIGREF10 .",
"The partitioning of the language datasets results in 138 in-sets and corresponding out-sets. For each in-set, we train a TwoPlaneSVMbin sentiment classification model, and measure its performance, in terms of INLINEFORM0 and INLINEFORM1 , on the corresponding out-set. The results are in Tables TABREF11 and TABREF12 . Note that the performance measured by INLINEFORM2 is considerably lower in comparison to INLINEFORM3 , since the baseline for INLINEFORM4 is classification by chance.",
"The 138 in-sets are used to train sentiment classification models and estimate their performance. The goal of this study is to analyze different estimation procedures in terms of how well they approximate the out-set gold standard results shown in Tables TABREF11 and TABREF12 ."
],
[
"There are different estimation procedures, some more suitable for static data, while others are more appropriate for time-series data. Time-ordered Twitter data shares some properties of both types of data. When training an SVM model, the order of tweets is irrelevant and the model does not capture the dynamics of the data. When applying the model, however, new tweets might introduce new vocabulary and topics. As a consequence, the temporal ordering of training and test data has a potential impact on the performance estimates.",
"We therefore compare two classes of estimation procedures. Cross-validation, commonly used in machine learning for model evaluation on static data, and sequential validation, commonly used for time-series data. There are many variants and parameters for each class of procedures. Our datasets are relatively large and an application of each estimation procedure takes several days to complete. We have selected three variants of each procedure to provide answers to some relevant questions.",
"First, we apply 10-fold cross-validation where the training:test set ratio is always 9:1. Cross-validation is stratified when the fold partitioning is not completely random, but each fold has roughly the same class distribution. We also compare standard random selection of examples to the blocked form of cross-validation BIBREF16 , BIBREF11 , where each fold is a block of consecutive tweets. We use the following abbreviations for cross-validations:",
"xval(9:1, strat, block) - 10-fold, stratified, blocked;",
"xval(9:1, no-strat, block) - 10-fold, not stratified, blocked;",
"xval(9:1, strat, rand) - 10-fold, stratified, random selection of examples.",
"In sequential validation, a sample consists of the training set immediately followed by the test set. We vary the ratio of the training and test set sizes, and the number and distribution of samples taken from the in-set. The number of samples is 10 or 20, and they are distributed equidistantly or semi-equidistantly. In all variants, samples cover the whole in-set, but they are overlapping. See Figure FIGREF20 for illustration. We use the following abbreviations for sequential validations:",
"seq(9:1, 20, equi) - 9:1 training:test ratio, 20 equidistant samples,",
"seq(9:1, 10, equi) - 9:1 training:test ratio, 10 equidistant samples,",
"seq(2:1, 10, semi-equi) - 2:1 training:test ratio, 10 samples randomly selected out of 20 equidistant points."
],
[
"We compare six estimation procedures in terms of different types of errors they incur. The error is defined as the difference to the gold standard. First, the magnitude and sign of the errors show whether a method tends to underestimate or overestimate the performance, and by how much (subsection sec:median-errors). Second, relative errors give fractions of small, moderate, and large errors that each procedure incurs (subsection sec:rel-errors). Third, we rank the estimation procedures in terms of increasing absolute errors, and estimate the significance of the overall ranking by the Friedman-Nemenyi test (subsection sec:friedman). Finally, selected pairs of estimation procedures are compared by the Wilcoxon signed-rank test (subsection sec:wilcoxon)."
],
[
"An estimation procedure estimates the performance (abbreviated INLINEFORM0 ) of a model in terms of INLINEFORM1 and INLINEFORM2 . The error it incurs is defined as the difference to the gold standard performance (abbreviated INLINEFORM3 ): INLINEFORM4 . The validation results show high variability of the errors, with skewed distribution and many outliers. Therefore, we summarize the errors in terms of their medians and quartiles, instead of the averages and variances.",
"The median errors of the six estimation procedures are in Tables TABREF22 and TABREF23 , measured by INLINEFORM0 and INLINEFORM1 , respectively.",
"Figure FIGREF24 depicts the errors with box plots. The band inside the box denotes the median, the box spans the second and third quartile, and the whiskers denote 1.5 interquartile range. The dots correspond to the outliers. Figure FIGREF24 shows high variability of errors for individual datasets. This is most pronounced for the Serbian/Croatian/Bosnian (scb) and Portuguese (por) datasets where variation in annotation quality (scb) and a radical topic shift (por) were observed. Higher variability is also observed for the Spanish (spa) and Albanian (alb) datasets, which have poor sentiment annotation quality (see BIBREF22 for details).",
"The differences between the estimation procedures are easier to detect when we aggregate the errors over all language datasets. The results are in Figures FIGREF25 and FIGREF26 , for INLINEFORM0 and INLINEFORM1 , respectively. In both cases we observe that the cross-validation procedures (xval) consistently overestimate the performance, while the sequential validations (seq) underestimate it. The largest overestimation errors are incurred by the random cross-validation, and the largest underestimations by the sequential validation with the training:test set ratio 2:1. We also observe high variability of errors, with many outliers. The conclusions are consistent for both measures, INLINEFORM2 and INLINEFORM3 ."
],
[
"Another useful analysis of estimation errors is provided by a comparison of relative errors. The relative error is the absolute error an estimation procedure incurs divided by the gold standard result: INLINEFORM0 . We chose two, rather arbitrary, thresholds of 5% and 30%, and classify the relative errors as small ( INLINEFORM1 ), moderate ( INLINEFORM2 ), and large ( INLINEFORM3 ).",
"Figure FIGREF28 shows the proportion of the three types of errors, measured by INLINEFORM0 , for individual language datasets. Again, we observe a higher proportion of large errors for languages with poor annotations (alb, spa), annotations of different quality (scb), and different topics (por).",
"Figures FIGREF29 and FIGREF30 aggregate the relative errors across all the datasets, for INLINEFORM0 and INLINEFORM1 , respectively. The proportion of errors is consistent between INLINEFORM2 and INLINEFORM3 , but there are more large errors when the performance is measured by INLINEFORM4 . This is due to smaller error magnitude when the performance is measured by INLINEFORM5 in contrast to INLINEFORM6 , since INLINEFORM7 takes classification by chance into account. With respect to individual estimation procedures, there is a considerable divergence of the random cross-validation. For both performance measures, INLINEFORM8 and INLINEFORM9 , it consistently incurs higher proportion of large errors and lower proportion of small errors in comparison to the rest of the estimation procedures."
],
[
"The Friedman test is used to compare multiple procedures over multiple datasets BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 . For each dataset, it ranks the procedures by their performance. It tests the null hypothesis that the average ranks of the procedures across all the datasets are equal. If the null hypothesis is rejected, one applies the Nemenyi post-hoc test BIBREF33 on pairs of procedures. The performance of two procedures is significantly different if their average ranks differ by at least the critical difference. The critical difference depends on the number of procedures to compare, the number of different datasets, and the selected significance level.",
"In our case, the performance of an estimation procedure is taken as the absolute error it incurs: INLINEFORM0 . The estimation procedure with the lowest absolute error gets the lowest (best) rank. The results of the Friedman-Nemenyi test are in Figures FIGREF32 and FIGREF33 , for INLINEFORM1 and INLINEFORM2 , respectively.",
"For both performance measures, INLINEFORM0 and INLINEFORM1 , the Friedman rankings are the same. For six estimation procedures, 13 language datasets, and the 5% significance level, the critical difference is INLINEFORM2 . In the case of INLINEFORM3 (Figure FIGREF33 ) all six estimation procedures are within the critical difference, so their ranks are not significantly different. In the case of INLINEFORM4 (Figure FIGREF32 ), however, the two best methods are significantly better than the random cross-validation."
],
[
"The Wilcoxon signed-rank test is used to compare two procedures on related data BIBREF34 , BIBREF32 . It ranks the differences in performance of the two procedures, and compares the ranks for the positive and negative differences. Greater differences count more, but the absolute magnitudes are ignored. It tests the null hypothesis that the differences follow a symmetric distribution around zero. If the null hypothesis is rejected one can conclude that one procedure outperforms the other at a selected significance level.",
"In our case, the performance of pairs of estimation procedures is compared at the level of language datasets. The absolute errors of an estimation procedure are averaged across the in-sets of a language. The average absolute error is then INLINEFORM0 , where INLINEFORM1 is the number of in-sets. The results of the Wilcoxon test, for selected pairs of estimation procedures, for both INLINEFORM2 and INLINEFORM3 , are in Figure FIGREF35 .",
"The Wilcoxon test results confirm and reinforce the main results of the previous sections. Among the cross-validation procedures, blocked cross-validation is consistently better than the random cross-validation, at the 1% significance level. Stratified approach is better than non-stratified, but significantly (5% level) only for INLINEFORM0 . The comparison of the sequential validation procedures is less conclusive. The training:test set ratio 9:1 is better than 2:1, but significantly (at the 5% level) only for INLINEFORM1 . With the ratio 9:1 fixed, 20 samples yield better performance estimates than 10 samples, but significantly (5% level) only for INLINEFORM2 . We found no significant difference between the best cross-validation and sequential validation procedures in terms how well they estimate the average absolute errors."
],
[
"In this paper we present an extensive empirical study about the performance estimation procedures for sentiment analysis of Twitter data. Currently, there is no settled approach on how to properly evaluate models in such a scenario. Twitter time-ordered data shares some properties of static data for text mining, and some of time series data. Therefore, we compare estimation procedures developed for both types of data.",
"The main result of the study is that standard, random cross-validation should not be used when dealing with time-ordered data. Instead, one should use blocked cross-validation, a conclusion already corroborated by Bergmeir et al. BIBREF19 , BIBREF11 . Another result is that we find no significant differences between the blocked cross-validation and the best sequential validation. However, we do find that cross-validations typically overestimate the performance, while sequential validations underestimate it.",
"The results are robust in the sense that we use two different performance measures, several comparisons and tests, and a very large collection of data. To the best of our knowledge, we analyze and provide by far the largest set of manually sentiment-labeled tweets publicly available.",
"There are some biased decisions in our creation of the gold standard though, which limit the generality of the results reported, and should be addressed in the future work. An out-set always consists of 10,000 tweets, and immediately follows the in-sets. We do not consider how the performance drops over longer out-sets, nor how frequently should a model be updated. More importantly, we intentionally ignore the issue of dependent observations, between the in- and out-sets, and between the training and test sets. In the case of tweets, short-term dependencies are demonstrated in the form of retweets and replies. Medium- and long-term dependencies are shaped by periodic events, influential users and communities, or individual user's habits. When this is ignored, the model performance is likely overestimated. Since we do this consistently, our comparative results still hold. The issue of dependent observations was already addressed for blocked cross-validation BIBREF35 , BIBREF20 by removing adjacent observations between the training and test sets, thus effectively creating a gap between the two. Finally, it should be noted that different Twitter language datasets are of different sizes and annotation quality, belong to different time periods, and that there are time periods in the datasets without any manually labeled tweets."
],
[
"All Twitter data were collected through the public Twitter API and are subject to the Twitter terms and conditions. The Twitter language datasets are available in a public language resource repository clarin.si at http://hdl.handle.net/11356/1054, and are described in BIBREF22 . There are 15 language files, where the Serbian/Croatian/Bosnian dataset is provided as three separate files for the constituent languages. For each language and each labeled tweet, there is the tweet ID (as provided by Twitter), the sentiment label (negative, neutral, or positive), and the annotator ID (anonymized). Note that Twitter terms do not allow to openly publish the original tweets, they have to be fetched through the Twitter API. Precise details how to fetch the tweets, given tweet IDs, are provided in Twitter API documentation https://developer.twitter.com/en/docs/tweets/post-and-engage/api-reference/get-statuses-lookup. However, upon request to the corresponding author, a bilateral agreement on the joint use of the original data can be reached.",
"The TwoPlaneSVMbin classifier and several other machine learning algorithms are implemented in an open source LATINO library BIBREF36 . LATINO is a light-weight set of software components for building text mining applications, openly available at https://github.com/latinolib.",
"All the performance results, for gold standard and the six estimation procedures, are provided in a form which allows for easy reproduction of the presented results. The R code and data files needed to reproduce all the figures and tables in the paper are available at http://ltorgo.github.io/TwitterDS/."
],
[
"Igor Mozetič and Jasmina Smailović acknowledge financial support from the H2020 FET project DOLFINS (grant no. 640772), and the Slovenian Research Agency (research core funding no. P2-0103).",
"Luis Torgo and Vitor Cerqueira acknowledge financing by project “Coral - Sustainable Ocean Exploitation: Tools and Sensors/NORTE-01-0145-FEDER-000036”, financed by the North Portugal Regional Operational Programme (NORTE 2020), under the PORTUGAL 2020 Partnership Agreement, and through the European Regional Development Fund (ERDF).",
"We thank Miha Grčar and Sašo Rutar for valuable discussions and implementation of the LATINO library."
]
],
"section_name": [
"Abstract",
"Introduction",
"Related work",
"Methods and experiments",
"Data and models",
"Performance measures",
"Gold standard",
"Estimation procedures",
"Results and discussion",
"Median errors",
"Relative errors",
"Friedman test",
"Wilcoxon test",
"Conclusions",
"Data and code availability",
"Acknowledgements"
]
} | {
"answers": [
{
"annotation_id": [
"2913dc7aa957eba425884f4446cf377cce62b244",
"74d87d1dd4df2f7416dc0d3accec833d9efb48c4"
],
"answer": [
{
"evidence": [
"We compare six estimation procedures in terms of different types of errors they incur. The error is defined as the difference to the gold standard. First, the magnitude and sign of the errors show whether a method tends to underestimate or overestimate the performance, and by how much (subsection sec:median-errors). Second, relative errors give fractions of small, moderate, and large errors that each procedure incurs (subsection sec:rel-errors). Third, we rank the estimation procedures in terms of increasing absolute errors, and estimate the significance of the overall ranking by the Friedman-Nemenyi test (subsection sec:friedman). Finally, selected pairs of estimation procedures are compared by the Wilcoxon signed-rank test (subsection sec:wilcoxon).",
"The differences between the estimation procedures are easier to detect when we aggregate the errors over all language datasets. The results are in Figures FIGREF25 and FIGREF26 , for INLINEFORM0 and INLINEFORM1 , respectively. In both cases we observe that the cross-validation procedures (xval) consistently overestimate the performance, while the sequential validations (seq) underestimate it. The largest overestimation errors are incurred by the random cross-validation, and the largest underestimations by the sequential validation with the training:test set ratio 2:1. We also observe high variability of errors, with many outliers. The conclusions are consistent for both measures, INLINEFORM2 and INLINEFORM3 ."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"We compare six estimation procedures in terms of different types of errors they incur. The error is defined as the difference to the gold standard. First, the magnitude and sign of the errors show whether a method tends to underestimate or overestimate the performance, and by how much (subsection sec:median-errors). ",
"The differences between the estimation procedures are easier to detect when we aggregate the errors over all language datasets. The results are in Figures FIGREF25 and FIGREF26 , for INLINEFORM0 and INLINEFORM1 , respectively. In both cases we observe that the cross-validation procedures (xval) consistently overestimate the performance, while the sequential validations (seq) underestimate it. The largest overestimation errors are incurred by the random cross-validation, and the largest underestimations by the sequential validation with the training:test set ratio 2:1. We also observe high variability of errors, with many outliers. The conclusions are consistent for both measures, INLINEFORM2 and INLINEFORM3 ."
],
"unanswerable": false,
"yes_no": false
},
{
"evidence": [
"In this section we briefly review typical estimation methods used in sentiment classification of Twitter data. In general, for time-ordered data, the estimation methods used are variants of cross-validation, or are derived from the methods used to analyze time series data. We examine the state-of-the-art of these estimation methods, pointing out their advantages and drawbacks.",
"The idea behind the INLINEFORM0 -fold cross-validation is to randomly shuffle the data and split it in INLINEFORM1 equally-sized folds. Each fold is a subset of the data randomly picked for testing. Models are trained on the INLINEFORM2 folds and their performance is estimated on the left-out fold. INLINEFORM3 -fold cross-validation has several practical advantages, such as an efficient use of all the data. However, it is also based on an assumption that the data is independent and identically distributed BIBREF9 which is often not true. For example, in time-ordered data, such as Twitter posts, the data are to some extent dependent due to the underlying temporal order of tweets. Therefore, using INLINEFORM4 -fold cross-validation means that one uses future information to predict past events, which might hinder the generalization ability of models.",
"There are several methods in the literature designed to cope with dependence between observations. The most common are sequential approaches typically used in time series forecasting tasks. Some variants of INLINEFORM0 -fold cross-validation which relax the independence assumption were also proposed. For time-ordered data, an estimation procedure is sequential when testing is always performed on the data subsequent to the training set. Typically, the data is split into two parts, where the first is used to train the model and the second is held out for testing. These approaches are also known in the literature as the out-of-sample methods BIBREF10 , BIBREF11 .",
"The problem of data dependency for cross-validation is addressed by McQuarrie and Tsai BIBREF17 . The modified cross-validation removes observations from the training set that are dependent with the test observations. The main limitation of this method is its inefficient use of the available data since many observations are removed, as pointed out in BIBREF18 . The method is also known as non-dependent cross-validation BIBREF11 ."
],
"extractive_spans": [],
"free_form_answer": "",
"highlighted_evidence": [
"In general, for time-ordered data, the estimation methods used are variants of cross-validation, or are derived from the methods used to analyze time series data. We examine the state-of-the-art of these estimation methods, pointing out their advantages and drawbacks.",
"The idea behind the INLINEFORM0 -fold cross-validation is to randomly shuffle the data and split it in INLINEFORM1 equally-sized folds. Each fold is a subset of the data randomly picked for testing. Models are trained on the INLINEFORM2 folds and their performance is estimated on the left-out fold. INLINEFORM3 -fold cross-validation has several practical advantages, such as an efficient use of all the data. However, it is also based on an assumption that the data is independent and identically distributed BIBREF9 which is often not true. For example, in time-ordered data, such as Twitter posts, the data are to some extent dependent due to the underlying temporal order of tweets. Therefore, using INLINEFORM4 -fold cross-validation means that one uses future information to predict past events, which might hinder the generalization ability of models.",
"There are several methods in the literature designed to cope with dependence between observations.",
"The problem of data dependency for cross-validation is addressed by McQuarrie and Tsai BIBREF17 . The modified cross-validation removes observations from the training set that are dependent with the test observations. The main limitation of this method is its inefficient use of the available data since many observations are removed, as pointed out in BIBREF18 . The method is also known as non-dependent cross-validation BIBREF11 ."
],
"unanswerable": false,
"yes_no": true
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"699dd8e4ef64cb48363e7c054b890cd9d9f750fe",
"7127f20d5dbab34320224df04a3fafce01ecc174"
],
"answer": [
{
"evidence": [
"Throughout our experiments we use only one training algorithm (subsection sec:data), and two performance measures (subsection sec:measures). During training, the performance of the trained model can be estimated only on the in-sample data. However, there are different estimation procedures which yield these approximations. In machine learning, a standard procedure is cross-validation, while for time-ordered data, sequential validation is typically used. In this study, we compare three variants of cross-validation and three variants of sequential validation (subsection sec:eval-proc). The goal is to find the in-sample estimation procedure that best approximates the out-of-sample gold standard. The error an estimation procedure makes is defined as the difference to the gold standard.",
"In sequential validation, a sample consists of the training set immediately followed by the test set. We vary the ratio of the training and test set sizes, and the number and distribution of samples taken from the in-set. The number of samples is 10 or 20, and they are distributed equidistantly or semi-equidistantly. In all variants, samples cover the whole in-set, but they are overlapping. See Figure FIGREF20 for illustration. We use the following abbreviations for sequential validations:",
"seq(9:1, 20, equi) - 9:1 training:test ratio, 20 equidistant samples,",
"seq(9:1, 10, equi) - 9:1 training:test ratio, 10 equidistant samples,",
"seq(2:1, 10, semi-equi) - 2:1 training:test ratio, 10 samples randomly selected out of 20 equidistant points.",
"The Twitter data shares some characteristics of time series and some of static data. A time series is an array of observations at regular or equidistant time points, and the observations are in general dependent on previous observations BIBREF0 . On the other hand, Twitter data is time-ordered, but the observations are short texts posted by Twitter users at any time and frequency. It can be assumed that original Twitter posts are not directly dependent on previous posts. However, there is a potential indirect dependence, demonstrated in important trends and events, through influential users and communities, or individual user's habits. These long-term topic drifts are typically not taken into account by the sentiment analysis models."
],
"extractive_spans": [
"seq(9:1, 20, equi) - 9:1 training:test ratio, 20 equidistant samples,\n\n",
"seq(9:1, 10, equi) - 9:1 training:test ratio, 10 equidistant samples,\n\n",
"seq(2:1, 10, semi-equi) - 2:1 training:test ratio, 10 samples randomly selected out of 20 equidistant points.\n\n"
],
"free_form_answer": "",
"highlighted_evidence": [
" In this study, we compare three variants of cross-validation and three variants of sequential validation (subsection sec:eval-proc). ",
" In all variants, samples cover the whole in-set, but they are overlapping. See Figure FIGREF20 for illustration. We use the following abbreviations for sequential validations:\n\nseq(9:1, 20, equi) - 9:1 training:test ratio, 20 equidistant samples,\n\nseq(9:1, 10, equi) - 9:1 training:test ratio, 10 equidistant samples,\n\nseq(2:1, 10, semi-equi) - 2:1 training:test ratio, 10 samples randomly selected out of 20 equidistant points.",
"equidistant "
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"Social media are becoming an increasingly important source of information about the public mood regarding issues such as elections, Brexit, stock market, etc. In this paper we focus on sentiment classification of Twitter data. Construction of sentiment classifiers is a standard text mining task, but here we address the question of how to properly evaluate them as there is no settled way to do so. Sentiment classes are ordered and unbalanced, and Twitter produces a stream of time-ordered data. The problem we address concerns the procedures used to obtain reliable estimates of performance measures, and whether the temporal ordering of the training and test data matters. We collected a large set of 1.5 million tweets in 13 European languages. We created 138 sentiment models and out-of-sample datasets, which are used as a gold standard for evaluations. The corresponding 138 in-sample datasets are used to empirically compare six different estimation procedures: three variants of cross-validation, and three variants of sequential validation (where test set always follows the training set). We find no significant difference between the best cross-validation and sequential validation. However, we observe that all cross-validation variants tend to overestimate the performance, while the sequential methods tend to underestimate it. Standard cross-validation with random selection of examples is significantly worse than the blocked cross-validation, and should not be used to evaluate classifiers in time-ordered data scenarios.",
"In sequential validation, a sample consists of the training set immediately followed by the test set. We vary the ratio of the training and test set sizes, and the number and distribution of samples taken from the in-set. The number of samples is 10 or 20, and they are distributed equidistantly or semi-equidistantly. In all variants, samples cover the whole in-set, but they are overlapping. See Figure FIGREF20 for illustration. We use the following abbreviations for sequential validations:",
"seq(9:1, 20, equi) - 9:1 training:test ratio, 20 equidistant samples,",
"seq(9:1, 10, equi) - 9:1 training:test ratio, 10 equidistant samples,",
"seq(2:1, 10, semi-equi) - 2:1 training:test ratio, 10 samples randomly selected out of 20 equidistant points."
],
"extractive_spans": [
"9:1 training:test ratio, 20 equidistant samples",
"9:1 training:test ratio, 10 equidistant samples",
"2:1 training:test ratio, 10 samples randomly selected out of 20 equidistant points"
],
"free_form_answer": "",
"highlighted_evidence": [
"We collected a large set of 1.5 million tweets in 13 European languages. We created 138 sentiment models and out-of-sample datasets, which are used as a gold standard for evaluations. The corresponding 138 in-sample datasets are used to empirically compare six different estimation procedures: three variants of cross-validation, and three variants of sequential validation (where test set always follows the training set).",
"In sequential validation, a sample consists of the training set immediately followed by the test set. We vary the ratio of the training and test set sizes, and the number and distribution of samples taken from the in-set. The number of samples is 10 or 20, and they are distributed equidistantly or semi-equidistantly. In all variants, samples cover the whole in-set, but they are overlapping. See Figure FIGREF20 for illustration. We use the following abbreviations for sequential validations:",
"seq(9:1, 20, equi) - 9:1 training:test ratio, 20 equidistant samples,\n\nseq(9:1, 10, equi) - 9:1 training:test ratio, 10 equidistant samples,\n\nseq(2:1, 10, semi-equi) - 2:1 training:test ratio, 10 samples randomly selected out of 20 equidistant points."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"annotation_id": [
"812a3f031d77a77a11911d03e41b036bdaf19388",
"86ec19cb0a83a3140852fdb2c130018f9cc2bcb6"
],
"answer": [
{
"evidence": [
"First, we apply 10-fold cross-validation where the training:test set ratio is always 9:1. Cross-validation is stratified when the fold partitioning is not completely random, but each fold has roughly the same class distribution. We also compare standard random selection of examples to the blocked form of cross-validation BIBREF16 , BIBREF11 , where each fold is a block of consecutive tweets. We use the following abbreviations for cross-validations:",
"xval(9:1, strat, block) - 10-fold, stratified, blocked;",
"xval(9:1, no-strat, block) - 10-fold, not stratified, blocked;",
"xval(9:1, strat, rand) - 10-fold, stratified, random selection of examples."
],
"extractive_spans": [
"10-fold, stratified, blocked;",
"10-fold, not stratified, blocked;",
"10-fold, stratified, random selection of examples."
],
"free_form_answer": "",
"highlighted_evidence": [
"First, we apply 10-fold cross-validation where the training:test set ratio is always 9:1. Cross-validation is stratified when the fold partitioning is not completely random, but each fold has roughly the same class distribution. We also compare standard random selection of examples to the blocked form of cross-validation BIBREF16 , BIBREF11 , where each fold is a block of consecutive tweets. We use the following abbreviations for cross-validations:\n\nxval(9:1, strat, block) - 10-fold, stratified, blocked;\n\nxval(9:1, no-strat, block) - 10-fold, not stratified, blocked;\n\nxval(9:1, strat, rand) - 10-fold, stratified, random selection of examples."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"First, we apply 10-fold cross-validation where the training:test set ratio is always 9:1. Cross-validation is stratified when the fold partitioning is not completely random, but each fold has roughly the same class distribution. We also compare standard random selection of examples to the blocked form of cross-validation BIBREF16 , BIBREF11 , where each fold is a block of consecutive tweets. We use the following abbreviations for cross-validations:",
"xval(9:1, strat, block) - 10-fold, stratified, blocked;",
"xval(9:1, no-strat, block) - 10-fold, not stratified, blocked;",
"xval(9:1, strat, rand) - 10-fold, stratified, random selection of examples."
],
"extractive_spans": [
"xval(9:1, strat, block) - 10-fold, stratified, blocked;\n\n",
"xval(9:1, no-strat, block) - 10-fold, not stratified, blocked;\n\n",
"xval(9:1, strat, rand) - 10-fold, stratified, random selection of examples.\n\n"
],
"free_form_answer": "",
"highlighted_evidence": [
" We use the following abbreviations for cross-validations:\n\nxval(9:1, strat, block) - 10-fold, stratified, blocked;\n\nxval(9:1, no-strat, block) - 10-fold, not stratified, blocked;\n\nxval(9:1, strat, rand) - 10-fold, stratified, random selection of examples.",
" We use the following abbreviations for cross-validations:\n\nxval(9:1, strat, block) - 10-fold, stratified, blocked;\n\nxval(9:1, no-strat, block) - 10-fold, not stratified, blocked;\n\nxval(9:1, strat, rand) - 10-fold, stratified, random selection of examples."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"66a5063ad70002ae20cb10b43b0b068f4b04d106",
"926991e6a59238e9838c2606427bc339e7fee4f7"
],
"answer": [
{
"evidence": [
"FLOAT SELECTED: Table 1. Sentiment label distribution of Twitter datasets in 13 languages. The last column is a qualitative assessment of the annotation quality, based on the levels of the self- and inter-annotator agreement.",
"Our experimental study is performed on a large collection of nearly 1.5 million Twitter posts, which are domain-free and in 13 different languages. A realistic scenario is emulated by partitioning the data into 138 datasets by language and time window. Each dataset is split into an in-sample (a training plus test set), where estimation procedures are applied to approximate the performance of a model, and an out-of-sample used to compute the gold standard. Our goal is to understand the ability of each estimation procedure to approximate the true error incurred by a given model on the out-of-sample data."
],
"extractive_spans": [],
"free_form_answer": "Albanian, Bulgarian, English, German, Hungarian, Polish, Portughese, Russian, Serbian, Slovak, Slovenian, Spanish, Swedish",
"highlighted_evidence": [
"FLOAT SELECTED: Table 1. Sentiment label distribution of Twitter datasets in 13 languages. The last column is a qualitative assessment of the annotation quality, based on the levels of the self- and inter-annotator agreement.",
"Our experimental study is performed on a large collection of nearly 1.5 million Twitter posts, which are domain-free and in 13 different languages."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"We collected a large corpus of nearly 1.5 million Twitter posts written in 13 European languages. This is, to the best of our knowledge, by far the largest set of sentiment labeled tweets publicly available. We engaged native speakers to label the tweets based on the sentiment expressed in them. The sentiment label has three possible values: negative, neutral or positive. It turned out that the human annotators perceived the values as ordered. The quality of annotations varies though, and is estimated from the self- and inter-annotator agreements. All the details about the datasets, the annotator agreements, and the ordering of sentiment values are in our previous study BIBREF22 . The sentiment distribution and quality of individual language datasets is in Table TABREF2 . The tweets in the datasets are ordered by tweet ids, which corresponds to ordering by the time of posting.",
"FLOAT SELECTED: Table 1. Sentiment label distribution of Twitter datasets in 13 languages. The last column is a qualitative assessment of the annotation quality, based on the levels of the self- and inter-annotator agreement."
],
"extractive_spans": [],
"free_form_answer": "Albanian\nBulgarian\nEnglish\nGerman\nHungarian\nPolish\nPortuguese\nRussian\nSer/Cro/Bos\nSlovak\nSlovenian\nSpanish\nSwedish",
"highlighted_evidence": [
"The sentiment distribution and quality of individual language datasets is in Table TABREF2 . The tweets in the datasets are ordered by tweet ids, which corresponds to ordering by the time of posting.",
"FLOAT SELECTED: Table 1. Sentiment label distribution of Twitter datasets in 13 languages. The last column is a qualitative assessment of the annotation quality, based on the levels of the self- and inter-annotator agreement."
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"a0b403873302db7cada39008f04d01155ef68f4f"
]
},
{
"annotation_id": [
"da88f3c54463680b09c01537bc94c2cb58a6a340",
"f641163983fcc64d3faf38ec129533fd072df44b"
],
"answer": [
{
"evidence": [
"The complexity of Twitter data raises some challenges on how to perform such estimations, as, to the best of our knowledge, there is currently no settled approach to this. Sentiment classes are typically ordered and unbalanced, and the data itself is time-ordered. Taking these properties into account is important for the selection of appropriate estimation procedures."
],
"extractive_spans": [
"time-ordered"
],
"free_form_answer": "",
"highlighted_evidence": [
"The complexity of Twitter data raises some challenges on how to perform such estimations, as, to the best of our knowledge, there is currently no settled approach to this. Sentiment classes are typically ordered and unbalanced, and the data itself is time-ordered. Taking these properties into account is important for the selection of appropriate estimation procedures."
],
"unanswerable": false,
"yes_no": null
},
{
"evidence": [
"In the paper we address the task of sentiment analysis of Twitter data. The task encompasses identification and categorization of opinions (e.g., negative, neutral, or positive) written in quasi-natural language used in Twitter posts. We focus on estimation procedures of the predictive performance of machine learning models used to address this task. Performance estimation procedures are key to understand the generalization ability of the models since they present approximations of how these models will behave on unseen data. In the particular case of sentiment analysis of Twitter data, high volumes of content are continuously being generated and there is no immediate feedback about the true class of instances. In this context, it is fundamental to adopt appropriate estimation procedures in order to get reliable estimates about the performance of the models."
],
"extractive_spans": [
"negative",
"neutral",
"positive"
],
"free_form_answer": "",
"highlighted_evidence": [
"In the paper we address the task of sentiment analysis of Twitter data. The task encompasses identification and categorization of opinions (e.g., negative, neutral, or positive) written in quasi-natural language used in Twitter posts. "
],
"unanswerable": false,
"yes_no": null
}
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
}
],
"nlp_background": [
"five",
"five",
"five",
"five",
"five"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no"
],
"question": [
"Do the authors offer any potential reasons why cross-validation variants tend to overestimate the performance, while the sequential methods tend to underestimate it?",
"Which three variants of sequential validation are examined?",
"Which three variants of cross-validation are examined?",
"Which European languages are targeted?",
"In what way are sentiment classes ordered?"
],
"question_id": [
"9133a85730c4090fe8b8d08eb3d9146efe7d7037",
"42279c3a202a93cfb4aef49212ccaf401a3f8761",
"9ca85242ebeeafa88a0246986aa760014f6094f2",
"8641156c4d67e143ebbabbd79860349242a11451",
"2a120f358f50c377b5b63fb32633223fa4ee2149"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"search_query": [
"twitter",
"twitter",
"twitter",
"twitter",
"twitter"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
]
} | {
"caption": [
"Table 1. Sentiment label distribution of Twitter datasets in 13 languages. The last column is a qualitative assessment of the annotation quality, based on the levels of the self- and inter-annotator agreement.",
"Fig 1. Creation of the estimation and gold standard data. Each labeled language dataset (Table 1) is partitioned into L in-sets and corresponding outsets. The in-sets always start at the first tweet and are progressively longer in multiples of 10,000 tweets. The corresponding out-set is the subsequent 10,000 consecutive tweets, or the remainder at the end of the language dataset.",
"Table 2. Gold standard performance results as measured by Alpha. The baseline, Alpha = 0, indicates classification by chance.",
"Table 3. Gold standard performance results as measured by F1. The baseline, F1¼ 0, indicates that all negative and positive examples are classified incorrectly.",
"Fig 2. Sampling of an in-set for sequential validation. A sample consists of a training set, immediately followed by a test set. We consider two scenarios: (A) The ratio of the training and test set is 9:1, and the sample is shifted along 10 or 20 equidistant points. (B) The training:test set ratio is 2:1 and the sample is positioned at 10 randomly selected points out of 20 equidistant points.",
"Table 4. Median errors, measured by Alpha, for individual language datasets and six estimation procedures.",
"Table 5. Median errors, measured by F1, for individual language datasets and six estimation procedures.",
"Fig 3. Box plots of errors of six estimation procedures for 13 language datasets. Errors are measured in terms of Alpha.",
"Fig 4. Box plots of errors of six estimation procedures aggregated over all language datasets. Errors are measured in terms of Alpha.",
"Fig 5. Box plots of errors of six estimation procedures aggregated over all language datasets. Errors are measured in terms of F1.",
"Fig 6. Proportion of relative errors, measured by Alpha, per estimation procedure and individual language dataset. Small errors (< 5%) are in blue, moderate ([5, 30]%) in green, and large errors (> 30%) in red.",
"Fig 7. Proportion of relative errors, measured by Alpha, per estimation procedure and aggregated over all 138 datasets. Small errors (< 5%) are in blue, moderate ([5, 30]%) in green, and large errors (> 30%) in red.",
"Fig 8. Proportion of relative errors, measured by F1, per estimation procedure and aggregated over all 138 datasets. Small errors (< 5%) are in blue, moderate ([5, 30]%) in green, and large errors (> 30%) in red.",
"Fig 9. Ranking of the six estimation procedures according to the Friedman-Nemenyi test. The average ranks are computed from absolute errors, measured by Alpha. The black bars connect ranks that are not significantly different at the 5% level.",
"Fig 10. Ranking of the six estimation procedures according to the Friedman-Nemenyi test. The average ranks are computed from absolute errors, measured by F1. The black bar connects ranks that are not significantly different at the 5% level.",
"Fig 11. Differences between pairs of estimation procedures according to the Wilcoxon signed-rank test. Compared are the average absolute errors, measured by Alpha (top) and F1(bottom). Thick solid lines denote significant differences at the 1% level, normal solid lines significant differences at the 5% level, and dashed lines insignificant differences. Arrows point from a procedure which incurs smaller errors to a procedure with larger errors."
],
"file": [
"5-Table1-1.png",
"7-Figure1-1.png",
"8-Table2-1.png",
"9-Table3-1.png",
"10-Figure2-1.png",
"11-Table4-1.png",
"11-Table5-1.png",
"12-Figure3-1.png",
"13-Figure4-1.png",
"14-Figure5-1.png",
"15-Figure6-1.png",
"16-Figure7-1.png",
"17-Figure8-1.png",
"17-Figure9-1.png",
"18-Figure10-1.png",
"18-Figure11-1.png"
]
} | [
"Which European languages are targeted?"
] | [
[
"1803.05160-Data and models-0",
"1803.05160-5-Table1-1.png",
"1803.05160-Introduction-5"
]
] | [
"Albanian\nBulgarian\nEnglish\nGerman\nHungarian\nPolish\nPortuguese\nRussian\nSer/Cro/Bos\nSlovak\nSlovenian\nSpanish\nSwedish"
] | 415 |
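
The record above describes three families of performance-estimation procedures for time-ordered Twitter data: random k-fold cross-validation, blocked cross-validation (each fold is a block of consecutive tweets), and sequential validation (the test set always follows its training set). The minimal Python sketch below illustrates the index-splitting logic of these procedures under stated assumptions; it is not the code used in the paper, and all function names, window sizes, and default parameters are illustrative.

```python
# Illustrative sketch only: index-splitting schemes for time-ordered data,
# mirroring the estimation procedures described in the evidence above.
# Function names and parameter values are assumptions, not the paper's code.
import random


def random_kfold(n, k=10, seed=0):
    """Standard k-fold CV: shuffle indices, then cut them into k folds.
    Ignores temporal order, so test tweets may precede training tweets."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    return [(sorted(set(idx) - set(f)), sorted(f)) for f in folds]


def blocked_kfold(n, k=10):
    """Blocked k-fold CV: each fold is a block of consecutive examples,
    which respects the temporal ordering inside each fold."""
    bounds = [round(i * n / k) for i in range(k + 1)]
    splits = []
    for i in range(k):
        test = list(range(bounds[i], bounds[i + 1]))
        train = list(range(0, bounds[i])) + list(range(bounds[i + 1], n))
        splits.append((train, test))
    return splits


def sequential_validation(n, window, train_frac=0.9, n_samples=10):
    """Sequential validation: each sample is a window of consecutive indices;
    the first train_frac of the window is the training set and the remainder
    is the test set, so testing is always on data subsequent to training.
    Windows start at n_samples equidistant points spanning the whole in-set
    (assumes n_samples >= 2 and window <= n)."""
    train_size = int(window * train_frac)
    last_start = n - window
    starts = [round(i * last_start / (n_samples - 1)) for i in range(n_samples)]
    return [(list(range(s, s + train_size)),
             list(range(s + train_size, s + window))) for s in starts]


if __name__ == "__main__":
    n = 100_000  # e.g. one in-set of 100k time-ordered tweets (illustrative)
    train, test = blocked_kfold(n, k=10)[0]
    print("blocked xval, fold 0:", len(train), "train /", len(test), "test")
    train, test = sequential_validation(n, window=50_000, n_samples=10)[0]
    print("seq(9:1, 10, equi), sample 0: train", train[0], "-", train[-1],
          "test", test[0], "-", test[-1])
```

Under these assumptions, seq(9:1, 10, equi) corresponds to sequential_validation with train_frac=0.9 and n_samples=10, while the 2:1 semi-equidistant variant would additionally pick 10 of 20 equidistant start points at random. The sketch uses only the standard library so that the splitting logic stays explicit; a library splitter could be substituted in practice.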