sections (list, 0-910 items) | pub_date (string, 722 classes) | doi (string, 0-570 chars) | references (list, 0-835 items) | formulas (list, 0-679 items) | title (string, 0-235 chars) | abstract (string, 0-7.77k chars) | authors (string, 0-11.9k chars) | figures (list, 0-270 items) | citation_data (string, 2-160k chars, nullable)
---|---|---|---|---|---|---|---|---|---
[
{
"figure_ref": [],
"heading": "Introduction",
"publication_ref": [
"b3",
"b37",
"b4",
"b11",
"b14",
"b34",
"b13",
"b25",
"b19",
"b42"
],
"table_ref": [],
"text": "Pre-trained language models (PLMs) have quickly become a staple in the field of natural language processing. With the growing demand for data for training these models, developing efficient finetuning methods has become critical. This is particularly relevant for many domains and languages where obtaining large amounts of labeled training data is difficult or downright impossible. In such low-resource settings, it becomes essential to effectively leverage and adapt PLMs while minimizing the need for extensive labeled data.\nData labeling is notoriously time-consuming and expensive, often hindering the development of sizable labeled datasets required for training highperformance models. Active learning (AL) (Cohn et al., 1996;Settles, 2009) has emerged as a potential solution to this challenge. In contrast to passive learning, in which the training set is sampled at random, AL encompasses a unique family of machine learning algorithms specifically designed to reduce labeling costs by reducing label complexity, i.e., the number of labels required by an acquisition model to achieve a certain level of performance (Dasgupta, 2011). With the advent of PLMs, AL research has pivoted towards investigating training regimes for PLMs, such as task-adaptive pre-training (TAPT; Gururangan et al., 2020), that could be combined with AL to further reduce the label complexity.\nWhile AL aims at directly minimizing the label complexity of learning, training efficiency can also be improved by reducing the parameter complexity of the model. This becomes more important as PLMs grow larger, and fine-tuning becomes increasingly challenging due to the sheer number of parameters involved. To address this issue, adapters (Houlsby et al., 2019) have been introduced as compact modules that can be incorporated between the layers of PLMs. Adapters enable considerable parameter-sharing, facilitating parameterefficient fine-tuning (PEFT) through modular learning (Pfeiffer et al., 2023). In this process, only the parameters of the adapters are updated during the tuning for a specific downstream task. Recent research (He et al., 2021;Li and Liang, 2021;Karimi Mahabadi et al., 2021) has revealed that some PEFT methods outperform full fine-tuning (FFT) in low-resource settings, potentially due to better stability and a decreased risk of overfitting. In contrast, FFT has been shown to exhibit instability in scenarios with limited data.\nDespite the promising results demonstrated by PEFT methods in low-resource settings, there is a striking gap in research on parameter-efficient training with respect to how PEFT interacts with AL. Given that the majority of real-world AL scenarios involve a restricted amount of data, PEFT methods emerge as strong candidates for AL acquisition models. However, there has been no exploration of AL in conjunction with adapters. Investigating this uncharted territory can further advance our understanding of AL and reveal novel strategies for optimizing performance in low-resource settings.\nIn this paper, we present an empirical study on the behavior of PEFT in low-resource settings for text classification tasks. We analyze PEFT with and without AL and compare it against FFT. While our results confirm that PEFT exhibits superior performance in low-resource setups compared to FFT, we show that the improved performance with PEFT extends to AL scenarios in terms of performance gains over passive learning. Furthermore, we analyze the efficacy of TAPT in conjunction with AL and PEFT. 
We find that TAPT is beneficial in AL scenarios for both PEFT and fully fine-tuned models, thus representing a viable technique for improving performance in low-resource settings. Finally, aiming to illuminate why PEFT and TAPT improve AL performance in low-resource settings, we analyze the properties of PEFT and FFT via forgetting dynamics (Toneva et al., 2019) and PLMs' instance-level representations. We find that AL methods choose fewer unforgettable and more moderately forgettable examples when combined with PEFT and TAPT, where forgetfulness indicates the model's tendency to learn and forget the gold label of a particular instance. Compared to FFT, we observe that PEFT yields representations in the early and middle layers of a model that are more similar to the representations of the base PLM. We hypothesize that this property mitigates the issue of forgetting the knowledge obtained during pretraining when fine-tuning for downstream tasks.\nIn summary, we show that in AL low-resource settings for text classification, (1) PEFT yields greater performance improvements compared to FFT and (2) TAPT enhances the overall classification performance of adapters and is well-suited for AL scenarios. We also show that (3) AL methods choose fewer unforgettable and more moderately forgettable examples with PEFT and that (4) PEFT produces instance-level representations of early and middle layers that are more similar to the base PLM than FFT. Our results uncover the intrica-cies of positive interactions between AL, PEFT, and TAPT, providing empirical justification for their combined use in low-resource settings."
},
{
"figure_ref": [],
"heading": "Related Work",
"publication_ref": [
"b7",
"b29",
"b38",
"b18",
"b35",
"b30",
"b45",
"b6",
"b10",
"b44",
"b43",
"b28",
"b17",
"b0",
"b23",
"b32",
"b25",
"b27",
"b13",
"b20"
],
"table_ref": [],
"text": "Our research involves combining AL with PLMs and investigating the use of PEFT techniques within the confines of low-resource settings.\nAL with PLMs. Until recently, the conventional approach for integrating PLMs with AL involved performing full fine-tuning with a fixed number of training epochs and training the model from scratch in each AL step (Ein-Dor et al., 2020;Margatina et al., 2021;Shelmanov et al., 2021;Karamcheti et al., 2021;Schröder et al., 2022). However, studies by Mosbach et al. (2021) and Zhang et al. (2021) revealed that fine-tuning in low-resource setups is prone to instability, particularly when training for only a few epochs. This instability, often sensitive to weight initialization and data ordering (Dodge et al., 2020), presents a significant challenge for AL, which frequently operates in lowresource settings. Recent research has looked into the impact of PLM training regimes on AL performance (Grießhaber et al., 2020;Yuan et al., 2020;Yu et al., 2022), suggesting that the choice of training regime is more critical than the choice of the AL method. Notably, TAPT has proven particularly effective in enhancing AL performance (Margatina et al., 2022;Jukić and Šnajder, 2023).\nAdapters in low-resource settings. Research on adapters in low-resource settings has primarily focused on areas such as cross-lingual transfer for low-resource languages (Ansell et al., 2021;Lee et al., 2022;Parović et al., 2022), where the emphasis lies on exploring diverse methods of fusing adapters. In monolingual settings with scarce data, adapters have been found to outperform full finetuning (Li and Liang, 2021;Mao et al., 2022). A study by He et al. (2021) demonstrated that adapterbased tuning exhibits enhanced stability and generalization capabilities by virtue of being less sensitive to learning rates than traditional fine-tuning methods. While incorporating task adaptation techniques, such as TAPT, has been shown to match or even improve performance over FFT in lowresource setups, Kim et al. (2021) noted an interesting caveat: the benefits of integrating TAPT with adapters tend to taper off as the amount of data increases.\nDespite the established effectiveness of adapters in setups with limited resources, their integration into AL frameworks -which frequently face analogous resource constraints -remains an untapped area of research. This gap is particularly notable given that AL's iterative learning process could significantly benefit from adapters' parameter efficiency and transferability, especially in scenarios where data scarcity or labeling costs are primary concerns."
},
{
"figure_ref": [],
"heading": "Preliminaries",
"publication_ref": [],
"table_ref": [],
"text": "We now describe the experimental setup, providing details on the datasets as well as the PEFT and AL methods used in our study."
},
{
"figure_ref": [],
"heading": "Datasets",
"publication_ref": [
"b31",
"b26",
"b39",
"b46"
],
"table_ref": [],
"text": "We employ four single-text classification tasks commonly used for AL evaluation: (1) the subjectivity dataset (SUBJ; Pang and Lee, 2004), designed to assess the subjectivity of a given text; (2) the question type classification dataset (TREC; Li and Roth, 2002), designed for categorizing questions according to their types; (3) the Stanford Sentiment Treebank (SST; Socher et al., 2013), which focuses on sentiment analysis; (4) AG's news classification dataset (AGN; Zhang et al., 2015), which classifies news articles into different categories. We provide the dataset statistics in the appendix for further reference (cf. Appendix Table 3)."
},
{
"figure_ref": [],
"heading": "PEFT methods",
"publication_ref": [
"b14",
"b25",
"b16",
"b27",
"b12",
"b5"
],
"table_ref": [],
"text": "We consider four prototypical PEFT techniques:\nAdapter incorporates trainable bottleneck layers after both the multi-head attention and feedforward block in each Transformer layer (Houlsby et al., 2019);\nPrefix-tuning adds new parameters in the multihead attention blocks within each Transformer layer (Li and Liang, 2021);\nLoRA (Low-rank adaptation) represents an additive method that incorporates trainable lowrank decomposition matrices into the layers of a pre-trained model (Hu et al., 2022);\nUniPELT combines multiple PEFT approaches, namely LoRA, Prefix-tuning, and Adapter, in a single unified setup (Mao et al., 2022). Each constituent is a submodule, and UniPELT employs gating mechanisms to activate them effectively.\nAll of the above PEFT methods fall under the category of lightweight fine-tuning. While prefixtuning does not technically qualify as an adapter, He et al. (2022) demonstrated that it shares formal similarities with adapters, with prefix-tuning performing weighted addition and an adapter employing unweighted addition. We refer to all four considered methods as adapters for terminological simplicity. We use BERT (Devlin et al., 2019) as the base PLM for every adapter. Additionally, we adhere to the hyperparameter settings for each adapter as recommended in the respective papers that introduced them (cf. Appendix A.2 for details)."
},
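As a concrete illustration of the bottleneck adapter design described in the section above, the following sketch implements a Houlsby-style module (down-projection, nonlinearity, up-projection, residual connection). It is a minimal, self-contained example written for this summary; the class name, default dimensions, and usage are our own assumptions rather than the paper's or AdapterHub's implementation.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Houlsby-style adapter: project down, apply a nonlinearity, project up, add residual."""

    def __init__(self, hidden_size: int = 768, reduction_factor: int = 16):
        super().__init__()
        bottleneck = hidden_size // reduction_factor  # e.g., 768 // 16 = 48
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.SiLU()  # swish nonlinearity, as in Appendix A.2

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # The residual connection leaves the base model's representation intact
        # when the adapter weights are close to zero.
        return hidden_states + self.up(self.act(self.down(hidden_states)))

# In the experiments, only adapter parameters would be trainable; the base
# Transformer weights stay frozen (requires_grad=False).
adapter = BottleneckAdapter()
x = torch.randn(2, 10, 768)   # (batch, sequence, hidden)
print(adapter(x).shape)       # torch.Size([2, 10, 768])
```

Such a module would conceptually sit after the multi-head attention and feed-forward blocks of each Transformer layer.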
{
"figure_ref": [],
"heading": "AL methods",
"publication_ref": [
"b24",
"b8",
"b40",
"b9"
],
"table_ref": [],
"text": "Our study considers five sampling strategies, including random selection (RND) as a passive learning baseline. The other four strategies are AL methods originating from different families, chosen for their robustness (ability to perform well across various tasks) and widespread usage in the field: Maximum entropy (ENT; Lewis and Gale, 1994) comes from the family of uncertainty strategies. The method queries instances where the model is least certain based on the maximum entropy criterion of the prediction output;\nMonte Carlo dropout (MC; Gal and Ghahramani, 2016) resembles ENT but utilizes the stochasticity of forward passes with dropout layers (Srivastava et al., 2014) to estimate the entropy for a given instance;\nCore-set (CS; Sener and Savarese, 2018) encourages instance diversity by using the learned representations of the acquisition model. This method aims to minimize the distance between an example in the unlabeled set and its closest counterpart in the labeled subset;\nDiscriminative active learning (DAL; Gissin and Shalev-Shwartz, 2019) frames AL as a binary classification of instances into those that are labeled and those that are not, with the objective of making the labeled and unlabeled sets indistinguishable."
},
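To make the two uncertainty-based strategies above concrete, the sketch below computes maximum-entropy scores from a single forward pass and an MC-dropout variant that averages predictive distributions over several stochastic passes. It is a simplified illustration for this summary: the function names, the assumption that the model returns logits, and the choice of taking the entropy of the mean distribution are our own, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def entropy(probs: torch.Tensor) -> torch.Tensor:
    """Shannon entropy of each row of a (num_examples, num_classes) tensor."""
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)

def max_entropy_scores(model, inputs) -> torch.Tensor:
    """ENT: uncertainty from a single deterministic forward pass."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(inputs), dim=-1)
    return entropy(probs)

def mc_dropout_scores(model, inputs, num_passes: int = 10) -> torch.Tensor:
    """MC: keep dropout active and average predictions over stochastic passes."""
    model.train()  # enables dropout layers at inference time
    with torch.no_grad():
        probs = torch.stack(
            [F.softmax(model(inputs), dim=-1) for _ in range(num_passes)]
        ).mean(dim=0)
    return entropy(probs)

# The instances with the highest scores would be sent for labeling, e.g.:
# query_ids = scores.topk(k=50).indices
```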
{
"figure_ref": [],
"heading": "Experimental setup",
"publication_ref": [
"b17"
],
"table_ref": [],
"text": "In AL runs, we select 50 new examples in each step of each AL experiment, using 100 examples for the warm start (randomly sampled labeled data to initiate the model). To probe different PEFT approaches with and without AL in low-resource settings, we establish a labeling budget limit of 1, 000 instances. To sidestep the need for a validation set in our experiments, which is typically unavailable in real-world AL scenarios, we adopt the Besov early stopping (Jukić and Šnajder, 2023). This method utilizes the smoothness of Transformer layers to decide at which epoch to stop training.\nIn the case of TAPT, we pre-train the base model on a masked language modeling task using unlabeled training data. For adapters, we only update the injected parameters while keeping the remaining parameters of the base model frozen. This approach aligns with the primary function of adapters, which is to utilize a common base model across diverse tasks. For every setting, we perform five runs using different random seeds. We report the average F 1 score at each sampling step (with and without AL for FFT and PEFT) to show the corresponding learning curve averaged over five runs. We provide details on training and hyperparameters in Appendix A.5."
},
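The acquisition protocol described above (a 100-example warm start, 50 new labels per step, and a 1,000-label budget) reduces to a simple loop. The sketch below is our own schematic of that loop; `train_model` and `acquire` are placeholders (a random strategy stands in for ENT, MC, CS, or DAL scoring), and the pool size is invented for the example.

```python
import random

WARM_START = 100   # randomly labeled seed set
STEP_SIZE = 50     # labels acquired per AL step
BUDGET = 1000      # total labeling budget

def train_model(labeled_ids):
    """Placeholder: (re)train the PEFT or FFT acquisition model on the labeled pool."""
    return {"num_labeled": len(labeled_ids)}

def acquire(model, unlabeled_ids, k):
    """Placeholder strategy: replace with ENT, MC, CS, or DAL scoring."""
    return random.sample(sorted(unlabeled_ids), k)

pool = set(range(20_000))                       # indices of unlabeled examples
labeled = set(random.sample(sorted(pool), WARM_START))
pool -= labeled

while len(labeled) < BUDGET:
    model = train_model(labeled)                # retrain at every step
    queried = acquire(model, pool, STEP_SIZE)   # pick 50 instances to label
    labeled.update(queried)
    pool.difference_update(queried)
    # evaluate F1 on the test set here to build the learning curve

print(len(labeled))  # 1000
```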
{
"figure_ref": [],
"heading": "Evaluation",
"publication_ref": [
"b35",
"b17"
],
"table_ref": [],
"text": "To evaluate the overall performance of an AL method, we employ the area under the performance curve (AUC). In each individual AL step with a specific quantity of labeled examples, we measure the classification performance in terms of the F 1 score. The overall AUC is calculated using the F 1 scores obtained at each step. We advocate for using AUC alongside the AL curves, as AUC serves as a suitable approximation of AL feasibility through a summary numeric score, as recommended in recent AL literature (Schröder et al., 2022;Jukić and Šnajder, 2023).\nAs our experiments involve different training regimes, we compare each AL sampling strategy S AL to passive learning S PL within the same training regime to isolate the effects of AL. The primary objective of AL is to improve label efficiency over passive learning. To test whether AL is successful, we calculate the relative improvement over passive learning (RIPL), which we define as follows:\nRIPL(S AL , S PL ) = AUC(S AL ) -AUC(S PL ) 1 -AUC(S PL )\nIntuitively, RIPL estimates the proportion of maximum possible improvement achievable by a given AL method compared to the passive learning baseline. A score of 1 indicates the maximum theoret-ical improvement, which would be tantamount to attaining an F 1 score of 1 in the initial sampling step and sustaining that score throughout all steps. Conversely, a negative score indicates that the AL method performs worse than passive learning."
},
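The RIPL computation follows directly from the definition above. The sketch below normalizes a trapezoidal area under the F1-vs-labels curve so that a constant F1 of 1 yields AUC = 1; the exact normalization used in the paper is not spelled out here, so this is one reasonable reading rather than the authors' implementation, and the example numbers are invented.

```python
import numpy as np

def normalized_auc(num_labels, f1_scores):
    """Trapezoidal area under the F1 curve, scaled to [0, 1]."""
    num_labels = np.asarray(num_labels, dtype=float)
    f1_scores = np.asarray(f1_scores, dtype=float)
    area = np.trapz(f1_scores, num_labels)
    return area / (num_labels[-1] - num_labels[0])

def ripl(auc_al: float, auc_pl: float) -> float:
    """Relative improvement over passive learning."""
    return (auc_al - auc_pl) / (1.0 - auc_pl)

steps = [100, 150, 200, 250, 300]
auc_pl = normalized_auc(steps, [0.80, 0.83, 0.85, 0.86, 0.87])
auc_al = normalized_auc(steps, [0.80, 0.86, 0.88, 0.89, 0.90])
print(round(ripl(auc_al, auc_pl), 3))  # positive => AL beats passive learning
```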
{
"figure_ref": [],
"heading": "Experiments",
"publication_ref": [],
"table_ref": [],
"text": "In this section, we first examine the performance of PEFT methods in comparison to FFT with passive learning and then proceed to analyze the application of PEFT in AL settings."
},
{
"figure_ref": [],
"heading": "PEFT vs. FFT",
"publication_ref": [
"b25",
"b27",
"b13"
],
"table_ref": [
"tab_0"
],
"text": "Previous research on the use of adapters in lowresource settings (Li and Liang, 2021;Mao et al., 2022;He et al., 2021) has demonstrated that adapters perform comparable to, and sometimes even better than FFT. However, these findings were based on comparing FFT to a single adapter variant on a full dataset or evaluating the performance at only a few discrete points.\nIn the first part of our experiments, we build upon these findings by conducting a more nuanced analysis. We generate detailed learning curves that facilitate the comparison of multiple adapters with FFT under the passive learning setup. Our comparison, summarized by the AUC metric in Table 1, reveals that UniPELT and Prefix-tuning consistently outperform FFT with a significant difference across all datasets used in our study. Conversely, the performance of Adapter and LoRA is mostly comparable to FFT, although there are cases where they either outperform or underperform FFT. In cases in which Adapter and LoRA perform better than FFT with significant differences, the degree of improvement is smaller than what is observed with UniPELT and Prefix-tuning.\nNext, we look into how the models' performance changes as the training set increases. To that end, we show the corresponding learning curves for adapters and FFT in Figure 1. The performance disparities between adapters and FFT become more apparent under conditions of extreme data scarcity (100-300 labeled instances). Notably, the greatest differences in performance occur at the initial step (only 100 labels). This highlights the promise of adapter-based methods in low-resource settings, particularly for Prefix-tuning and UniPELT. "
},
{
"figure_ref": [
"fig_0"
],
"heading": "PEFT with AL",
"publication_ref": [],
"table_ref": [
"tab_1"
],
"text": "Motivated by our initial findings on using PEFT under the passive learning setup, where PEFT exhibited promising properties in low-resource settings, we further explore the behavior of adapters in AL scenarios. We evaluate individual PEFT methods in AL scenarios with and without using TAPT in terms of gains over random sampling (passive learning) using the RIPL metric described in Section 3.5. entropy-based methods and TAPT when adapters are employed. Furthermore, we observe that without TAPT, adapters achieve larger gains over FFT. However, when TAPT is applied, FFT becomes comparable to PEFT, although Prefix-tuning and UniPELT still yield the greatest improvements, depending on the dataset and AL method used. In Figure 2, we select the adapters that achieved the best improvement according to Table 2 without TAPT and show their RIPL value compared against FFT as well as their corresponding version when TAPT is applied. We conjecture that TAPT reduces the performance gap between adapters and FFT by inducing FFT to emulate PEFT in aspects such as training dynamics and representation space -a hypothesis we explore in more detail in Section 5.\nWe further investigate the behavior of adapters with AL throughout the individual steps. Figure 3 shows the learning curves for corresponding adapter models with and without applying TAPT. Due to space constraints, we show the learning curves only for the SUBJ dataset, as similar trends occur for other datasets. Without TAPT, the performance of adapters is largely independent of the specific AL method used, where Prefix-tuning and UniPELT consistently outperform Adapter and LoRA across all AL steps. With TAPT, the differ- ences between AL and random sampling are more pronounced starting from the early steps, typically already with 200 instances. In contrast, the gap becomes more apparent only with 500 or more instances when TAPT is not employed."
},
{
"figure_ref": [],
"heading": "Analysis",
"publication_ref": [],
"table_ref": [],
"text": "In Section 4, we have demonstrated that PEFT exhibits larger gains than FFT when combined with AL in low-resource settings, which is also accompanied by superior performance with passive leaning.\nTo better understand why PEFT displays superior behavior with limited data, we now examine two specific properties of adapters and FFT models. First, we analyze the influence of TAPT on the forgetting dynamics during training. We continue with example-level representation analysis, where we investigate the representation similarity of PEFT and FFT to their respective base models."
},
{
"figure_ref": [],
"heading": "Forgetting dynamics",
"publication_ref": [
"b42",
"b18"
],
"table_ref": [],
"text": "We employ forgetting dynamics to compare PEFT and FFT's learning stability and their impact on AL data selection. The underlying hypothesis is that having fewer forgetting events in adapters would indicate a more stable and effective learning process. In utilizing forgetting dynamics, we draw upon the study by Toneva et al. (2019), focusing on the occurrence of forgetting events -cases where a specific training example transitions from correct to incorrect classification over the course of multiple learning epochs. More specifically, we divide the instances into three categories: (1) unforgettable instances, i.e., the ones that have never experienced a forgetting event during training, (2) instances that have encountered one or two forgetting events, labeled as moderately forgettable, and\n(3) instances subjected to three or more forgetting events, referred to as highly forgettable instances. As pointed out in the original study, moderately forgettable, ambiguous instances are more valuable for the learning model than unforgettable, easy instances. However, it is worth noting that AL is often hindered by too hard or impossible-to-learn examples (Karamcheti et al., 2021), which roughly correspond to the highly forgettable examples.\nFigure 4 shows the distribution of instances across the three categories of forgetting events for SUBJ and TREC datasets. We focus on these two datasets as examples of a simple binary classification task and a more complex multi-class classi-fication task, respectively. Specifically, we compare RND with MC, which achieves consistent performance improvements across all datasets. Our findings suggest that FFT tends to select a higher number of unforgettable instances and fewer moderately forgettable instances when compared to adapters. Interestingly, the adapters that perform best -Prefix-tuning and UniPELT -appear to favor moderately forgettable instances. However, when TAPT is applied, the discrepancies in forgetting profiles between FFT and the top two adapters, Prefix-tuning and UniPELT, seem to diminish. In contrast, TAPT amplifies the differences between FFT and the other two adapters, LoRA and Adapter, which typically show smaller improvements than Prefix-tuning and UniPELT. Given their superior AL performance, we hypothesize that the forgetting profiles of Prefix-tuning and UniPELT are more favorable compared to other adapters. Moreover, FFT with TAPT approaches the performance of the superior adapters and simultaneously develops a forgetting profile similar to theirs."
},
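Counting forgetting events as defined above amounts to tracking, per training example, the transitions from correct to incorrect predictions across epochs. The sketch below is our own minimal version of that bookkeeping and of the three-way bucketing into unforgettable, moderately forgettable, and highly forgettable instances; the toy matrix is invented.

```python
import numpy as np

def count_forgetting_events(correct_per_epoch: np.ndarray) -> np.ndarray:
    """correct_per_epoch: bool array of shape (num_epochs, num_examples).
    A forgetting event is a correct -> incorrect transition between consecutive epochs."""
    prev = correct_per_epoch[:-1].astype(int)
    curr = correct_per_epoch[1:].astype(int)
    return ((prev == 1) & (curr == 0)).sum(axis=0)

def forgetting_profile(events: np.ndarray) -> dict:
    return {
        "unforgettable": int((events == 0).sum()),                        # never forgotten
        "moderately_forgettable": int(((events >= 1) & (events <= 2)).sum()),
        "highly_forgettable": int((events >= 3).sum()),
    }

# Toy example: 5 epochs, 4 training examples.
correct = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [1, 1, 1, 1],
], dtype=bool)
print(forgetting_profile(count_forgetting_events(correct)))
```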
{
"figure_ref": [],
"heading": "Representation analysis",
"publication_ref": [
"b13",
"b25",
"b27",
"b41",
"b1",
"b21",
"b13"
],
"table_ref": [],
"text": "To bolster our findings, we explore the representations of adapters and FFT models. As suggested in previous research (He et al., 2021;Li and Liang, 2021;Mao et al., 2022), adapters often display greater stability in terms of loss, especially in scenarios with limited resources. Our aim is to examine the stability of their representations and their relationship with overall AL performance.\nWe draw inspiration from research by Stephenson et al. (2021) and Baldock et al. (2021), which suggests that different layers of networks specialize in different features -earlier layers tend to acquire more generalized knowledge, while the deeper layers are more focused on task-specific information. This leads us to a layerwise examination of similarity. To analyze the effect of PEFT and FFT on AL selection with respect to their layerwise similarity to the base model, we utilize centered kernel alignment (CKA) as a similarity measure between two sets of representations (Kornblith et al., 2019). It has been shown that PEFT methods result in representations closer to the base model at the token level (He et al., 2021). We extend the analysis to example-level representation to explore the behavior of models with AL. We opt for CKA as it is designed to be invariant to invertible linear transformation and still can measure meaningful similari- ties between representations of higher dimensions than the number of data points. This stands in contrast to other metrics, which frequently falter when dealing with high-dimensional representations.\nFor a more direct comparison between PEFT and FFT, we analyze the differences between their respective similarities to their base models. Specifically, we compute the difference CKA(adapter, base)-CKA(FFT, base) for a specific adapter or FFT and their base models. We hypothesize that superior PEFT performance with AL compared to FFT will be accompanied by a more similar early layer representation to the base model in PEFT. Figure 5 visualizes the layerwise difference in similarity between the base model and the adapter model and between the base model and the FFT model. We find that PEFT representations are more similar to the base model in the early and middle layers when compared to FFT. This holds for all AL methods, with differences more pronounced than in passive learning. Specifically, up to the eighth layer, representations are much more similar in adapters than in FFT models. In the final four layers, the difference in CKA scores between the adapter and FFT model is close to zero. Interestingly, the penultimate layer is more similar in the FFT model with respect to the base model.\nWhen fine-tuning on a downstream task, we believe that the increased stability of PEFT in earlier layers, relative to FFT, is instrumental in retaining the foundational knowledge from the PLM's pretraining phase. Conversely, PEFT exhibits more substantial transformations in the later, more taskspecific layers. This ensures the preservation of essential pre-trained knowledge while allowing for task-relevant flexibility. We speculate that this strategic balance in PEFT influences its propensity to select moderately forgettable instances when combined with AL, contributing to its enhanced performance over FFT. These instances are neither too trivial to provide no learning value, nor are they too complex to risk misinterpretation, thereby enhancing the effectiveness of learning."
},
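For the layerwise comparison above, the following sketch implements the linear variant of CKA (Kornblith et al., 2019) on example-level representation matrices; whether the linear or a kernel variant was used is not stated in this summary, so treating linear CKA as the measure is an assumption, and the synthetic data only illustrates the sign of the CKA difference.

```python
import numpy as np

def linear_cka(x: np.ndarray, y: np.ndarray) -> float:
    """Linear CKA between two representation matrices of shape (num_examples, dim).
    Invariant to orthogonal transformations and isotropic scaling."""
    x = x - x.mean(axis=0, keepdims=True)   # center features
    y = y - y.mean(axis=0, keepdims=True)
    hsic = np.linalg.norm(y.T @ x, ord="fro") ** 2
    norm_x = np.linalg.norm(x.T @ x, ord="fro")
    norm_y = np.linalg.norm(y.T @ y, ord="fro")
    return float(hsic / (norm_x * norm_y))

# Layerwise difference: positive values mean the adapter stays closer
# to the base PLM than FFT does at that layer.
rng = np.random.default_rng(0)
base = rng.normal(size=(200, 768))                    # base-model representations
adapter_repr = base + 0.1 * rng.normal(size=base.shape)
fft_repr = base + 0.5 * rng.normal(size=base.shape)
print(linear_cka(adapter_repr, base) - linear_cka(fft_repr, base))  # > 0
```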
{
"figure_ref": [],
"heading": "Conclusion",
"publication_ref": [],
"table_ref": [],
"text": "Our study has shed light on the advantages of parameter-efficient fine-tuning (PEFT) in lowresource settings, confirming its superiority over full fine-tuning (FFT) methods. Importantly, we have demonstrated that the integration of PEFT with active learning (AL) can offer substantial performance gains compared to passive learning, even in settings where labeled data is scarce. Furthermore, we highlighted the potential of task-adaptive pre-training (TAPT) to improve model performance further when used in conjunction with both PEFT and AL. We found that AL methods, in combination with PEFT, tend to select fewer unforgettable instances and more moderately forgettable examples. We further found that PEFT maintains the integrity of early and middle layer representations similar to the base model. We conjecture that this property mitigates forgetting during downstream task fine-tuning. These insights inform us of a possible underpinning mechanism that contributes to PEFT's superior performance and stability in low-resource settings. Overall, our work highlights the potential of PEFT and AL and establishes a foundation for developing increasingly efficient and cost-effective approaches for training models in low-resource settings."
},
{
"figure_ref": [],
"heading": "Limitations",
"publication_ref": [],
"table_ref": [],
"text": "While our study advances the understanding of PEFT and AL's interaction in low-resource settings and uncovers intriguing insights about the forgetting dynamics during fine-tuning, it has a number of limitations.\nTo begin with, we have focused on text classification tasks, which are but one aspect of the wide range of potential applications for PLMs. Different tasks such as question answering, translation, or summarization might exhibit different behaviors under the same conditions. Consequently, the observed advantages of PEFT in the context of AL might not necessarily translate to other NLP tasks.\nNext, our results are limited to the specific PLMs, AL strategies, and PEFT methods we have examined in this study. While we have attempted to be comprehensive in our experiments, the outcomes might vary with different models, strategies, or methods. For example, the effectiveness of AL combined with PEFT might differ if other AL strategies are employed. Similarly, different types of adapter architectures could potentially lead to different results.\nAlthough we found that PEFT methods produce instance-level representations of early and middle layers more similar to the base PLM than FFT, a comprehensive understanding of how and why this similarity leads to increased stability and performance in low-resource settings is still lacking. Our hypothesis about the role of early and middle layer stability in mitigating the issue of forgetting the knowledge obtained during pre-training needs further substantiation.\nFinally, it is important to acknowledge the complexity and multifaceted nature of forgetting dynamics. While our investigation provides valuable insights about the interaction of forgetting with PEFT and TAPT in AL scenarios, a deeper understanding of the mechanisms of forgetting in the context of large PLMs is needed. Particularly, it would be interesting to investigate whether the balance between unforgettable and moderately forgettable instances selected by the AL methods changes as the size of the model or the amount of available data changes.\nFuture work should aim to address these limitations and further explore the mechanisms behind the promising results obtained with the combination of PEFT and AL. This will contribute to a more comprehensive understanding of the interaction between AL and PLMs, and help refine strategies for efficient fine-tuning in low-resource settings."
},
{
"figure_ref": [],
"heading": "",
"publication_ref": [],
"table_ref": [],
"text": "Table 3: Dataset sizes by splits. Although we do not use a validation set (VAL) in our experiments, we report its size for completeness. For the AGN dataset, we performed uniform subsampling to ensure the computational feasibility of the experiments."
},
{
"figure_ref": [],
"heading": "A Reproducibility",
"publication_ref": [],
"table_ref": [],
"text": "A.1 Dataset statistics\nThe sizes of the datasets per split are provided in Table 3. Predominantly, the datasets encompass texts in English."
},
{
"figure_ref": [],
"heading": "A.2 Adapters",
"publication_ref": [
"b33"
],
"table_ref": [],
"text": "We use the implementation of adapters from AdapterHub (Pfeiffer et al., 2020).\nAdapter We set reduction factor to 16 and use swish function as nonlinearity.\nLoRA We include LoRA to the self-attention weights, intermediate, and output MLP weights of a model. We set the rank of the LoRA layer and the scaling factor α to 8. "
},
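To make the LoRA configuration above concrete (rank r = 8, scaling α = 8), here is a from-scratch sketch of a LoRA-augmented linear layer. It is our own illustration of the technique, not the AdapterHub implementation used in the experiments.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pre-trained linear layer plus a trainable low-rank update (B @ A)."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 8):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)       # freeze pre-trained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))  # zero init => no change at start
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.lora_a.T @ self.lora_b.T)

layer = LoRALinear(nn.Linear(768, 768), r=8, alpha=8)
out = layer(torch.randn(4, 768))
print(out.shape)  # torch.Size([4, 768])
```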
{
"figure_ref": [],
"heading": "Prefix-tuning",
"publication_ref": [],
"table_ref": [],
"text": ""
},
{
"figure_ref": [],
"heading": "A.4 Preprocessing",
"publication_ref": [],
"table_ref": [],
"text": "We undertake a few pre-processing steps: convert all tokens to lowercase, eliminate nonalphanumeric tokens, and limit the token sequence to a maximum length of 200."
},
{
"figure_ref": [],
"heading": "A.5 Hyperparameters",
"publication_ref": [],
"table_ref": [],
"text": "We use a fixed learning rate of 2 × 10 -5 for FFT and 10 -4 for adapters. Additionally, we set the gradient clipping to 1 during training. In our implementation of TAPT, we randomly mask 15% of tokens for both FFT models and adapters and train the model for 50 epochs with the learning rate set to 10 -5 ."
},
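The TAPT step above (masking 15% of tokens for masked language modeling on the unlabeled training data) can be sketched as follows. This is a simplified masking routine of our own: in practice one would typically rely on a library data collator, BERT-style MLM additionally replaces some selected positions with random or unchanged tokens (omitted here), and the token ids in the toy batch are assumed BERT ids.

```python
import torch

def mask_tokens(input_ids: torch.Tensor, mask_token_id: int,
                special_token_ids: set, mlm_probability: float = 0.15):
    """Randomly mask 15% of (non-special) tokens; labels are -100 elsewhere."""
    labels = input_ids.clone()
    probs = torch.full(input_ids.shape, mlm_probability)
    for tok in special_token_ids:                    # never mask [CLS], [SEP], [PAD]
        probs[input_ids == tok] = 0.0
    masked = torch.bernoulli(probs).bool()
    labels[~masked] = -100                           # loss computed only on masked positions
    input_ids = input_ids.clone()
    input_ids[masked] = mask_token_id
    return input_ids, labels

# Toy batch with assumed BERT-style ids: 101=[CLS], 102=[SEP], 0=[PAD], 103=[MASK].
batch = torch.tensor([[101, 2023, 2003, 1037, 7953, 102, 0, 0]])
masked_ids, labels = mask_tokens(batch, mask_token_id=103,
                                 special_token_ids={101, 102, 0})
print(masked_ids, labels, sep="\n")
```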
{
"figure_ref": [],
"heading": "A.6 Computing infrastructure",
"publication_ref": [],
"table_ref": [],
"text": "We conducted our experiments on 4× AMD Ryzen Threadripper 3970X 32-Core Processors and 4× NVIDIA GeForce RTX 3090 GPUs with 24GB of RAM. We used PyTorch version 1.9.0 and CUDA 11.4."
},
{
"figure_ref": [],
"heading": "A.7 Average runtime",
"publication_ref": [],
"table_ref": [],
"text": "We report the average runtime of experiments in Table 4."
},
{
"figure_ref": [],
"heading": "B Additional Results",
"publication_ref": [],
"table_ref": [],
"text": "We report the results that were omitted from the main part of the paper due to space constraints. Table 5 shows AUC scores for different combinations of AL methods and adapters, complementing the relative improvement scores as AUC represents absolute scores for each configuration. In Figure 6, we display the difference in similarities of adapters and FFT compared to their base models on the remaining three datasets. UniPELT .934 .943 .944 .943 .942 .943 .952 .953 .952 .952 TREC UniPELT .877 .894 .897 .887 .902 .896 .927 .931 .925 .921 SST UniPELT .836 .842 .843 .843 .837 .871 .882 .884 .882 .881 AGN UniPELT .875 .884 .887 .886 .887 .887 .908 .904 .900 .896 Table 5: AUC scores for AL methods with different adapters shown separately without TAPT and with TAPT. We include random sampling for comparison with AL methods. Values in bold denote the best result for a particular dataset within the same regime (with or without TAPT). "
}
] | 2023-10-23 | 10.18653/v1/2021.findings-emnlp.410 | [
{
"authors": "Alan Ansell; Maria Edoardo; Jonas Ponti; Sebastian Pfeiffer; Goran Ruder; Ivan Glavaš; Anna Vulić; Korhonen",
"journal": "Association for Computational Linguistics",
"ref_id": "b0",
"title": "MAD-G: Multilingual adapter generation for efficient cross-lingual transfer",
"year": "2021"
},
{
"authors": "Robert Baldock; Hartmut Maennel; Behnam Neyshabur",
"journal": "",
"ref_id": "b1",
"title": "Deep learning through the lens of example difficulty",
"year": "2021"
},
{
"authors": "Curran Associates; Inc ",
"journal": "",
"ref_id": "b2",
"title": "",
"year": ""
},
{
"authors": "Zoubin David A Cohn; Michael I Ghahramani; Jordan",
"journal": "Journal of artificial intelligence research",
"ref_id": "b3",
"title": "Active learning with statistical models",
"year": "1996"
},
{
"authors": "Sanjoy Dasgupta",
"journal": "",
"ref_id": "b4",
"title": "Two faces of active learning",
"year": "2009"
},
{
"authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova",
"journal": "Association for Computational Linguistics",
"ref_id": "b5",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"year": "2019"
},
{
"authors": "Jesse Dodge; Gabriel Ilharco; Roy Schwartz; Ali Farhadi; Hannaneh Hajishirzi; Noah Smith",
"journal": "",
"ref_id": "b6",
"title": "Fine-tuning pretrained language models: Weight initializations, data orders, and early stopping",
"year": "2020"
},
{
"authors": "Liat Ein-Dor; Alon Halfon; Ariel Gera; Eyal Shnarch; Lena Dankin; Leshem Choshen; Marina Danilevsky; Ranit Aharonov; Yoav Katz; Noam Slonim",
"journal": "Association for Computational Linguistics",
"ref_id": "b7",
"title": "Active Learning for BERT: An Empirical Study",
"year": "2020"
},
{
"authors": "Yarin Gal; Zoubin Ghahramani",
"journal": "PMLR",
"ref_id": "b8",
"title": "Dropout as a bayesian approximation: Representing model uncertainty in deep learning",
"year": "2016"
},
{
"authors": "Daniel Gissin; Shai Shalev-Shwartz",
"journal": "",
"ref_id": "b9",
"title": "Discriminative active learning",
"year": "2019"
},
{
"authors": "Daniel Grießhaber; Johannes Maucher; Ngoc Thang Vu",
"journal": "International Committee on Computational Linguistics",
"ref_id": "b10",
"title": "Fine-tuning BERT for low-resource natural language understanding via active learning",
"year": "2020"
},
{
"authors": "Suchin Gururangan; Ana Marasović; Swabha Swayamdipta; Kyle Lo; Iz Beltagy; Doug Downey; Noah A Smith",
"journal": "Association for Computational Linguistics",
"ref_id": "b11",
"title": "Don't stop pretraining: Adapt language models to domains and tasks",
"year": "2020"
},
{
"authors": "Junxian He; Chunting Zhou; Xuezhe Ma; Taylor Berg-Kirkpatrick; Graham Neubig",
"journal": "",
"ref_id": "b12",
"title": "Towards a unified view of parameter-efficient transfer learning",
"year": "2022"
},
{
"authors": "Ruidan He; Linlin Liu; Hai Ye; Qingyu Tan; Bosheng Ding; Liying Cheng; Jiawei Low; Lidong Bing; Luo Si",
"journal": "Association for Computational Linguistics",
"ref_id": "b13",
"title": "On the effectiveness of adapter-based tuning for pretrained language model adaptation",
"year": "2021"
},
{
"authors": "Neil Houlsby; Andrei Giurgiu; Stanislaw Jastrzebski; Bruna Morrone; Quentin De Laroussilhe; Andrea Gesmundo; Mona Attariyan; Sylvain Gelly",
"journal": "",
"ref_id": "b14",
"title": "Parameter-efficient transfer learning for NLP",
"year": "2019"
},
{
"authors": " Pmlr",
"journal": "",
"ref_id": "b15",
"title": "",
"year": ""
},
{
"authors": "J Edward; Yelong Hu; Phillip Shen; Zeyuan Wallis; Yuanzhi Allen-Zhu; Shean Li; Lu Wang; Weizhu Wang; Chen",
"journal": "",
"ref_id": "b16",
"title": "LoRA: Low-rank adaptation of large language models",
"year": "2022"
},
{
"authors": "Josip Jukić; Jan Šnajder",
"journal": "Association for Computational Linguistics",
"ref_id": "b17",
"title": "Smooth sailing: Improving active learning for pre-trained language models with representation smoothness analysis",
"year": "2023"
},
{
"authors": "Siddharth Karamcheti; Ranjay Krishna; Li Fei-Fei; Christopher Manning",
"journal": "Association for Computational Linguistics",
"ref_id": "b18",
"title": "Mind your outliers! investigating the negative impact of outliers on active learning for visual question answering",
"year": "2021"
},
{
"authors": "Rabeeh Karimi Mahabadi; Sebastian Ruder; Mostafa Dehghani; James Henderson",
"journal": "Association for Computational Linguistics",
"ref_id": "b19",
"title": "Parameterefficient multi-task fine-tuning for transformers via shared hypernetworks",
"year": "2021"
},
{
"authors": "Seungwon Kim; Alex Shum; Nathan Susanj; Jonathan Hilgart",
"journal": "Association for Computational Linguistics",
"ref_id": "b20",
"title": "Revisiting pretraining with adapters",
"year": "2021"
},
{
"authors": "Simon Kornblith; Mohammad Norouzi; Honglak Lee; Geoffrey Hinton",
"journal": "",
"ref_id": "b21",
"title": "Similarity of neural network representations revisited",
"year": "2019"
},
{
"authors": " Pmlr",
"journal": "",
"ref_id": "b22",
"title": "",
"year": ""
},
{
"authors": "Jaeseong Lee; Seung-Won Hwang; Taesup Kim",
"journal": "Association for Computational Linguistics",
"ref_id": "b23",
"title": "FAD-X: Fusing adapters for cross-lingual transfer to low-resource languages",
"year": "2022"
},
{
"authors": "D David; William A Lewis; Gale",
"journal": "Springer",
"ref_id": "b24",
"title": "A sequential algorithm for training text classifiers",
"year": "1994"
},
{
"authors": "Lisa Xiang; Percy Li; Liang",
"journal": "Association for Computational Linguistics",
"ref_id": "b25",
"title": "Prefix-tuning: Optimizing continuous prompts for generation",
"year": "2021"
},
{
"authors": "Xin Li; Dan Roth",
"journal": "",
"ref_id": "b26",
"title": "Learning question classifiers",
"year": "2002"
},
{
"authors": "Yuning Mao; Lambert Mathias; Rui Hou; Amjad Almahairi; Hao Ma; Jiawei Han; Scott Yih; Madian Khabsa",
"journal": "Association for Computational Linguistics",
"ref_id": "b27",
"title": "UniPELT: A unified framework for parameter-efficient language model tuning",
"year": "2022"
},
{
"authors": "Katerina Margatina; Loic Barrault; Nikolaos Aletras",
"journal": "",
"ref_id": "b28",
"title": "On the importance of effectively adapting pretrained language models for active learning",
"year": "2022"
},
{
"authors": "Katerina Margatina; Giorgos Vernikos; Loïc Barrault; Nikolaos Aletras",
"journal": "Association for Computational Linguistics",
"ref_id": "b29",
"title": "Active learning by acquiring contrastive examples",
"year": "2021"
},
{
"authors": "Marius Mosbach; Maksym Andriushchenko; Dietrich Klakow",
"journal": "",
"ref_id": "b30",
"title": "On the stability of fine-tuning BERT: Misconceptions, explanations, and strong baselines",
"year": "2021"
},
{
"authors": "Bo Pang; Lillian Lee",
"journal": "",
"ref_id": "b31",
"title": "A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts",
"year": "2004"
},
{
"authors": "Marinela Parović; Goran Glavaš; Ivan Vulić; Anna Korhonen",
"journal": "Association for Computational Linguistics",
"ref_id": "b32",
"title": "BAD-X: Bilingual adapters improve zero-shot cross-lingual transfer",
"year": "2022"
},
{
"authors": "Jonas Pfeiffer; Andreas Rücklé; Clifton Poth; Aishwarya Kamath; Ivan Vulić; Sebastian Ruder; Kyunghyun Cho; Iryna Gurevych",
"journal": "Association for Computational Linguistics",
"ref_id": "b33",
"title": "AdapterHub: A framework for adapting transformers",
"year": "2020"
},
{
"authors": "Jonas Pfeiffer; Sebastian Ruder; Ivan Vulić; Maria Edoardo; Ponti",
"journal": "",
"ref_id": "b34",
"title": "Modular deep learning",
"year": "2023"
},
{
"authors": "Christopher Schröder; Andreas Niekler; Martin Potthast",
"journal": "Association for Computational Linguistics",
"ref_id": "b35",
"title": "Revisiting uncertainty-based query strategies for active learning with transformers",
"year": "2022"
},
{
"authors": "Ozan Sener; Silvio Savarese",
"journal": "",
"ref_id": "b36",
"title": "Active learning for convolutional neural networks: A core-set approach",
"year": "2018"
},
{
"authors": "Burr Settles",
"journal": "",
"ref_id": "b37",
"title": "Active learning literature survey",
"year": "2009"
},
{
"authors": "Artem Shelmanov; Dmitri Puzyrev; Lyubov Kupriyanova; Denis Belyakov; Daniil Larionov; Nikita Khromov; Olga Kozlova; Ekaterina Artemova; V Dmitry; Alexander Dylov; Panchenko",
"journal": "Association for Computational Linguistics",
"ref_id": "b38",
"title": "Active learning for sequence tagging with deep pre-trained models and Bayesian uncertainty estimates",
"year": "2021"
},
{
"authors": "Richard Socher; John Bauer; Christopher D Manning; Andrew Y Ng",
"journal": "Association for Computational Linguistics",
"ref_id": "b39",
"title": "Parsing with compositional vector grammars",
"year": "2013"
},
{
"authors": "Nitish Srivastava; Geoffrey Hinton; Alex Krizhevsky; Ilya Sutskever; Ruslan Salakhutdinov",
"journal": "Journal of Machine Learning Research",
"ref_id": "b40",
"title": "Dropout: A simple way to prevent neural networks from overfitting",
"year": "2014"
},
{
"authors": "Cory Stephenson; Suchismita Padhy; Abhinav Ganesh; Yue Hui; Hanlin Tang; Sueyeon Chung",
"journal": "",
"ref_id": "b41",
"title": "On the geometry of generalization and memorization in deep neural networks",
"year": "2021"
},
{
"authors": "Mariya Toneva; Alessandro Sordoni; Remi Tachet Des Combes; Adam Trischler; Yoshua Bengio; Geoffrey J Gordon",
"journal": "",
"ref_id": "b42",
"title": "An empirical study of example forgetting during deep neural network learning",
"year": "2019"
},
{
"authors": "Yue Yu; Lingkai Kong; Jieyu Zhang; Rongzhi Zhang; Chao Zhang",
"journal": "Association for Computational Linguistics",
"ref_id": "b43",
"title": "AcTune: Uncertainty-based active self-training for active fine-tuning of pretrained language models",
"year": "2022"
},
{
"authors": "Michelle Yuan; Hsuan-Tien Lin; Jordan Boyd-Graber",
"journal": "Association for Computational Linguistics",
"ref_id": "b44",
"title": "Cold-start active learning through selfsupervised language modeling",
"year": "2020"
},
{
"authors": "Tianyi Zhang; Felix Wu; Arzoo Katiyar; Kilian Q Weinberger; Yoav Artzi",
"journal": "",
"ref_id": "b45",
"title": "Revisiting few-sample BERT fine-tuning",
"year": "2021"
},
{
"authors": "Xiang Zhang; Junbo Zhao; Yann Lecun",
"journal": "Advances in neural information processing systems",
"ref_id": "b46",
"title": "Character-level convolutional networks for text classification",
"year": "2015"
}
] | [
{
"formula_coordinates": [
4,
78.41,
688.88,
201.98,
25.55
],
"formula_id": "formula_0",
"formula_text": "RIPL(S AL , S PL ) = AUC(S AL ) -AUC(S PL ) 1 -AUC(S PL )"
}
] | Parameter-Efficient Language Model Tuning with Active Learning in Low-Resource Settings | Pre-trained language models (PLMs) have ignited a surge in demand for effective finetuning techniques, particularly in low-resource domains and languages. Active learning (AL), a set of algorithms designed to decrease labeling costs by minimizing label complexity, has shown promise in confronting the labeling bottleneck. In parallel, adapter modules designed for parameter-efficient fine-tuning (PEFT) have demonstrated notable potential in low-resource settings. However, the interplay between AL and adapter-based PEFT remains unexplored. We present an empirical study of PEFT behavior with AL in low-resource settings for text classification tasks. Our findings affirm the superiority of PEFT over full-fine tuning (FFT) in low-resource settings and demonstrate that this advantage persists in AL setups. We further examine the properties of PEFT and FFT through the lens of forgetting dynamics and instance-level representations, where we find that PEFT yields more stable representations of early and middle layers compared to FFT. Our research underscores the synergistic potential of AL and PEFT in low-resource settings, paving the way for advancements in efficient and effective fine-tuning. | Josip Jukić; Jan Šnajder Takelab | [
{
"figure_caption": "Figure 3 :3Figure3: AL learning curves compared with random sampling on the SUBJ dataset. The first and the second rows show learning curves for adapters without and with TAPT, respectively. The third row shows learning curves for FFT, without and with TAPT. The results are averaged over five runs, and the shaded bands denote the standard deviation. Best viewed on a computer screen.",
"figure_data": "",
"figure_id": "fig_0",
"figure_label": "3",
"figure_type": "figure"
},
{
"figure_caption": "Figure 4 :Figure 5 :45Figure4: Forgetting dynamics for random sampling (passive learning) and AL with MC without and with TAPT on SUBJ and TREC. The x-axis shows the number of instances in each of the forgetting categories: the \"never\" category representing unforgettable instances, moderately forgettable instances, and highly forgettable instances.",
"figure_data": "",
"figure_id": "fig_1",
"figure_label": "45",
"figure_type": "figure"
},
{
"figure_caption": "Learning curves under the passive learning setup with different PEFT methods and FFT. The results are averaged over five runs. The shaded bands denote the standard deviation. Best viewed on a computer screen. .847 † .847 † .875 † UniPELT .934 † .877 † .836 † .875 † The performance of adapters and FFT in a passive learning setup in terms of the AUC metric (based on F 1 score) averaged over five runs. Numbers in bold represent the best-performing variant for a particular dataset. The \" †\" symbol indicates when the mean AUC of an adapter is significantly different from the corresponding mean AUC of FFT (p < .05 using a two-sided Man-Whitney U test adjusted for family-wise error rate with the Holm-Bonferroni method).",
"figure_data": "F1 score0.86 0.88 0.90 0.92 0.94200400 # of labeled data 600 800 1000 FFT Adapter LoRA Prefix-tuning UniPELTF1 score0.4 0.6 0.8200400 # of labeled data 600 800 1000 FFT Adapter LoRA Prefix-tuning UniPELTF1 score0.60 0.65 0.70 0.75 0.80 0.85200400 # of labeled data 600 800 1000 FFT Adapter LoRA Prefix-tuning UniPELTF1 score0.90 0.75 0.80 0.85200400 # of labeled data 600 800 1000 FFT Adapter LoRA Prefix-tuning UniPELT(a) SUBJ(b) TREC(c) SST(d) AGNFigure 1: SUBJTRECSSTAGNadaptersAdapter LoRA Prefix-tuning .936 † FFT .926 .929 .928.804 .800 † .871 † .750 † .798 † .860 .810 .787 .860",
"figure_id": "tab_0",
"figure_label": "1",
"figure_type": "table"
},
{
"figure_caption": "",
"figure_data": "shows the results for different combinationsof AL methods and adapters, evaluated through theRIPL metric. We complement these results withabsolute values in terms of AUC (cf. Appendix Ta-ble 5). For FFT without TAPT, DAL achieved thehighest RIPL score on two datasets, while CS andMC topped the chart on one dataset each. When weincorporated TAPT, ENT yielded the best results onthree out of four datasets, with CS leading on one.Looking at adapters, the most successful AL meth-ods without TAPT vary, depending on the specificadapter and dataset in question. Interestingly, whenTAPT is applied, the best results for all adapters areobtained either by ENT or MC. We speculate thiscould be attributed to solid compatibility between",
"figure_id": "tab_1",
"figure_label": "2",
"figure_type": "table"
},
{
"figure_caption": "Improvement over passive learning in terms of the RIPL metric for four AL methods considered (ENT, MC, CS, and DAL) and for all combinations of adapters and datasets considered, shown separately without TAPT and with TAPT. Positive values indicate improvement over passive learning, while negative values indicate performance drops compared to passive learning. Values in bold denote the best result for a particular dataset across different adapters and AL methods within the same regime (with or without TAPT).",
"figure_data": "without TAPTwith TAPTENTMCCSDALENTMCCSDALFFT .050 .059.061 .077 .140 .140 .142 .126SUBJAdapter .112 .102 LoRA .127 .115 Prefix-tuning .095 .110.100 .092 .137 .151 .111 .067 .091 .081 .165 .160 .122 .100 .106 .111 .186 .181 .170 .151UniPELT .129 .153.131 .128 .159 .167 .163 .157FFT .011 .022.038 .034 .162 .180 .141 .159TRECAdapter .027 .069 LoRA .098 .065 Prefix-tuning .093 .105.137 .084 .124 .146 .079 .154 .048 .007 .254 .237 .243 .074 .068 .093 .246 .227 .205 .241UniPELT .138 .165.082 .200 .302 .334 .276 .236FFT .002 .011 -.039 .004 .080 .079 .075 .070SSTAdapter .015 .048 LoRA .001 .007 Prefix-tuning .049 .060.025 .002 .035 .034 .028 .008 .064 .031 .036 .022 .032 .014 .114 .031 .152 .143 .137 .126UniPELT .037 .043.040 .008 .082 .101 .083 .080FFT .014 .032.007 .092 .134 .021 .089 .017AGNAdapter .074 .046 LoRA .020 .025 Prefix-tuning .054 .023.015 .062 .115 .089 .077 .080 .067 .016 .028 .102 .071 .023 .040 .033 .035 .143 .098 .092UniPELT .074 .096.089 .095 .185 .151 .112 .081AdapterLoRAPrefix-tuningUniPELTF1 score0.88 0.90 0.92 0.94200 400 600 800 1000 # of labeled dataF1 score0.88 0.90 0.92 0.94200 400 600 800 1000 # of labeled dataF1 score0.95 0.92 0.93 0.94200 400 600 800 1000 # of labeled dataF1 score0.90 0.95 0.91 0.92 0.93 0.94# of labeled data 200 400 600 800 1000F1 score0.93 0.94 0.95200 400 600 800 1000 # of labeled data Adapter + TAPTF1 score0.92 0.93 0.94 0.95200 400 600 800 1000 # of labeled data LoRA + TAPTF1 score0.92 0.93 0.94 0.95 0.96200 400 600 800 1000 # of labeled data Prefix-tuning + TAPTF1 score0.92 0.93 0.94 0.95 0.96# of labeled data 200 400 600 800 1000 UniPELT + TAPTFFT0.960FFT + TAPTF1 score0.88 0.90 0.92 0.94F1 score0.940 0.945 0.950 0.955RND ENT MCCS DAL0.86200 400 600 800 1000 # of labeled data0.935# of labeled data 200 400 600 800 1000",
"figure_id": "tab_2",
"figure_label": "2",
"figure_type": "table"
}
] | [{"Category": "Methodological Basis", "Citation": "(Cohn et al., 1996)", "Explanation": "The cited work introduces the concept of active learning as a potential solution to the challenge of data labeling in low-resource settings, which the citing paper builds upon in its research on efficient finetuning methods for PLMs."}, {"Category": "Methodological Basis", "Citation": "(Settles, 2009)", "Explanation": "The cited work provides a more in-depth discussion of active learning and its potential benefits in reducing labeling costs, which the citing paper further explores in the context of PLMs and low-resource settings."}, {"Category": "Methodological Basis", "Citation": "(Dasgupta, 2011)", "Explanation": "The cited work highlights the importance of label complexity in active learning and the need to reduce it for efficient model training, which the citing paper addresses in its research on efficient finetuning methods for PLMs in low-resource settings."}, {"Category": "Methodological Basis", "Citation": "(Gururangan et al., 2020)", "Explanation": "The cited work introduces the concept of task-adaptive pre-training (TAPT), which the citing paper adopts in their research to further reduce the label complexity in AL research."}, {"Category": "Extension or Continuation", "Citation": "(Houlsby et al., 2019)", "Explanation": "The cited work introduces the concept of adapters as compact modules for fine-tuning PLMs, which the citing paper extends by discussing the use of adapters for parameter-efficient fine-tuning (PEFT) in AL research."}, {"Category": "Data Source", "Citation": "(Pfeiffer et al., 2023)", "Explanation": "The cited work discusses the use of modular learning in PEFT, which the citing paper references as a method for parameter-efficient fine-tuning in AL research."}, {"Category": "Supporting Evidence", "Citation": "(He et al., 2021;Li and Liang, 2021;Karimi Mahabadi et al., 2021)", "Explanation": "The cited works have revealed that PEFT methods outperform full fine-tuning in low-resource settings, which is a key finding that supports the claims made in the citing paper about the potential benefits of PEFT in this context."}, {"Category": "Supporting Evidence", "Citation": "(Toneva et al., 2019)", "Explanation": "The cited work by Toneva et al. (2019) provides a method for analyzing the properties of PEFT and FFT, which the citing paper uses to understand the reason for the improved performance of PEFT in low-resource AL scenarios."}, {"Category": "Methodological Basis", "Citation": "(Ein-Dor et al., 2020)", "Explanation": "The cited work by Ein-Dor et al. (2020) provides a conventional approach for integrating PLMs with AL, which the citing paper adopts in their research to investigate the use of PEFT techniques in low-resource settings."}, {"Category": "Methodological Basis", "Citation": "(Margatina et al., 2021)", "Explanation": "The cited work by Margatina et al. (2021) also contributes to the research on combining PLMs with AL, providing a method for fine-tuning the model in each AL step."}, {"Category": "Methodological Basis", "Citation": "(Shelmanov et al., 2021)", "Explanation": "The cited work by Shelmanov et al. (2021) further adds to the research on integrating PLMs with AL, by discussing the use of fine-tuning in each AL step."}, {"Category": "Methodological Basis", "Citation": "(Karamcheti et al., 2021)", "Explanation": "The cited work by Karamcheti et al. 
(2021) also contributes to the research on combining PLMs with AL, by exploring the use of fine-tuning in each AL step."}, {"Category": "Methodological Basis", "Citation": "(Schr\u00f6der et al., 2022)", "Explanation": "The cited work by Schr\u00f6der et al. (2022) further adds to the research on integrating PLMs with AL, by discussing the use of fine-tuning in each AL step."}, {"Category": "Extension or Continuation", "Citation": "(Mosbach et al., 2021)", "Explanation": "The cited work by Mosbach et al. (2021) extends the research on fine-tuning in low-resource settings, by discussing the instability of the process and its impact on AL."}, {"Category": "Extension or Continuation", "Citation": "(Zhang et al., 2021)", "Explanation": "The cited work by Zhang et al. (2021) also extends the research on fine-tuning in low-resource settings, by discussing the instability of the process and its impact on AL."}, {"Category": "Data Source", "Citation": "(Dodge et al., 2020)", "Explanation": "The cited work by Dodge et al. (2020) provides a data source for the research on fine-tuning in low-resource settings, by discussing the sensitivity of the process to weight initialization and data ordering."}, {"Category": "Supporting Evidence", "Citation": "(Grie\u00dfhaber et al., 2020)", "Explanation": "The cited work by Grie\u00dfhaber et al. (2020) provides evidence that the choice of training regime is more critical than the choice of the AL method in improving AL performance."}, {"Category": "Supporting Evidence", "Citation": "(Yuan et al., 2020)", "Explanation": "The cited work by Yuan et al. (2020) further supports the claim that the training regime is more important than the AL method in enhancing AL performance."}, {"Category": "Supporting Evidence", "Citation": "(Yu et al., 2022)", "Explanation": "The cited work by Yu et al. (2022) provides additional evidence that the training regime is a critical factor in improving AL performance."}, {"Category": "Extension or Continuation", "Citation": "(Margatina et al., 2022)", "Explanation": "The cited work by Margatina et al. (2022) extends the research on the effectiveness of TAPT in enhancing AL performance by providing further insights and data."}, {"Category": "Extension or Continuation", "Citation": "(Juki\u0107 and \u0160najder, 2023)", "Explanation": "The cited work by Juki\u0107 and \u0160najder (2023) continues the research on TAPT by exploring new dimensions and variables in enhancing AL performance."}, {"Category": "Supporting Evidence", "Citation": "(Ansell et al., 2021)", "Explanation": "The cited work by Ansell et al. (2021) provides evidence on the effectiveness of cross-lingual transfer for low-resource languages in the context of adapters."}, {"Category": "Supporting Evidence", "Citation": "(Lee et al., 2022)", "Explanation": "The cited work by Lee et al. (2022) further supports the research on the use of adapters in low-resource settings for cross-lingual transfer."}, {"Category": "Supporting Evidence", "Citation": "(Parovi\u0107 et al., 2022)", "Explanation": "The cited work by Parovi\u0107 et al. (2022) provides additional insights on the use of adapters in low-resource settings for cross-lingual transfer."}, {"Category": "Supporting Evidence", "Citation": "(Li and Liang, 2021)", "Explanation": "The cited work by Li and Liang (2021) supports the research on the use of adapters in monolingual settings with scarce data."}, {"Category": "Supporting Evidence", "Citation": "(Mao et al., 2022)", "Explanation": "The cited work by Mao et al. 
(2022) further supports the research on the use of adapters in monolingual settings with scarce data."}, {"Category": "Supporting Evidence", "Citation": "(He et al., 2021)", "Explanation": "The cited work by He et al. (2021) provides evidence on the stability and generalization capabilities of adapter-based tuning in monolingual settings with scarce data."}, {"Category": "Supporting Evidence", "Citation": "(Kim et al., 2021)", "Explanation": "The cited work by Kim et al. (2021) provides evidence that the benefits of integrating TAPT with adapters tend to taper off as the amount of data increases, which is relevant to the discussion in the citing paper about the limitations of using adapters in low-resource setups."}, {"Category": "Data Source", "Citation": "(Pang and Lee, 2004)", "Explanation": "The cited work by Pang and Lee serves as the data source for the SUBJ dataset used in the citing paper for the single-text classification task."}, {"Category": "Data Source", "Citation": "(Li and Roth, 2002)", "Explanation": "The cited work by Li and Roth is the data source for the TREC dataset used in the single-text classification task in the citing paper."}, {"Category": "Data Source", "Citation": "(Socher et al., 2013)", "Explanation": "The cited work by Socher et al. is the data source for the SST dataset used in the single-text classification task in the citing paper."}, {"Category": "Data Source", "Citation": "(Zhang et al., 2015)", "Explanation": "The cited work by Zhang et al. is the data source for the AGN dataset used in the single-text classification task in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Houlsby et al., 2019)", "Explanation": "The cited work introduces the concept of trainable bottleneck layers in Transformer layers, which the citing paper adopts in the development of the Adapter PEFT technique."}, {"Category": "Methodological Basis", "Citation": "(Li and Liang, 2021)", "Explanation": "The cited work presents the Prefix-tuning PEFT technique, which the citing paper incorporates in the development of the UniPELT method by adding new parameters in the multi-head attention blocks of Transformer layers."}, {"Category": "Methodological Basis", "Citation": "(Hu et al., 2022)", "Explanation": "The cited work introduces the LoRA PEFT technique, which the citing paper incorporates in the development of the UniPELT method by representing an additive method that incorporates trainable low-rank decomposition matrices in the layers of a pre-trained model."}, {"Category": "Methodological Basis", "Citation": "(Mao et al., 2022)", "Explanation": "The cited work presents the UniPELT PEFT method, which the citing paper considers as a combination of multiple PEFT approaches, including LoRA, Prefix-tuning, and Adapter, in a single unified setup with gating mechanisms for effective activation."}, {"Category": "Methodological Basis", "Citation": "(Devlin et al., 2019)", "Explanation": "The cited work by Devlin et al. 
(2019) provides the base PLM (BERT) that the citing paper uses as the foundation for their research on adapters."}, {"Category": "Methodological Basis", "Citation": "(Lewis and Gale, 1994)", "Explanation": "The cited work by Lewis and Gale (1994) provides the maximum entropy (ENT) strategy for sampling instances in the field of uncertainty strategies, which the citing paper adopts as a method for instance selection."}, {"Category": "Methodological Basis", "Citation": "(Gal and Ghahramani, 2016)", "Explanation": "The cited work by Gal and Ghahramani (2016) introduces the Monte Carlo dropout (MC) method for instance selection based on the stochasticity of forward passes with dropout layers, which the citing paper utilizes in the field of uncertainty strategies."}, {"Category": "Methodological Basis", "Citation": "(Srivastava et al., 2014)", "Explanation": "The cited work by Srivastava et al. (2014) presents the use of dropout layers in forward passes, which the citing paper references in the context of the Monte Carlo dropout (MC) method for instance selection in the field of uncertainty strategies."}, {"Category": "Methodological Basis", "Citation": "(Sener and Savarese, 2018)", "Explanation": "The cited work by Sener and Savarese (2018) introduces the core-set (CS) method for instance selection in the field of learning representations of the acquisition model, which the citing paper adopts as a method for encouraging instance diversity."}, {"Category": "Methodological Basis", "Citation": "(Schr\u00f6der et al., 2022)", "Explanation": "The cited work provides a recommendation for using AUC as a suitable approximation of AL feasibility, which the citing paper adopts in their research to evaluate the performance of AL methods."}, {"Category": "Methodological Basis", "Citation": "(Juki\u0107 and \u0160najder, 2023)", "Explanation": "The cited work also recommends using AUC as a summary numeric score in AL, which the citing paper adopts in their research to evaluate the performance of AL methods."}, {"Category": "Methodological Basis", "Citation": "(Li and Liang, 2021)", "Explanation": "The cited work by Li and Liang provides the basis for the use of adapters in low-resource settings in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Mao et al., 2022)", "Explanation": "The cited work by Mao et al. contributes to the understanding of the use of adapters in low-resource settings in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(He et al., 2021)", "Explanation": "The cited work by He et al. 
further builds upon the research on the use of adapters in low-resource settings in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "(Li and Liang, 2021)", "Explanation": "The citing paper extends the research on the use of adapters in low-resource settings by conducting a more nuanced analysis and comparing multiple adapter variants with FFT under the passive learning setup."}, {"Category": "Extension or Continuation", "Citation": "(Mao et al., 2022)", "Explanation": "The citing paper further extends the research on the use of adapters in low-resource settings by generating detailed learning curves to facilitate the comparison of multiple adapters with FFT in the passive learning setup."}, {"Category": "Extension or Continuation", "Citation": "(He et al., 2021)", "Explanation": "The citing paper continues the research on the use of adapters in low-resource settings by looking into how the models' performance changes as the training set increases."}, {"Category": "Methodological Basis", "Citation": "(Toneva et al., 2019)", "Explanation": "The cited work by Toneva et al. (2019) provides a methodology for analyzing forgetting dynamics in training examples, which the citing paper adopts to study the occurrence of forgetting events in adapters and their impact on AL data selection."}, {"Category": "Methodological Basis", "Citation": "(He et al., 2021)", "Explanation": "The cited work by He et al. (2021) provides the inspiration for the layerwise examination of similarity in the citing paper, which is used to analyze the effect of PEFT and FFT on AL selection with respect to their layerwise similarity to the base model."}, {"Category": "Methodological Basis", "Citation": "(Li and Liang, 2021)", "Explanation": "The cited work by Li and Liang (2021) is used to bolster the findings of the citing paper by exploring the stability of representations in scenarios with limited resources."}, {"Category": "Methodological Basis", "Citation": "(Mao et al., 2022)", "Explanation": "The cited work by Mao et al. (2022) contributes to the analysis of the stability of representations in the citing paper, providing insights into the use of adapters in scenarios with limited resources."}, {"Category": "Data Source", "Citation": "(Stephenson et al., 2021)", "Explanation": "The data source cited by Stephenson et al. (2021) is used to draw inspiration for the layerwise examination of similarity in the citing paper, which is conducted to analyze the effect of PEFT and FFT on AL selection with respect to their layerwise similarity to the base model."}, {"Category": "Data Source", "Citation": "(Baldock et al., 2021)", "Explanation": "The data source cited by Baldock et al. (2021) is used in the citing paper to support the claim that different layers of networks specialize in different features, with earlier layers acquiring more generalized knowledge and deeper layers focusing on task-specific information."}, {"Category": "Methodological Basis", "Citation": "(Pfeiffer et al., 2020)", "Explanation": "The cited work provides the implementation of adapters used in the citing paper, which serves as a methodological basis for the research conducted in the citing paper."}] |
[
{
"figure_ref": [],
"heading": "Introduction",
"publication_ref": [
"b12",
"b36",
"b15",
"b34",
"b2",
"b3",
"b8"
],
"table_ref": [],
"text": "Document comprehension involves interpreting words that can alter the meaning of the text based on their placement. For example, in the sentence \"the movie was boring, but I was surprised by the ending\", the word but contrasts ideas. While traditional vector-based text representation methods lack the ability to capture the structural information of a text effectively, graph-based representation strategies explicitly seek to model relationships among different text elements (nodes) through associations between pairs of them (edges), capturing dependencies between text units and leveraging language structure.\nWhile such ideas have a long history (Hassan and Banea, 2006;Mihalcea and Tarau, 2004, inter alia), the rise of Graph Neural Network (GNN) models in recent years has made it particularly appealing to convert even unstructured data into graphs. The model can then capture relevant patterns while accounting for dependencies between graph nodes via message passing.\nFor text classification, numerous graph-based text representation schemes have been proposed and demonstrated the efficacy of graphs. However, most of them were designed for particular domainspecific tasks and validated only on short documents using a restricted set of model architectures (Yao et al., 2019;Huang et al., 2022;Wang et al., 2023). Moreover, some of these proposals predate the introduction of GNNs and were validated using graph mining or classical machine learning models, making it challenging to determine the applicability and effectiveness of graphs in broader settings (Castillo et al., 2015(Castillo et al., , 2017)).\nText classification increasingly extends beyond simple topic classification tasks, encompassing real-world challenges such as noisy texts, imbalanced labels, and much longer documents consisting of more than just a few paragraphs. Hence, a comprehensive assessment of the merits and drawbacks of different graph representations and methods in more diverse scenarios is needed.\nThis work presents a thorough empirical investigation of previously proposed graph-based text representation methods, evaluating how graphs generalize across diverse text classification tasks. We analyze their effectiveness with several GNN-based architectures and setups across five prominent text classification datasets from a broad range of domains. Unlike previous work (Galke and Scherp, 2022), our study considers diverse datasets with both short and longer documents, as well as unbalanced classification scenarios. Additionally, we evaluate the efficacy vs. efficiency of the proposals, an aspect usually neglected in previous studies.\nFor each graph method, we conducted extensive experiments using 3 types of convolutional layers as different message-passing strategies for 12 GNN architecture variants, each using one out of 4 pretrained word embedding techniques as node feature vector initialization. This allows us to shed light on what are the most successful choices of GNN architectures for learning from them.\nOur study finds that graph methods are a competitive and particularly efficient choice for solving classification tasks. This is because GNNs can capture both local and global dependencies between structural components. Therefore, they can capture rich semantic relationships and dependencies that are important for the task. 
Additionally, unlike many sequence models, GNNs can naturally handle variable-length inputs by operating on the graph structure, without any need to map every data sample to a fixed-sized vector or truncate them at a fixed maximum sequence length. While longer documents can be particularly challenging, our study finds that GNN methods hold particular promise for longer documents, an aspect unexplored in prior research. However, the graph's effectiveness depends on the textual input features and domain. Based on our experimental results, we provide a discussion regarding what graph construction and GNN architecture choice is preferable depending on the task to be solved. Surprisingly, although Transformer-based Language Models (LMs) yield outstanding results for the considered tasks, they often have difficulties converging when dealing with short texts.\nThe study is structured around three research questions, which are discussed in Section 4:\n1. How does the choice of GNN architecture and setup affect the classification effectiveness? 2. What graph construction method is most effective for text classification? 3. Can graphs compete with state-of-the-art sequence classification models?\n2 Prior Work on Graphs in NLP Previous graph-based text representation methods can be categorized into three categories based on the nature of the underlying graph structure.\nEarly graph constructions primarily relied on cooccurrence and textual statistical patterns. Subsequently, more advanced representations integrated linguistic features as graph components. Recently, specialized graph constructions have evolved, entailing intricate structures that encompass the uti-lization of graph neural networks as essential components within the learning framework."
},
{
"figure_ref": [],
"heading": "Early Graph Constructions",
"publication_ref": [
"b25",
"b12",
"b32",
"b2"
],
"table_ref": [],
"text": "For graph-based text representation, a simple approach is to consider word co-occurrence within a fixed-size sliding window: Words are modeled as nodes, and two nodes are connected if the respective words co-occur within a window of at most N words. Mihalcea and Tarau (2004) used such co-occurrence graphs for N ∈ {2, . . . , 10} as a ranking model for keyword extraction. They found smaller N to be preferable, as the connection between words further apart is often weaker. Hassan and Banea (2006) used the same approach with N = 2 along with TextRank to replace term frequency weights, and then conducted text classification with classic machine learning models. In most of their experiments, this scheme outperformed using TF-IDF vectors. Rousseau et al. (2015) also used a fixed-size sliding window graph (calling it graph-of-words). They cast text classification as a classification problem by applying graph mining to obtain subgraph features to train a classifier.\nSequence graphs are another simple scheme with edges reflecting the original order of words in the text (Castillo et al., 2015). The authors used the number of times the corresponding two words appear consecutively in the text as edge weights."
},
{
"figure_ref": [],
"heading": "Linguistic Features as Graphs",
"publication_ref": [
"b25",
"b0",
"b7",
"b37"
],
"table_ref": [],
"text": "Other graph construction methods have been proposed. Mihalcea and Tarau (2004) highlighted that multiple text units and characteristics can be considered as vertices depending on the application at hand. They invoked application-specific criteria to define edges, such as lexical or semantic relations. To this end, they also proposed a similarityweighted graph for sentence extraction. Every node represents an entire sentence, while edges are defined by measuring their content overlap as the number of shared tokens. Although this scheme can be applied in other tasks (text classification or summarization), it tends to yield fairly densely connected graphs, making it difficult to extract local patterns and discern the content of the text.\nGiven that traditional work in linguistics and computational linguistics often considers tree and graph-structured formalisms as the principal way of analyzing individual sentences, these may also serve as building blocks for document-level representations (Arora et al., 2009;Joshi and Rosé, 2009, inter alia). For instance, a neural parsing model (Dozat and Manning, 2016;Yuan et al., 2021) can infer word dependencies to obtain syntactic dependency trees. However, the overall graph representation becomes rather sparse, as nodes share edges with only a limited number of other units."
},
{
"figure_ref": [],
"heading": "Specialized Graph Constructions",
"publication_ref": [
"b36",
"b23",
"b31",
"b6",
"b30",
"b22",
"b4",
"b39",
"b28",
"b11",
"b15",
"b34",
"b39",
"b8"
],
"table_ref": [],
"text": "Text Graph Convolutional Network (TextGCN; Yao et al. 2019) was one of the first approaches to include a Graph Convolutional Neural Network (GCN) as a classification method. TextGCN proposes a heterogeneous graph construction using words and documents as nodes. However, this means that new documents cannot be processed without re-training. It employs Point-wise Mutual Information (PMI) similarity as an edge weighting function for word pairs and TF-IDF for wordin-document edges. Other proposals also suggested integrating heterogeneous contextual information such as TensorGCN (Liu et al., 2020), Het-eGCN (Ragesh et al., 2021), and HyperGAT (Ding et al., 2020). However, such approaches are fairly resource-intensive.\nTextLevelGCN (Huang et al., 2019a) creates one graph per input text. The proposal defines every word as a node, which can be duplicated if a word appears more than once in a text. Edges are defined for word nodes within a sliding window using PMI edge weights. Despite promising results, the experiments were limited to very short documents.\nGraphIE (Qian et al., 2019) uses a homogeneous scheme based on co-reference, integrating a GCN with an RNN encoder-decoder architecture for tagging and information extraction tasks. Nodes can be defined as words or entire sentences, connected via co-reference and identical mention edges, to account for non-local and non-sequential ties. A downside of this is that prior domain knowledge is required to establish the edge types.\nSome studies have brought back the classic cooccurrence graph construction methods, but using a different message passing function based on Gated Recurrent Units (Li et al., 2015;Cho et al., 2014) for updating node feature vectors (Zhang et al., 2020).\nMPAD (Nikolentzos et al., 2020) included an extra master node connected to every other node in the graph. Therefore, the network is densely connected, and the structural information is vague during message passing. Text-MGNN (Gu et al., 2023) proposes a heterogeneous graph construction, introducing topic nodes to enhance class-aware representation learning. However, it has the same limitations as TextGCN.\nAlternatively, two inductive models have reported good results on traditional text classification benchmarks, but the improvement is mostly due to the combination of GNN and BERT models (Huang et al., 2022;Wang et al., 2023). Thus, these strategies are resource-intensive, hard to apply to long documents, and beyond the scope of our study.\nSince Zhang et al. (2020) outperform Textlevel-GCN despite using the same graph construction, it is clear that the graph construction method and the way patterns are extracted from it are closely related. Hence, an in-depth study analyzing multiple factors in a controlled setting is necessary.\nIn terms of broader empirical comparisons, one previous study also conducted a comparative analysis of different approaches for text classification to evaluate the necessity of text-graphs. The authors compared multiple Bag of Words (BoW), sequence, and graph models (Galke and Scherp, 2022), arguing that a multi-layer perceptron enhanced with BoW is a strong baseline for text classification. Nevertheless, the authors limited their analysis to standard data collections with only short texts. In contrast, with the aim to study how graphs perform in more challenging scenarios, our study considers a broader range of domains including much longer documents and unbalanced classification contexts. 
In addition, we assess the balance between the effectiveness and efficiency of the proposals, a facet typically overlooked in prior research."
},
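Several of the schemes above (TextGCN, TextLevelGCN) weight word-word edges by point-wise mutual information computed over sliding windows. The sketch below illustrates that recipe in general terms; it is our own illustration with invented helper names, not code from any of the cited systems.

```python
# Illustrative PMI edge weighting over sliding windows (our own sketch,
# not the implementation of TextGCN or TextLevelGCN).
import math
from collections import Counter
from itertools import combinations

def pmi_edge_weights(tokenized_docs, window=10):
    word_count, pair_count, n_windows = Counter(), Counter(), 0
    for tokens in tokenized_docs:
        for i in range(max(1, len(tokens) - window + 1)):
            win = set(tokens[i:i + window])   # distinct words in this window
            n_windows += 1
            word_count.update(win)
            pair_count.update(frozenset(p) for p in combinations(sorted(win), 2))
    weights = {}
    for pair, c_ij in pair_count.items():
        w_i, w_j = tuple(pair)
        # PMI = log( p(i,j) / (p(i) p(j)) ) with window-based probabilities
        pmi = math.log(c_ij * n_windows / (word_count[w_i] * word_count[w_j]))
        if pmi > 0:  # only positive-PMI edges are typically kept
            weights[(w_i, w_j)] = pmi
    return weights

docs = [["movie", "boring", "ending", "surprised"],
        ["great", "movie", "great", "ending"]]
print(pmi_edge_weights(docs, window=3))
```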
{
"figure_ref": [],
"heading": "Comparing Graph-Based Text Representations",
"publication_ref": [],
"table_ref": [],
"text": "To study the merits of prominent graph-based text representation strategies, we conducted comprehensive experiments on five well-known text classification datasets. For each task, we compare different graph construction schemes using a variety of GNN models to separate the effect of the graph construction strategy from that of the message-passing technique in the model."
},
{
"figure_ref": [],
"heading": "Methods",
"publication_ref": [],
"table_ref": [],
"text": ""
},
{
"figure_ref": [
"fig_1"
],
"heading": "Graph-Based Text Representation",
"publication_ref": [
"b12",
"b2",
"b36"
],
"table_ref": [],
"text": "Among the studied techniques, there are some graph construction methods that follow an intuitive construction process. They are based solely on sim-ple relationships between pairs of nodes and only consider basic co-occurrence statistics if needed. Thus, they do not require a deep understanding of the semantic structure. In the following, we refer to these sorts of networks as Intuitive Graphs.\nFigure 1 illustrates how they work.\nWindow-based: Following Hassan and Banea (2006), given an input text, if a term has not been previously seen, then a node is added to the graph, and an undirected edge is induced between two nodes if they are two consecutive terms in the text.\nWindow-based extended: As for the above construction, but with a window size of three. With this, each word will ultimately be tied to the two previous terms and the two subsequent ones.\nSequence-weighted: This strategy (Castillo et al., 2015) defines a directed graph with nodes for words and edges that represent that the two corresponding lexical units appear together in the text sequence and follow the order in which they appear. Additional edge weights capture the number of times that two words appear together, which is intended to reflect the strength of their relationship.\nSequence simplified: Inspired by the above, a simplified version omits the edge weights. Thus, the effect of the edge importance function over the pure graph structure can be studied in isolation.\nA more sophisticated graph-based text representation strategy requiring a more elaborate graph construction process is also considered. TextLevelGCN: Every word appearing in a text is treated as a node, and edges are defined between adjacent words in a fixed-size window. Unlike the above Intuitive Graphs, TextLevelGCN (Huang et al., 2019b) considers each word token occurrence as a separate node, i.e., it allows multiple nodes if the corresponding term occurs more than once in the text. Therefore, the specific in-context meaning can be determined by the influence of weighted information from its neighbors. The authors further employed PMI as an edge weighting function for the word associations, as in Yao et al. (2019)."
},
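To make the Intuitive Graph constructions concrete, the sketch below builds the window-based graph (N=2, or N=3 for the extended variant) and the sequence-weighted graph with networkx. It is a minimal illustration of the definitions above, not the authors' released code, and the helper names are our own.

```python
# Minimal sketch of the Intuitive Graph constructions described above
# (window-based and sequence-weighted); illustration only.
import networkx as nx

def window_graph(tokens, window=2):
    """Undirected graph: one node per distinct term; edges connect terms that
    co-occur within the sliding window (window=2 means consecutive terms)."""
    g = nx.Graph()
    g.add_nodes_from(set(tokens))
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + window, len(tokens))):
            if tokens[j] != w:
                g.add_edge(w, tokens[j])
    return g

def sequence_weighted_graph(tokens):
    """Directed graph following word order; the edge weight counts how often
    the two words appear consecutively (omit the weight for Sequence simplified)."""
    g = nx.DiGraph()
    g.add_nodes_from(set(tokens))
    for a, b in zip(tokens, tokens[1:]):
        if g.has_edge(a, b):
            g[a][b]["weight"] += 1
        else:
            g.add_edge(a, b, weight=1)
    return g

tokens = "the movie was boring but i was surprised by the ending".split()
print(window_graph(tokens, window=3).number_of_edges())
print(sequence_weighted_graph(tokens).edges(data=True))
```

A per-occurrence (TextLevelGCN-style) variant would instead create one node per token position and attach PMI weights to the window edges, as in the sketch given earlier.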
{
"figure_ref": [],
"heading": "Mainstream Text Representations",
"publication_ref": [
"b5",
"b1"
],
"table_ref": [],
"text": "We further considered several mainstream representation schemes, allowing us to better understand how the graph approaches fare in comparison. Bag of Words (BoW): Given a vocabulary of known words, this strategy uses vectors of term frequencies, discarding any information about the order of words in the text.\nTransformer-based LMs: We also include BERT (Devlin et al., 2018) and Longformer (Beltagy et al., 2020) Transformers as powerful masked language model-based encoders. While BERT has a maximum input length of 512 tokens, the Longformer extends this limit via a modified attention mechanism that scales linearly with sequence length. The latter trait is desirable when comparing LMs to graphs, which use the complete source text. Please note that Transformer-based LMs are included merely as an informative point of reference for comparison."
},
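For concreteness, the snippet below produces the two baseline representations described above, a term-frequency BoW vector and a length-limited Transformer encoding; it is an illustrative sketch rather than the exact experimental code, and the example texts are invented.

```python
# Illustrative baseline representations: term-frequency BoW vs. a Transformer
# encoding that must truncate long inputs (sketch, not the experimental code).
from sklearn.feature_extraction.text import CountVectorizer
from transformers import AutoTokenizer

docs = ["the movie was boring but the ending surprised me",
        "a clear and useful app with a clean interface"]

bow = CountVectorizer()                      # word order is discarded
X = bow.fit_transform(docs)                  # (n_docs, vocab_size) sparse counts
print(X.shape)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = tok(docs, truncation=True, max_length=512, padding=True, return_tensors="pt")
print(enc["input_ids"].shape)                # inputs longer than 512 tokens are cut off
```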
{
"figure_ref": [],
"heading": "Datasets",
"publication_ref": [
"b12",
"b36",
"b9",
"b38",
"b21",
"b24",
"b10",
"b18"
],
"table_ref": [
"tab_0"
],
"text": "The literature review reveals that many graph-based text representation methods have been evaluated on different datasets. Most of the time, the proposals were each introduced for a specific task domain and validated on text with very restricted characteristics, such as a limited vocabulary and an average document length of up to 221 words (Hassan and Banea, 2006;Yao et al., 2019). Hence, it is unclear how well these approaches can generalize to other kinds of data in different domains and be applied to longer documents.\nWe assess the generalizability of graph strategies in text classification, including sentiment analysis, topic classification, and hyperpartisan news detection, across balanced and unbalanced scenarios, including longer documents. We utilize five publicly available datasets (see Table 1), with further details provided in Appendix A. App Reviews (Grano et al., 2017) -English user reviews of Android applications for fine-grained sentiment analysis in an imbalanced setting. DBpedia (Zhang et al., 2015) -A dataset for topic classification consisting of Wikipedia articles based on DBpedia 2014 classes (Lehmann et al., 2015). IMDB (Maas et al., 2011) -Movie reviews from the Internet Movie Database for binary sentiment classification. BBC News (Greene and Cunningham, 2006) -A topic classification dataset 1 consisting of 2,225 English documents from the BBC News website. Hyperpartisan News Detection (HND) (Kiesel et al., 2018) -A collection of 645 news articles 2 labeled according to whether it shows blind or unreasoned allegiance to one party or entity. The dataset exhibits a minor class imbalance. ), the imbalance rate between the minority and majority classes (IR), and the proportion of long documents."
},
{
"figure_ref": [],
"heading": "Experimental Setup",
"publication_ref": [],
"table_ref": [],
"text": ""
},
{
"figure_ref": [],
"heading": "Data Preparation",
"publication_ref": [],
"table_ref": [],
"text": "A fixed-size data partition was taken from each dataset to conduct a fair comparative analysis among the methods. Thus, a training and test split was defined, consisting of 7,000 and 3,000 samples, respectively. For those datasets that did not have that many examples, i.e., BBC News and HND, 80% of the samples were used for training and the remaining 20% for testing. For all datasets, we randomly reserve 10% of the samples from the training set for building the validation set.\nSince each graph node represents a word of the input text, a consistent text normalization scheme is needed: We applied lowercase conversion, punctuation mark and stop word removal, as well as eliminating any other non-ASCII characters.\nNote that our TextLevelGCN experiments are conducted using the official implementation 3 , which incorporates additional preprocessing. This includes removing tokens with fewer than three characters, limiting document lengths to 350 terms, eliminating words with a frequency less than 5, applying lemmatization, as well as applying expansion rules to remove English contractions.\n3 https://github.com/mojave-pku/TextLevelGCN"
},
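A minimal sketch of the normalization pipeline described above (lowercasing, punctuation and stop-word removal, dropping non-ASCII characters); the stop-word list and exact rules here are stand-ins, not the authors' released preprocessing code.

```python
# Illustrative text normalization mirroring the steps described above.
import re
import string

STOP_WORDS = {"the", "a", "an", "and", "or", "but", "is", "was", "i", "by"}  # stand-in list

def normalize(text: str) -> list[str]:
    text = text.lower()
    text = text.encode("ascii", errors="ignore").decode()              # drop non-ASCII
    text = text.translate(str.maketrans("", "", string.punctuation))   # drop punctuation
    tokens = re.split(r"\s+", text.strip())
    return [t for t in tokens if t and t not in STOP_WORDS]

print(normalize("The movie was boring, but I was surprised by the ending!"))
```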
{
"figure_ref": [],
"heading": "Model Settings",
"publication_ref": [
"b35",
"b33",
"b29",
"b26"
],
"table_ref": [],
"text": "Graph Neural Networks. For GNN experiments on Intuitive Graphs, we vary the number of hidden layers from 1 to 4 and vary the dimensionality of node representations in {16, 32, 64}. We applied Dropout after every convolutional layer with a retention probability of 0.8 and used average pooling for node-level aggregation. The final representation is fed into a softmax classifier.\nWe compared three types of graph convolutional neural layers: (i) the traditional one (GCN; Kipf and Welling 2016), (ii) using a graph isomorphism operator (GIN; Xu et al. 2018), which has shown improved structural discriminative power compared to GCNs, and (iii) including a graph attentional operator (GAT; Velickovic et al. 2017) with 4 attention heads. Our experiments were based on PyTorch Geometric (see Appendix E).\nFor TextLevelGCN, we used default parameter settings as in the original implementation, varying the window size (n-gram parameter) from 1 to 4.\nFour different node vector initialization strategies were also compared. We considered GloVe Wiki-Gigaword 300-dim. embeddings (Pennington et al., 2014), Word2Vec Google News 300-dim. embeddings (Mikolov et al., 2013), static BERT pre-trained embeddings (encoding each token independently and averaging for split terms), and contextualized BERT embeddings. The latter involves encoding the entire input text using BERT and using token embeddings from the 12th layer.\nBag of Words Baseline. We employed a cut-off for building the BoW vocabulary by eliminating terms with a document frequency higher than 99% or lower than 0.5%. Once the BoW representations are obtained, a Multi-Layer Perceptron model with one hidden layer is trained for text classification (BoW MLP). We varied the number of hidden units in {32, 64, 128, 256} and applied Dropout right before the final classification layer, as done for GNNs.\nAll GNNs and BoW MLP used a batch size of 64 samples and were trained for a maximum of 100 epochs using Adam optimization (Kingma and Ba, 2014) with an initial learning rate of 10 -4 . The training was stopped if the validation macroaveraged F1 score did not improve for ten consecutive epochs. Only for HND, the patience was 20.\nTransformer-based Baselines. We fully finetuned BERT-base uncased, including a Dropout layer right after it with a retention probability of 80%, and a final dense layer for conducting the text classification. During training, the batch size and learning rate were set to 32 and 10 -6 , respectively. The maximum number of epochs was 10, and the patience was 5. The same procedure was followed for Longformer-base 4 . However, given the complexity of the model (148 M trainable parameters) and computing resource constraints, the maximum sequence length was set to 1,024 tokens, and the batch size was set to 16.\nGeneral Setup. The objective function of each model was to minimize the cross-entropy loss. Supplementary experimental details are provided in Appendix A, Appendix C, and Appendix E. For reproducibility, we release our code on https: //github.com/Buguemar/GRTC_GNNs."
},
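The GNN variants compared above can be summarized in a single PyTorch Geometric module; the sketch below is our own minimal reconstruction of the described setup (interchangeable GCN/GIN/GAT layers, dropout with 0.8 retention, node-level average pooling, and a final classification layer), not the released training code.

```python
# Minimal PyTorch Geometric sketch of the compared GNN classifiers
# (our reconstruction of the described setup, not the authors' code).
import torch
import torch.nn.functional as F
from torch import nn
from torch_geometric.nn import GCNConv, GINConv, GATConv, global_mean_pool

def make_conv(kind, in_dim, out_dim):
    if kind == "gcn":
        return GCNConv(in_dim, out_dim)
    if kind == "gin":
        return GINConv(nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU(),
                                     nn.Linear(out_dim, out_dim)))
    if kind == "gat":
        return GATConv(in_dim, out_dim, heads=4, concat=False)
    raise ValueError(kind)

class GraphClassifier(nn.Module):
    def __init__(self, kind, in_dim=300, hidden=64, num_layers=2, num_classes=2):
        super().__init__()
        dims = [in_dim] + [hidden] * num_layers
        self.convs = nn.ModuleList(
            [make_conv(kind, d_in, d_out) for d_in, d_out in zip(dims, dims[1:])])
        self.dropout = nn.Dropout(p=0.2)    # i.e., a retention probability of 0.8
        self.out = nn.Linear(hidden, num_classes)

    def forward(self, x, edge_index, batch):
        for conv in self.convs:
            x = self.dropout(F.relu(conv(x, edge_index)))   # dropout after every conv layer
        x = global_mean_pool(x, batch)       # average pooling over the nodes of each graph
        return self.out(x)                   # logits; softmax is applied inside the loss
```

Training would then follow the stated recipe (Adam with learning rate 10^-4, batch size 64, cross-entropy loss, early stopping on validation macro-averaged F1).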
{
"figure_ref": [],
"heading": "Results and Analysis",
"publication_ref": [],
"table_ref": [
"tab_1",
"tab_2",
"tab_3"
],
"text": "Table 2 and Table 3 show the best architecture and setup for each dataset employing Intuitive Graphs and TextLevelGCN, respectively. The results correspond to the average obtained from 10 independent runs. As some datasets exhibit class imbalance, each table reports the accuracy and the macroaveraged F1-score. The best results are reported in bold, while a star mark is used to indicate the best architecture across the entire dataset. For a full comparison, see Appendix B and Appendix C.\nA comparison with baselines such as BERT is given in Table 4."
},
{
"figure_ref": [],
"heading": "How do GNN Architecture and Setup",
"publication_ref": [
"b35"
],
"table_ref": [
"tab_1",
"tab_1",
"tab_5"
],
"text": "Affect the Classification Effectiveness?\nGNN Message Passing. Table 2 shows GAT as the most effective strategy for DBpedia, IMDB, and BBC News, compared to other convolutional layers. Due to its attention mechanism, GAT can identify those nodes that are relevant for the final prediction. GAT models also proved to be more robust to variations in parameters such as the number of layers and the hidden units (Appendix B). However, for imbalanced classification with very short texts (as on App Reviews), GAT is not as effective. In such settings, the graphs have very few nodes, and the attention heads appear to fail to identify the most pertinent ones. Similarly, GAT struggled on HND: Although HND contains extremely long documents and thus there are sufficient elements to exploit, many of the tokens are HTML and PHP markers, or similar source artifacts. Thus, much of the input is insignificant for the task and the attention heads fail to identify relevant nodes. GIN proves to be the best choice for such cases, exploiting the graph structural information for superior discriminative power over traditional GCNs (Xu et al., 2018). While GCNs use simple averages of neighboring node representations, GIN defines a weighted average by learning to determine the importance of a node compared to its neighboring nodes (ϵ-value), which is then fed into an MLP. Thus, GIN can distinguish node neighborhoods, discerning structural information among graph classes. Since our document graphs are based on word co-occurrence, GIN can exploit structural regularities and identify recurrent associations between specific words, which can be crucial for predicting the correct graph-level label.\nNode Feature Initialization. A noteworthy finding is that the best results were mostly obtained with non-BERT initializations. Well-known static word embeddings with a much lower dimensionality appear to yield better results than BERT embeddings. This is the case for App Reviews and IMDB using Word2Vec, and BBC News using GloVe. Similarly, when using TextLevelGCN as an elaborated graph construction, Word2Vec obtained better results than BERT initialization for some tasks. Moreover, a 1-gram graph construction is sufficient for medium and long text classification when using such an initialization strategy. However, denser graphs are required for short texts.\nConvolutional layers. The results indicate that the optimal number of convolutional layers is taskdependent, with 1 or 2 layers favored for tasks centered on local patterns and more layers nec- essary for tasks requiring broader information.\nThe contextual understanding, whether local or global, is also influenced by the document length.\nFor instance, to comprehensively grasp the document's sentiment, a sentence-level analysis is vital, whereas if the document comprises only one or two sentences, a wider document-level view is preferable. This is shown in Table 2 andTable 5, where using 3 layers produced the best App Reviews results."
},
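For reference, the two aggregation rules contrasted here can be written explicitly; these are the standard formulations from Kipf and Welling (2016) and Xu et al. (2018), with epsilon the learnable importance term mentioned above.

```latex
% GCN layer: degree-normalized average over the neighborhood (including the node itself)
h_v^{(k)} = \sigma\Big( W^{(k)} \sum_{u \in \mathcal{N}(v) \cup \{v\}}
            \frac{1}{\sqrt{\hat{d}_u \hat{d}_v}} \, h_u^{(k-1)} \Big)

% GIN layer: (1 + \epsilon)-weighted self term plus the neighbor sum, fed into an MLP
h_v^{(k)} = \mathrm{MLP}^{(k)}\Big( (1 + \epsilon^{(k)}) \, h_v^{(k-1)}
            + \sum_{u \in \mathcal{N}(v)} h_u^{(k-1)} \Big)
```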
{
"figure_ref": [],
"heading": "What Graph Construction Method is",
"publication_ref": [],
"table_ref": [
"tab_2",
"tab_1"
],
"text": "Most Effective for Text Classification?\nIntuitive Graphs. The sequence construction in general shows worse performance than its simplified version, which indicates that the use of discrete weights in the edges does not provide relevant information for datasets such as App Reviews, DBpedia, and IMDB. BBC News appears to be an exception: Since news articles tend to reiterate key facts in the news multiple times, exact co-occurrences of word pairs appear to be frequent and might be meaningful. Despite also consisting of news articles, HND behaves similarly to other datasets in that Sequence simp significantly outperforms the weighted version, which fails to learn the task. This may be due to noisy tokens such as HTML tags that may occur numerous times. When omitting edge weights, the model may be less affected by such noise.\nRegarding the window-based graph construction, the extended version does not show a significant improvement over the base version with N = 2. This is because a higher N increases the average degree of the graph, making it difficult to extract local patterns and discern the content of the text. Hence, Window mostly outperformed Window ext .\nOverall, the window-based construction is recommended when the classification task is as simple as topic recognition. This allows a faster and more direct identification of the input document's vocabulary, as each token accesses both its left and right context immediately and can identify recurrent words. Moreover, a quick vocabulary exploration is achieved as N grows.\nIn contrast, for tasks such as sentiment analysis or identifying writing styles and biases in a given article, a detailed analysis of the term order is necessary. In this case, a sequence-based construction seems preferable. Although directed graphs may be limited to a left-to-right construction, GNNs spread the node information between neighbors and thus exploit structural and linguistic textual features, as local and global contexts of the document.\nTextLevelGCN. Table 3 shows that TextLevel-GCN is the best-performing graph-based model for App Reviews, implying that the task benefits from edge weights, but that they should be soft values for a smoother learning curve. Otherwise, it is preferable to omit them by employing a Sequence simp construction. Nonetheless, TextLevelGCN underperforms Intuitive Graphs on all other datasets, even when processing medium-length documents. As in Table 2, for TextLevelGCN there is a connection between the classification task and node feature initialization. Topic classification tasks obtained better results when employing BERT for 2-gram and 3-gram setups. Since vocabulary exploration is relevant to solve the task, an extended left-right context graph construction is beneficial. Likewise, since BERT embeddings are highdimensional vectors, they are more valuable than other strategies. In turn, the best results for sentiment analysis and detection of biased writing were obtained by 1-gram graphs using Word2Vec. In these cases, only 300 dimensions are sufficient to get competitive results. Given that App Reviews documents are extremely short, the local context in the text is insignificant and exploring the global context through denser 3-gram graphs is required."
},
{
"figure_ref": [
"fig_2"
],
"heading": "Can Graphs Compete with",
"publication_ref": [],
"table_ref": [],
"text": "State-Of-The-Art Sequence Models?\nAlthough graphs do not attain the results of Transformer-based ones for short and mediumlength document classification, Intuitive Graphs perform better the longer the documents are. Graph representations are designed to harness the text's structure, and as such, their performance is expected to excel in longer documents as there is more information and structural patterns to exploit. For BBC News, Window ext has the secondbest accuracy at only 0.2 points behind the bestperforming model, Longformer. Intuitive Graphs dominate as the best way to represent longer documents (HND). For this scenario, there is a noticeable gap between the best and the second-best model. Therefore, graph-based document representations appear to provide clear advantages when processing long texts. Note that in this task, TextLevelGCN performs better than BERT but worse than BoW MLP. This suggests that, despite its effectiveness, TextLevelGCN loses a significant part of the input document by defining a much smaller maximum length for text documents (350 tokens). BoW MLP represents each document by considering the entire dataset's vocabulary, granting access to terms beyond TextLevelGCN's scope.\nOne of the strongest aspects of Intuitive Graphs methods is that they require much less time and compute resources than popular alternatives during training. Although an extra step is required to create document graph representations, the results indicate that the total execution time, including graph creation and model execution, is not an issue. For short texts as in DBpedia, e.g., the window graph is on par with the top-performing LLM, with just a 0.8% accuracy difference and 5.7 times faster speed. Likewise, BERT beats Sequence graphs on IMDB by only 0.5% in accuracy, while being 3.5 times slower. Note that BoW MLP is not included in Figure 2, since it did not obtain good results.\nIn contrast, since BERT and Longformer are highly complex models in terms of the number of learnable parameters, a higher execution time than for graph-based models is expected. Interestingly, shorter documents, such as those in App Reviews and DBpedia, take even longer than medium-length documents. This suggests that the models require several iterations to converge on these particular tasks. Beyond this, note the abrupt decrease in the execution time for the BBC and HND datasets is because they have a small number of samples. Therefore, the total runtime is much shorter compared to the others. See Appendix D for more details on the runtime and resource utilization."
},
{
"figure_ref": [],
"heading": "Discussion",
"publication_ref": [],
"table_ref": [],
"text": "The results show that graph-based document representation holds promise as a way of providing struc-tural information to deep neural networks. Graphbased learning models are powerful and allow the extraction of complex patterns from text. However, they are particularly task-sensitive and depend on the lexical features of the documents to be represented. Thus, special care must be taken to properly define the components of the structure (nodes, edges, and the similarity function as edge label). Despite this, the most simplistic graph constructions can address text classification fairly well, proving competitive even in challenging scenarios such as with data imbalance and noisy documents.\nAn interesting finding is that when the focus of the text classification task is on the vocabulary, the global context is much more relevant than the local context of the document. Thus, the best graph construction strategies are those based on extended cooccurrence windows, yielding denser graphs. On the other hand, when the focus is on understanding the document as a whole and how the various parts of the text are connected, the local context becomes much more valuable. Therefore, Window (N=2) or Sequential graphs are recommended."
},
{
"figure_ref": [],
"heading": "Conclusion",
"publication_ref": [],
"table_ref": [],
"text": "We present an empirical analysis of graph representations for text classification by comprehensively analyzing their effectiveness across several GNN architectures and setups. The experiments consider a heterogeneous set of five datasets, encompassing short and long documents. The results show that the strength of graph-based models is closely tied to the textual features and the source domain of documents. Thus, the choice of nodes and edges is found to be crucial. Despite this, Intuitive Graphs are shown to be a strong option, reaching competitive results across all considered tasks, especially for longer documents, exceeding those of BERT and Longformer. Additionally, we observed that pre-trained static word embeddings, instead of BERT vectors, allow reaching outstanding results on some tasks.\nWe are enthusiastic about extending our study to further tasks in future work. To this end, we are releasing our code on GitHub5 and hope that it can grow to become a community resource. Additionally, we will expand this study by exploring approaches for learning the graph structure to eliminate the need for picking a design manually, being less domain-dependent."
},
{
"figure_ref": [],
"heading": "Limitations",
"publication_ref": [],
"table_ref": [],
"text": "While this study successfully shows the impact and potential of graphs for document representation, there are some points to keep in mind.\nFirst, despite all the judgments and conclusions presented being supported by the results of the experiments, they were based on graph neural network models trained on particular sub-partitions, as stated in Section 3.3.1, so as to allow a fairer comparison between models. However, this means that the results reported here are not directly comparable with those reported in the literature. To assess how the models are positioned with regard to the state-of-the-art in the different tasks, it is advisable to train on the original training partitions and thus learn from all the available data.\nIt is also important to note that our study analyzes multiple text representation strategies on text classification only. Although this is one of the most important classes of NLP tasks, we cannot ensure that such graph approaches show the same behavior in other tasks. Therefore, tackling other types of problems that require a deep level of understanding of the local and global context of the text is an important direction for future work.\nFinally, all the experiments were run on English data. As English has comparatively simple grammar and well-known rules for conjugations and plurals, it is possible that graph-based models may not be as effective in other languages. Analyzing this aspect would be particularly interesting for low-resource languages."
},
{
"figure_ref": [],
"heading": "Ethics Statement",
"publication_ref": [],
"table_ref": [],
"text": "This work studies fundamental questions that can be invoked in a multitude of different application contexts. Different applications entail different ethical considerations that need to be accounted for before deploying graph-based representations. For instance, applying a trained hyperpartisan news detection model in an automated manner bears the risk of false positives, where legitimate articles get flagged merely for a choice of words that happens to share some resemblance with words occurring in hyperpartisan posts. For sentiment classification, Mohammad (2022) provides an extensive discussion of important concerns. Hence, ethical risks need to be considered depending on the relevant target use case."
},
{
"figure_ref": [],
"heading": "A Dataset Descriptions",
"publication_ref": [
"b9",
"b38",
"b21",
"b24",
"b10",
"b18",
"b17"
],
"table_ref": [],
"text": "We provide a detailed description of the datasets used for our text classification experiments. All of them were labeled by experts and validated by the community.\nApp Reviews. The dataset is a collection of 288,065 English user reviews of Android applications from 23 different app categories (Grano et al., 2017). The goal of the dataset is the fine-grained sentiment analysis in an imbalanced setting, where 60.5% of the total samples correspond to 4-star reviews. Each example includes the name of the software application package, the comment, the date when the user posted the evaluation, and the rating provided.\nDBpedia. For topic classification, the DBpedia ontology classification dataset (Zhang et al., 2015) was constructed by picking 14 non-overlapping classes from DBpedia 2014 (Lehmann et al., 2015). For each category, the authors randomly chose 40,000 Wikipedia articles as training samples and 5,000 samples for testing. Every article contains the title, content, and class label. Although the original DBpedia is a multilingual knowledge base, this dataset only contains English data.\nIMDB. English language movie reviews from the Internet Movie Database for binary sentiment classification (Maas et al., 2011). The dataset is composed of 25,000 reviews for training and 25,000 for testing, with balanced numbers of positive and negative reviews.\nBBC News. This is a publicly available6 dataset consisting of 2,225 English documents from the BBC News website (Greene and Cunningham, 2006). The articles correspond to stories from 2004-2005 in the areas of business, entertainment, politics, sport, and technology. The dataset exhibits minor class imbalance, with sports being the majority class with 511 articles, while entertainment is the smallest one with 386 samples.\nHyperpartisan News Detection (HND). A dataset7 for binary news classification (Kiesel et al., 2018). Although it comprises two parts, byarticle and bypublisher, this study only uses the first one. The dataset has 645 English samples labeled through crowdsourcing, with 238 (37%) labeled as hyperpartisan and 407 (63%) as not hyperpartisan. The challenge of this task is to detect the hyperpartisan language, which may be distinguishable from regular news at the levels of style, syntax, semantics, and pragmatics (Kiesel et al., 2019)."
},
{
"figure_ref": [],
"heading": "B Word Embeddings for Node Initialization",
"publication_ref": [],
"table_ref": [],
"text": "In the following, we provide further more detailed investigations pertaining to the choice of word embeddings to initialize node representations."
},
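As a concrete illustration of the two BERT-based initializations compared in this appendix, the sketch below derives a static vector (each word encoded in isolation, sub-word pieces averaged) and contextualized vectors (last-layer token states of the full text) with Hugging Face transformers. It is a simplified reconstruction for illustration, not the exact extraction script; in particular, mapping contextual word-piece states back to word nodes would still require an alignment step.

```python
# Sketch: static vs. contextualized BERT node features (simplified reconstruction).
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased").eval()

@torch.no_grad()
def static_vector(word):
    """Encode the word in isolation; average its word-piece vectors."""
    enc = tok(word, return_tensors="pt", add_special_tokens=False)
    hidden = bert(**enc).last_hidden_state[0]        # (n_pieces, 768)
    return hidden.mean(dim=0)

@torch.no_grad()
def contextual_vectors(text):
    """Encode the whole text; return last-layer (12th-layer) states per word piece."""
    enc = tok(text, return_tensors="pt", truncation=True)
    return bert(**enc).last_hidden_state[0]           # (seq_len, 768)

print(static_vector("ending").shape)                  # torch.Size([768])
print(contextual_vectors("I was surprised by the ending").shape)
```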
{
"figure_ref": [],
"heading": "B.1 Intuitive Graphs",
"publication_ref": [],
"table_ref": [
"tab_5",
"tab_9",
"tab_1",
"tab_2"
],
"text": "We include the results reported by the GNN models trained on the different datasets using four different node feature initialization strategies.\nThe results are shown from Table 5 to Table 9 and include BERT pre-trained word embeddings (BERT), contextualized BERT (BERT-C), GloVe, and Word2Vec. Each table presents the accuracy and macro averaged F1-score as averages over 10 runs. Note that the underlined embedding strategy is the one that attained the best performance, as shown in Table 2 and Table 3."
},
{
"figure_ref": [],
"heading": "B.2 TextLevelGCN",
"publication_ref": [],
"table_ref": [
"tab_2",
"tab_0"
],
"text": "As discussed in Section 3.1, one of the main contributions of TextLevelGCN is that it allows duplicate nodes when a term occurs more than once in the input text. Therefore, it takes care of polysemy. Hence, using the message-passing function, the model can infer the proper meaning of the token given its local context. Given this peculiarity, we exclude contextualized BERT (BERT-C) as a node feature initialization strategy. Thus, the performance of TextLevelGCN was analyzed using BERT pre-trained word embeddings, GloVe, and Word2Vec. Note that the underlined embedding strategy is the one that attained the best performance, as in Table 3. The results are presented in Table 10 and correspond to the average over 10 independent trials."
},
{
"figure_ref": [],
"heading": "C Transformer-based language models",
"publication_ref": [],
"table_ref": [
"tab_0"
],
"text": "In order to provide results on a broader spectrum regarding the behavior of Transformer-based LMs, we performed additional experiments using the pretrained BERT and Longformer models. The corresponding results are shown in Table 11.\nA pre-trained BERT-base uncased model was included by freezing the encoder architecture and stacking a final dense layer for conducting the corresponding text classification, as done for the fully fine-tuned version described in Section 3.3.2. The same process was followed for the pre-trained Longformer-base. In this case, we conducted experiments setting a maximum sequence length of 512, and 1,024. This was done to have a fair comparison regarding BERT and thus separate the effect that attention has on both approaches.\nFor training, we used Adam optimizer (Kingma and Ba, 2014) with an initial learning rate of 10 -4 , a batch size of 64 samples, 100 epochs as a maximum, and early stopping with patience 10. Only for HND dataset, the patience was 20. All the experiments conducted in this study were run on an NVIDIA RTX A6000 with 48GB VRAM."
},
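The frozen-encoder baselines described above amount to training only a dense head on top of fixed BERT (or Longformer) states; a simplified sketch, not the exact experimental code:

```python
# Sketch of the frozen-encoder baseline: encoder weights fixed,
# only a dense classification head is trained (simplified reconstruction).
import torch
from torch import nn
from transformers import AutoModel

class FrozenEncoderClassifier(nn.Module):
    def __init__(self, num_classes, model_name="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        for p in self.encoder.parameters():
            p.requires_grad = False                       # freeze the encoder
        self.head = nn.Linear(self.encoder.config.hidden_size, num_classes)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        return self.head(out.last_hidden_state[:, 0])     # classify from the [CLS] state
```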
{
"figure_ref": [],
"heading": "D Runtime & Resource Utilization",
"publication_ref": [],
"table_ref": [
"tab_3",
"tab_10"
],
"text": "To complement the results reported in Table 4, we measured the GPU utilization (%) and GPU memory usage (%) for each of the models. We also measured these metrics for each graph construction when applied to each of the datasets to find out how the strategies behave when scaling to longer documents. We tracked model performance by using Weights & Biases (W&B)8 platform. We reran all the models using the same batch size for a fair comparison.\nTable 13 suggests: i) The increase in GPU utilization is minimal as the document length increases. Specifically, as the document length increases by one order of magnitude, GPU utilization increases by about 1.5% when employing Intuitive Graphs and 8-10% for TLGCN. ii) The GPU memory allocated for graph strategies is constrained to below 6%, representing a mere fifth of the memory consumed by BERT and less than a tenth of the memory consumed by Longformer. This is a significant consideration when computational resources are restricted."
},
{
"figure_ref": [],
"heading": "E Libraries Used",
"publication_ref": [],
"table_ref": [
"tab_11"
],
"text": "In order to provide the reader and practitioners with the necessary details to regenerate the reported results, Table 14 presents all the libraries used to perform the experiments. "
},
{
"figure_ref": [],
"heading": "Acknowledgments",
"publication_ref": [],
"table_ref": [],
"text": "This study was possible due to the funding of the Data Science and Engineering (DSE) Research School program at Hasso Plattner Institute."
}
] | 2024-01-22 | 10.18653/v1/D19-1345 | [
{
"authors": "Shilpa Arora; Mahesh Joshi; Carolyn Rosé",
"journal": "",
"ref_id": "b0",
"title": "Identifying types of claims in online customer reviews",
"year": "2009"
},
{
"authors": "Iz Beltagy; Matthew E Peters; Arman Cohan",
"journal": "",
"ref_id": "b1",
"title": "Longformer: The long-document transformer",
"year": "2020"
},
{
"authors": "Esteban Castillo; Ofelia Cervantes; Darnes Vilari; David",
"journal": "International Journal of Computer Applications",
"ref_id": "b2",
"title": "Author verification using a graphbased representation",
"year": "2015"
},
{
"authors": "Esteban Castillo; Ofelia Cervantes; Darnes Vilarino",
"journal": "Computación y Sistemas",
"ref_id": "b3",
"title": "Text analysis using different graph-based representations",
"year": "2017"
},
{
"authors": "Kyunghyun Cho; Bart Van Merriënboer; Caglar Gulcehre; Dzmitry Bahdanau; Fethi Bougares; Holger Schwenk; Yoshua Bengio",
"journal": "",
"ref_id": "b4",
"title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation",
"year": "2014"
},
{
"authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova",
"journal": "",
"ref_id": "b5",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"year": "2018"
},
{
"authors": "Kaize Ding; Jianling Wang; Jundong Li; Dingcheng Li; Huan Liu",
"journal": "",
"ref_id": "b6",
"title": "Be more with less: Hypergraph attention networks for inductive text classification",
"year": "2020"
},
{
"authors": "Timothy Dozat; Christopher D Manning",
"journal": "",
"ref_id": "b7",
"title": "Deep biaffine attention for neural dependency parsing",
"year": "2016"
},
{
"authors": "Lukas Galke; Ansgar Scherp",
"journal": "",
"ref_id": "b8",
"title": "Bag-of-words vs. graph vs. sequence in text classification: Questioning the necessity of text-graphs and the surprising strength of a wide mlp",
"year": "2022"
},
{
"authors": "Giovanni Grano; Andrea Di Sorbo; Francesco Mercaldo; A Corrado; Gerardo Visaggio; Sebastiano Canfora; Panichella",
"journal": "",
"ref_id": "b9",
"title": "Android apps and user feedback: a dataset for software evolution and quality improvement",
"year": "2017"
},
{
"authors": "Derek Greene; Pádraig Cunningham",
"journal": "",
"ref_id": "b10",
"title": "Practical solutions to the problem of diagonal dominance in kernel document clustering",
"year": "2006"
},
{
"authors": "Yongchun Gu; Yi Wang; Heng-Ru Zhang; Jiao Wu; Xingquan Gu",
"journal": "IEEE Access",
"ref_id": "b11",
"title": "Enhancing text classification by graph neural networks with multi-granular topicaware graph",
"year": "2023"
},
{
"authors": "Samer Hassan; Carmen Banea",
"journal": "Association for Computational Linguistics",
"ref_id": "b12",
"title": "Random-walk term weighting for improved text classification",
"year": "2006"
},
{
"authors": "Lianzhe Huang; Dehong Ma; Sujian Li; Xiaodong Zhang; Houfeng Wang; ; ",
"journal": "Association for Computational Linguistics",
"ref_id": "b13",
"title": "Text level graph neural network for text classification",
"year": "2019"
},
{
"authors": "Lianzhe Huang; Dehong Ma; Sujian Li; Xiaodong Zhang; Houfeng Wang",
"journal": "Association for Computational Linguistics",
"ref_id": "b14",
"title": "Text level graph neural network for text classification",
"year": "2019"
},
{
"authors": "Yen-Hao Huang; Yi-Hsin Chen; Yi-Shin Chen",
"journal": "",
"ref_id": "b15",
"title": "Contexting: Granting document-wise contextual embeddings to graph neural networks for inductive text classification",
"year": "2022"
},
{
"authors": "Mahesh Joshi; Carolyn Rosé",
"journal": "",
"ref_id": "b16",
"title": "Generalizing dependency features for opinion mining",
"year": "2009"
},
{
"authors": "Johannes Kiesel; Maria Mestre; Rishabh Shukla; Emmanuel Vincent; Payam Adineh; David Corney; Benno Stein; Martin Potthast",
"journal": "",
"ref_id": "b17",
"title": "Semeval-2019 task 4: Hyperpartisan news detection",
"year": "2019"
},
{
"authors": "Johannes Kiesel; Maria Mestre; Rishabh Shukla; Emmanuel Vincent; David Corney; Payam Adineh; Benno Stein; Martin Potthast",
"journal": "",
"ref_id": "b18",
"title": "Data for pan at semeval 2019 task 4: Hyperpartisan news detection",
"year": "2018"
},
{
"authors": "P Diederik; Jimmy Kingma; Ba",
"journal": "",
"ref_id": "b19",
"title": "Adam: A method for stochastic optimization",
"year": "2014"
},
{
"authors": "N Thomas; Max Kipf; Welling",
"journal": "",
"ref_id": "b20",
"title": "Semisupervised classification with graph convolutional networks",
"year": "2016"
},
{
"authors": "Jens Lehmann; Robert Isele; Max Jakob; Anja Jentzsch; Dimitris Kontokostas; Pablo N Mendes; Sebastian Hellmann; Mohamed Morsey; Patrick Van Kleef; Sören Auer",
"journal": "Semantic web",
"ref_id": "b21",
"title": "Dbpedia-a large-scale, multilingual knowledge base extracted from wikipedia",
"year": "2015"
},
{
"authors": "Yujia Li; Daniel Tarlow; Marc Brockschmidt; Richard Zemel",
"journal": "",
"ref_id": "b22",
"title": "Gated graph sequence neural networks",
"year": "2015"
},
{
"authors": "Xien Liu; Xinxin You; Xiao Zhang; Ji Wu; Ping Lv",
"journal": "",
"ref_id": "b23",
"title": "Tensor graph convolutional networks for text classification",
"year": "2020"
},
{
"authors": "Andrew L Maas; Raymond E Daly; Peter T Pham; Dan Huang; Andrew Y Ng; Christopher Potts",
"journal": "Association for Computational Linguistics",
"ref_id": "b24",
"title": "Learning word vectors for sentiment analysis",
"year": "2011"
},
{
"authors": "Rada Mihalcea; Paul Tarau",
"journal": "Association for Computational Linguistics",
"ref_id": "b25",
"title": "TextRank: Bringing order into text",
"year": "2004"
},
{
"authors": "Tomas Mikolov; Ilya Sutskever; Kai Chen; Greg S Corrado; Jeff Dean",
"journal": "Advances in neural information processing systems",
"ref_id": "b26",
"title": "Distributed representations of words and phrases and their compositionality",
"year": "2013"
},
{
"authors": "M Saif; Mohammad",
"journal": "Computational Linguistics",
"ref_id": "b27",
"title": "Ethics Sheet for Automatic Emotion Recognition and Sentiment Analysis",
"year": "2022"
},
{
"authors": "Giannis Nikolentzos; Antoine Tixier; Michalis Vazirgiannis",
"journal": "",
"ref_id": "b28",
"title": "Message passing attention networks for document understanding",
"year": "2020"
},
{
"authors": "Jeffrey Pennington; Richard Socher; Christopher D Manning",
"journal": "",
"ref_id": "b29",
"title": "Glove: Global vectors for word representation",
"year": "2014"
},
{
"authors": "Yujie Qian; Enrico Santus; Zhijing Jin; Jiang Guo; Regina Barzilay",
"journal": "Association for Computational Linguistics",
"ref_id": "b30",
"title": "GraphIE: A graph-based framework for information extraction",
"year": "2019"
},
{
"authors": "Rahul Ragesh; Sundararajan Sellamanickam; Arun Iyer; Ramakrishna Bairi; Vijay Lingam",
"journal": "",
"ref_id": "b31",
"title": "Hetegcn: heterogeneous graph convolutional networks for text classification",
"year": "2021"
},
{
"authors": "François Rousseau; Emmanouil Kiagias; Michalis Vazirgiannis",
"journal": "",
"ref_id": "b32",
"title": "Text categorization as a graph classification problem",
"year": "2015"
},
{
"authors": "Petar Velickovic; Guillem Cucurull; Arantxa Casanova; Adriana Romero; Pietro Lio; Yoshua Bengio",
"journal": "stat",
"ref_id": "b33",
"title": "Graph attention networks",
"year": "1050"
},
{
"authors": "Yizhao Wang; Chenxi Wang; Jieyu Zhan; Wenjun Ma; Yuncheng Jiang",
"journal": "Expert Systems with Applications",
"ref_id": "b34",
"title": "Text fcg: Fusing contextual information via graph learning for text classification",
"year": "2023"
},
{
"authors": "Keyulu Xu; Weihua Hu; Jure Leskovec; Stefanie Jegelka",
"journal": "",
"ref_id": "b35",
"title": "How powerful are graph neural networks?",
"year": "2018"
},
{
"authors": "Liang Yao; Chengsheng Mao; Yuan Luo",
"journal": "",
"ref_id": "b36",
"title": "Graph convolutional networks for text classification",
"year": "2019"
},
{
"authors": "Fan Hao Yuan; Mengnan Yang; Shuiwang Du; Xia Ji; Hu",
"journal": "Applied AI Letters",
"ref_id": "b37",
"title": "Towards structured nlp interpretation via graph explainers",
"year": "2021"
},
{
"authors": "Xiang Zhang; Junbo Zhao; Yann Lecun",
"journal": "Advances in neural information processing systems",
"ref_id": "b38",
"title": "Character-level convolutional networks for text classification",
"year": "2015"
},
{
"authors": "Yufeng Zhang; Xueli Yu; Zeyu Cui; Shu Wu; Zhongzhen Wen; Liang Wang",
"journal": "Association for Computational Linguistics",
"ref_id": "b39",
"title": "Every document owns its structure: Inductive text classification via graph neural networks",
"year": "2020"
},
{
"authors": "",
"journal": "App Reviews BERT",
"ref_id": "b40",
"title": "1-gram 2-gram 3-gram 4-gram Dataset Emb. Acc F1-ma Acc F1-ma Acc F1-ma Acc F1-ma",
"year": ""
},
{
"authors": "",
"journal": "",
"ref_id": "b41",
"title": "0 Table 10: Word embeddings as TextLevelGCN node initialization. Accuracy and macro averaged F1-score are reported",
"year": ""
}
] | [] | Connecting the Dots: What Graph-Based Text Representations Work Best for Text Classification using Graph Neural Networks? | Given the success of Graph Neural Networks (GNNs) for structure-aware machine learning, many studies have explored their use for text classification, but mostly in specific domains with limited data characteristics. Moreover, some strategies prior to GNNs relied on graph mining and classical machine learning, making it difficult to assess their effectiveness in modern settings. This work extensively investigates graph representation methods for text classification, identifying practical implications and open challenges. We compare different graph construction schemes using a variety of GNN architectures and setups across five datasets, encompassing short and long documents as well as unbalanced scenarios in diverse domains. Two Transformer-based large language models are also included to complement the study. The results show that i) although the effectiveness of graphs depends on the textual input features and domain, simple graph constructions perform better the longer the documents are, ii) graph representations are especially beneficial for longer documents, outperforming Transformer-based models, iii) graph methods are particularly efficient at solving the task. | Margarita Bugueño; Gerard De Melo | [
{
"figure_caption": "1http://derekgreene.com/bbc/ 2 https://zenodo.org/HNDrecord",
"figure_data": "",
"figure_id": "fig_0",
"figure_label": "",
"figure_type": "figure"
},
{
"figure_caption": "Figure 1 :1Figure 1: Graph Construction Methods. Given the input text \"Start working! The sooner you start working, the sooner you will have money\", the five co-occurrence graph representations studied are shown. From left to right: window-based graph, window-based graph extended (new edges are shown as dashed in blue), sequence-weighted, sequence simplified omitting edge weights, and TextLevelGCN (edge weights shown for first and last node, in blue).",
"figure_data": "",
"figure_id": "fig_1",
"figure_label": "1",
"figure_type": "figure"
},
{
"figure_caption": "Figure 2 :2Figure 2: Execution time. Average execution time and shaded standard deviation. Time is shown in minutes.",
"figure_data": "",
"figure_id": "fig_2",
"figure_label": "2",
"figure_type": "figure"
},
{
"figure_caption": "Statistics",
"figure_data": "DatasetADLKIR>512 >1,024App Reviews1451:80%0%DBpedia51141:10%0%IMDB28321:112%1.4%BBC News43854:5 28.5%1.6%HND91221:2 63.3%29.8%",
"figure_id": "tab_0",
"figure_label": "1",
"figure_type": "table"
},
{
"figure_caption": "Best-performing GNN for Intuitive Graphs. The node feature initialization (Emb.) and architecture details are reported. L-Conv and #U stand for the hidden convolutional layer and units, respectively. The results report the average obtained from 10 independent runs. Full comparison in Appendix B.",
"figure_data": "WindowWindowextSequenceSequence simpDatasetEmb.L-Conv#UAcc F1-maAcc F1-maAcc F1-maAcc F1-maApp ReviewsWord2Vec 3-GIN16 32 6464.7 62.0 61.131.0 34.9 35.163.6 63.2 62.433.9 35.0 35.463.3 62.0 60.026.4 31.0 33.065.3 ⋆63.7 62.529.1 ⋆35.7 34.816⋆97.5⋆97.497.397.397.397.297.397.3DBpediaBERT-C 1-GAT3297.297.297.397.297.096.997.097.06497.197.197.197.196.796.797.096.91687.387.387.387.387.787.7⋆87.9⋆87.9IMDBWord2Vec 1-GAT3287.387.386.986.987.587.587.587.56487.487.386.786.687.287.287.487.4BBC NewsGloVe4-GAT16 32 6497.8 97.8 97.897.7 97.7 97.7⋆98.0 97.6 ⋆98.0⋆98.0 97.6 ⋆98.097.8 97.8 97.697.8 97.7 97.597.4 97.4 97.297.3 97.3 97.11677.676.875.273.956.636.177.476.5HNDBERT2-GIN3275.373.677.476.856.636.178.377.66477.176.576.975.856.636.1⋆79.1⋆78.51-gram2-gram3-gram4-gramDatasetEmb.AccF1-maAccF1-maAccF1-maAccF1-maApp ReviewsWord2Vec66.634.764.735.2⋆64.5⋆35.864.335.5DBpediaBERT95.795.7⋆96.1⋆96.095.995.996.096.0IMDBWord2Vec⋆86.8⋆86.886.586.486.286.286.186.1BBC NewsBERT97.097.097.297.2⋆97.3⋆97.397.097.0HNDWord2Vec⋆75.7⋆73.471.667.972.269.870.467.0",
"figure_id": "tab_1",
"figure_label": "2",
"figure_type": "table"
},
{
"figure_caption": "Best-performing TextLevelGCN. Results for the best node feature initialization (Emb.). The results report the average obtained from 10 independent runs. Full comparison in Appendix B.",
"figure_data": "",
"figure_id": "tab_2",
"figure_label": "3",
"figure_type": "table"
},
{
"figure_caption": "General performance. The average results over 10 runs for graph models and sequential baselines are",
"figure_data": "DatasetModelNode Init.AccF1-ma Exec. Time [s] #ParamsBoW MLP-64.7 ± 0.332.7 ± 0.7104.410.3 KApp ReviewsBERT Longformer--62.0 ± 1.2 63.5 ± 0.9† 36.9 ± 1.1 37.6 ± 0.81,891.8 5,552.2108 M 148 MTextLevelGCNWord2Vec† 64.5 ± 1.235.8 ± 1.0546.4561 KSequence simpWord2Vec63.7 ± 0.735.7 ± 1.3168.816.3 KBoW MLP-91.5 ± 0.291.5 ± 0.224.552.4 KDBpediaBERT Longformer--98.3 ± 0.1 † 98.1 ± 0.298.3 ± 0.1 † 98.1 ± 0.22,201.2 5,451.9108 M 148 MTextLevelGCNBERT96.1 ± 0.196.0 ± 0.2426.84.8 MWindowBERT-C97.5 ± 0.197.4 ± 0.1384.350.3 KBoW MLP-83.7 ± 0.283.7 ± 0.240.8192 KIMDBBERT Longformer--† 88.4 ± 0.7 90.5 ± 0.6† 88.4 ± 0.8 90.5 ± 0.61,640.1 4,645.4108 M 148 MTextLevelGCNWord2Vec86.8 ± 0.286.8 ± 0.31,022.310.9 MSequence simpWord2Vec87.9 ± 0.187.9 ± 0.1473.619.5 KBoW MLP-97.9 ± 0.197.8 ± 0.18.4329 KBBC NewsBERT Longformer--97.8 ± 0.3 98.2 ± 0.397.7 ± 0.3 98.2 ± 0.3398.9 1,470.5108 M 148 MTextLevelGCNBERT97.3 ± 0.497.3 ± 0.4684.29.6 MWindowextGloVe† 98.0 ± 0.3† 98.0 ± 0.3170.632.6 KBoW MLP-75.6 ± 1.274.5 ± 1.45.4444 KHNDBERT Longformer--72.6 ± 2.9 † 77.2 ± 3.870.6 ± 4.5 † 75.5 ± 6.1346.1 475.1108 M 148 MTextLevelGCNWord2Vec75.7 ± 2.673.4 ± 3.5426.83.2 MSequence simpBERT79.1 ± 1.178.5 ± 1.1116.366.1 K",
"figure_id": "tab_3",
"figure_label": "4",
"figure_type": "table"
},
{
"figure_caption": "Table 12 presents additional information concerning the execution time for graph models. The average total execution time is broken down into graph representation generation time and GNN running time.",
"figure_data": "",
"figure_id": "tab_4",
"figure_label": "",
"figure_type": "table"
},
{
"figure_caption": "Word embedding (Emb.) effect on App Reviews. Accuracy and macro averaged F1-score for Intuitive Graphs using a GIN convolutional neural network.",
"figure_data": "WindowWindowextSequenceSequence simpEmb.LayersUnitsAccF1-maAccF1-maAccF1-maAccF1-ma1663.432.459.832.459.923.463.232.423261.634.058.031.957.926.560.333.2BERT64 1660.3 63.932.0 29.657.9 60.231.9 31.057.1 59.626.6 23.459.3 64.033.4 27.533262.134.859.132.759.022.661.333.26460.033.957.832.657.325.160.533.21662.230.262.429.860.226.662.631.323261.132.060.032.557.131.359.532.1BERT-C64 1658.5 62.531.8 29.658.7 62.531.1 28.356.7 60.830.3 24.958.7 63.031.5 26.633260.432.460.132.257.831.060.632.26459.731.760.832.556.731.058.932.11663.231.463.432.163.427.364.531.023261.234.060.833.859.533.463.333.1GloVe64 1659.6 64.532.9 28.860.2 63.833.0 30.758.3 63.134.3 27.061.2 64.933.8 28.633261.232.961.134.261.632.162.532.86459.834.359.733.559.034.460.434.41664.032.764.433.863.128.264.833.723262.134.063.134.260.931.162.934.7Word2Vec64 1661.7 64.735.0 31.060.9 63.634.5 33.959.9 63.333.4 26.462.2 65.334.3 29.133262.034.963.235.062.031.0⋆63.7⋆35.76461.135.162.435.460.033.062.534.8WindowWindowextSequenceSequence simpEmb.LayersUnitsAccF1-maAccF1-maAccF1-maAccF1-ma1695.995.895.895.895.995.895.895.713295.995.995.995.995.995.996.095.9BERT64 1695.8 95.695.8 95.595.9 95.595.9 95.496.0 95.695.9 95.595.9 95.695.9 95.523295.595.495.595.495.695.595.495.46495.395.395.295.195.395.395.395.316⋆97.5⋆97.497.397.397.397.297.397.313297.297.297.397.297.096.997.097.0BERT-C64 1697.1 97.497.1 97.397.1 97.397.1 97.396.7 97.396.7 97.397.0 97.396.9 97.323297.297.297.397.397.097.097.297.26497.397.297.397.397.097.097.197.01695.995.995.995.895.895.796.096.013295.995.996.196.096.095.996.096.0GloVe64 1695.9 95.995.8 95.896.0 95.895.9 95.895.9 95.995.8 95.996.0 96.096.0 95.923295.995.895.795.695.995.996.196.06495.795.795.895.895.995.895.995.91695.995.895.795.695.795.795.895.813296.096.095.895.795.795.795.895.8Word2Vec64 1695.9 95.695.9 95.595.5 95.495.4 95.395.6 95.695.5 95.595.7 95.795.7 95.623295.495.495.495.395.595.495.395.26495.495.395.395.395.495.495.595.4",
"figure_id": "tab_5",
"figure_label": "5",
"figure_type": "table"
},
{
"figure_caption": "Word embedding (Emb.) effect on DBpedia. Accuracy and macro averaged F1-score for Intuitive Graphs using a GAT convolutional neural network.",
"figure_data": "WindowWindowextSequenceSequence simpEmb.LayersUnitsAccF1-maAccF1-maAccF1-maAccF1-ma1686.886.886.386.386.686.686.486.413286.986.986.086.086.686.586.386.3BERT64 1686.7 86.986.7 86.986.0 86.786.0 86.786.3 86.886.2 86.786.3 86.586.3 86.423286.586.586.085.986.886.886.186.16485.785.786.386.286.286.186.286.21685.785.785.985.984.984.885.785.613285.685.685.585.585.485.485.585.5BERT-C64 1685.2 84.685.1 84.585.3 85.085.3 84.985.3 85.885.2 85.885.9 85.185.9 85.123285.285.284.984.985.385.385.385.26485.385.384.684.585.685.685.084.91685.985.985.785.786.186.185.585.513285.385.385.285.285.885.885.585.5GloVe64 1685.1 85.185.1 85.184.7 84.684.7 84.585.6 86.185.6 86.185.4 86.085.4 86.023284.984.983.783.785.585.585.385.36484.784.783.783.685.285.184.784.61687.387.387.387.387.787.7⋆87.9⋆87.913287.387.386.986.987.587.587.587.5Word2Vec64 1687.4 87.587.3 87.486.7 87.386.6 87.387.2 87.687.2 87.687.4 87.887.4 87.823286.986.987.187.087.087.087.387.36486.786.786.186.187.287.286.686.6",
"figure_id": "tab_6",
"figure_label": "6",
"figure_type": "table"
},
{
"figure_caption": "Word embedding (Emb.) effect on IMDB. Accuracy and macro averaged F1-score for Intuitive Graphs using a GAT convolutional neural network.",
"figure_data": "WindowWindowextSequenceSequence simpEmb.LayersUnitsAccF1-maAccF1-maAccF1-maAccF1-ma1696.996.797.197.197.096.996.796.533296.596.396.996.896.496.396.796.5BERT64 1696.5 96.596.3 96.497.0 96.796.9 96.697.0 96.796.8 96.596.7 96.996.5 96.743296.496.496.396.396.095.896.095.86496.596.496.796.795.895.696.496.21696.296.196.796.696.496.396.196.033296.196.096.896.796.596.396.896.7BERT-C64 1697.0 96.296.9 96.196.0 96.896.0 96.796.7 96.696.5 96.596.0 96.595.8 96.443296.496.396.896.796.596.496.496.36496.696.596.796.696.696.596.296.11697.697.598.097.997.997.897.397.233297.597.497.997.897.897.797.697.5GloVe64 1697.7 97.897.6 97.797.6 ⋆98.097.6 ⋆98.097.7 97.897.6 97.897.3 97.497.2 97.343297.897.797.697.697.897.797.497.36497.897.7⋆98.0⋆98.097.697.597.297.11696.996.897.597.497.397.297.196.933297.197.097.196.997.196.997.597.3Word2Vec64 1697.3 96.997.2 96.896.8 97.596.6 97.397.6 97.397.4 97.297.7 97.297.5 97.043297.197.097.597.397.697.497.497.36496.996.897.697.497.497.297.397.0",
"figure_id": "tab_7",
"figure_label": "7",
"figure_type": "table"
},
{
"figure_caption": "Word embedding (Emb.) effect on BBC News. Accuracy and macro averaged F1-score for Intuitive Graphs using a GAT convolutional neural network.",
"figure_data": "WindowWindowextSequenceSequence simpEmb.LayersUnitsAccF1-maAccF1-maAccF1-maAccF1-ma1677.676.875.273.956.636.177.476.523275.373.677.476.856.636.178.377.6BERT64 1677.1 76.776.5 75.876.9 74.975.8 73.956.6 56.636.1 36.1⋆79.1 73.5⋆78.5 70.933275.773.975.273.556.636.177.977.16477.276.675.674.656.636.177.376.11673.673.071.670.872.872.566.465.623274.073.673.171.470.269.367.766.4BERT-C64 1674.0 72.873.2 72.071.8 70.870.6 69.970.5 72.269.1 71.867.8 68.066.7 66.833274.373.671.970.870.569.467.165.46472.772.071.570.170.069.666.865.41673.571.970.969.868.466.470.769.323273.672.672.271.370.269.373.773.0GloVe64 1676.7 74.375.9 72.973.9 69.173.0 68.070.2 66.968.8 63.173.0 74.372.3 73.533274.773.572.371.569.767.874.773.76473.773.074.373.470.870.175.074.41673.373.274.073.459.142.772.371.723275.074.773.072.071.069.472.672.0Word2Vec64 1673.1 73.372.7 72.775.6 74.274.9 73.766.0 59.857.8 43.173.5 72.573.2 71.433274.573.974.574.168.462.673.272.86474.073.575.074.561.447.675.375.0",
"figure_id": "tab_8",
"figure_label": "8",
"figure_type": "table"
},
{
"figure_caption": "Word embedding (Emb.) effect on HND. Accuracy and macro averaged F1-score for Intuitive Graphs using a GIN convolutional neural network.",
"figure_data": "App ReviewsDBpediaIMDBBBC NewsHNDMethodUtil.Mem.Util.Mem.Util.Mem.Util.Mem.Util.Mem.Window3.134.743.674.754.534.793.674.811.934.83Windowext3.074.743.604.754.734.834.334.842.534.89Sequence2.874.742.874.743.934.793.674.792.474.83Sequence simp3.074.743.734.744.274.793.674.792.004.82TextLevelGCN 1-g4.075.006.735.1212.335.569.075.416.135.21TextLevelGCN 2-g3.935.006.805.1313.405.568.205.554.605.29TextLevelGCN 3-g3.675.006.535.1310.405.716.805.583.935.37TextLevelGCN 4-g4.335.005.405.139.535.864.135.623.935.32BERT94.4729.4294.7029.4295.2729.4289.4029.4268.9329.42Longformer99.2767.8699.2767.8699.6067.8699.8067.8699.4067.86",
"figure_id": "tab_9",
"figure_label": "9",
"figure_type": "table"
},
{
"figure_caption": "GPU statistics (%). GPU utilization (Util.) and GPU memory usage (Mem.) for each of the studied models. The i-g notation accompanying TextLevelGCN stands for i-gram graph construction.",
"figure_data": "LibraryVersiondatasets2.4.0gensim4.2.0nltk3.7numpy1.23.1pytorch-lightning1.7.4scikit-learn1.1.2torch1.11.0torch-cluster1.6.0torch-geometric2.1.0torch-scatter2.0.9torch-sparse0.6.15torch-spline-conv1.2.1torchmetrics0.9.3torchvision0.12.0transformers4.21.2word2vec0.11.1",
"figure_id": "tab_10",
"figure_label": "13",
"figure_type": "table"
},
{
"figure_caption": "Libraries. Versions of Python libraries used for the experimental implementation.",
"figure_data": "",
"figure_id": "tab_11",
"figure_label": "14",
"figure_type": "table"
}
] | [{"Category": "Methodological Basis", "Citation": "(Castillo et al., 2015)", "Explanation": "The cited work provides a foundation for the use of graphs in text classification tasks, as it discusses the applicability and effectiveness of graphs in broader settings."}, {"Category": "Extension or Continuation", "Citation": "(Castillo et al., 2017)", "Explanation": "The cited work is a continuation of the research on graph representations in text classification, as it further explores the use of graphs in more diverse scenarios."}, {"Category": "Data Source", "Citation": "(Castillo et al., , 2017)", "Explanation": "The cited work is a data source for the text classification tasks used in the study conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Mihalcea and Tarau, 2004)", "Explanation": "The cited work by Mihalcea and Tarau (2004) provides a method of using co-occurrence graphs for keyword extraction, which the citing paper adopts in their research on text representation."}, {"Category": "Methodological Basis", "Citation": "(Hassan and Banea, 2006)", "Explanation": "The cited work by Hassan and Banea (2006) uses a co-occurrence graph with N = 2 and TextRank to replace term frequency weights, which the citing paper builds upon in their text classification research."}, {"Category": "Methodological Basis", "Citation": "(Rousseau et al., 2015)", "Explanation": "The cited work by Rousseau et al. (2015) uses a graph-of-words approach to cast text classification as a classification problem, which the citing paper adopts in their research on text representation."}, {"Category": "Methodological Basis", "Citation": "(Castillo et al., 2015)", "Explanation": "The cited work by Castillo et al. (2015) uses sequence graphs to reflect the original order of words in text, which the citing paper builds upon in their research on text representation."}, {"Category": "Methodological Basis", "Citation": "(Arora et al., 2009)", "Explanation": "The cited work by Arora et al. provides a method of analyzing individual sentences using tree and graph-structured formalisms, which the citing paper adopts in its research to build document-level representations."}, {"Category": "Methodological Basis", "Citation": "(Joshi and Ros\u00e9, 2009)", "Explanation": "The cited work by Joshi and Ros\u00e9 offers a method of analyzing individual sentences using tree and graph-structured formalisms, which the citing paper utilizes in its research to build document-level representations."}, {"Category": "Methodological Basis", "Citation": "(Dozat and Manning, 2016)", "Explanation": "The cited work by Dozat and Manning presents a method of inferring word dependencies to obtain syntactic dependency trees, which the citing paper employs in its research to build document-level representations."}, {"Category": "Methodological Basis", "Citation": "(Yuan et al., 2021)", "Explanation": "The cited work by Yuan et al. 
provides a method of inferring word dependencies to obtain syntactic dependency trees, which the citing paper adopts in its research to build document-level representations."}, {"Category": "Methodological Basis", "Citation": "(Yao et al., 2019)", "Explanation": "TextGCN proposes a heterogeneous graph construction using words and documents as nodes, which serves as a methodological basis for the research conducted in the citing paper."}, {"Category": "Data Source", "Citation": "(Liu et al., 2020)", "Explanation": "TensorGCN is a data source for the research conducted in the citing paper, as it is one of the proposals that integrate heterogeneous contextual information."}, {"Category": "Data Source", "Citation": "(Ragesh et al., 2021)", "Explanation": "Het-eGCN is another data source for the research conducted in the citing paper, as it is another proposal that integrates heterogeneous contextual information."}, {"Category": "Data Source", "Citation": "(Ding et al., 2020)", "Explanation": "HyperGAT is a data source for the research conducted in the citing paper, as it is a proposal that integrates heterogeneous contextual information."}, {"Category": "Methodological Basis", "Citation": "(Huang et al., 2019a)", "Explanation": "TextLevelGCN creates one graph per input text, which serves as a methodological basis for the research conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Qian et al., 2019)", "Explanation": "The cited work by Qian et al. introduces a GCN-based method for tagging and information extraction tasks, which the citing paper adopts in their research."}, {"Category": "Methodological Basis", "Citation": "(Li et al., 2015;Cho et al., 2014)", "Explanation": "The cited works by Li et al. and Cho et al. use a Gated Recurrent Unit-based message passing function for updating node feature vectors, which the citing paper adapts in their research."}, {"Category": "Extension or Continuation", "Citation": "(Nikolentzos et al., 2020)", "Explanation": "The cited work by Nikolentzos et al. introduces the master node concept in the graph construction method, which the citing paper extends by including a master node in their research."}, {"Category": "Methodological Basis", "Citation": "(Gu et al., 2023)", "Explanation": "The cited work by Gu et al. proposes a heterogeneous graph construction method with topic nodes for class-aware representation learning, which the citing paper builds upon in their research."}, {"Category": "Methodological Basis", "Citation": "(Galke and Scherp, 2022)", "Explanation": "The cited work by Galke and Scherp (2022) provides a comparison of different text classification approaches, including Bag of Words (BoW), sequence, and graph models. The citing paper adopts this analysis to evaluate the necessity of text-graphs in text classification."}, {"Category": "Methodological Basis", "Citation": "(Hassan and Banea, 2006)", "Explanation": "The cited work by Hassan and Banea (2006) provides the basis for the window-based method of graph construction used in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Castillo et al., 2015)", "Explanation": "The cited work by Castillo et al. 
(2015) serves as the methodological basis for the sequence-weighted method of graph construction in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Huang et al., 2019b)", "Explanation": "The cited work introduces the TextLevelGCN model, which the citing paper adopts to create a more sophisticated graph-based text representation strategy that considers each word token occurrence as a separate node and uses weighted information from neighbors to determine the in-context meaning."}, {"Category": "Methodological Basis", "Citation": "(Devlin et al., 2018)", "Explanation": "The cited work introduces the BERT Transformer as a powerful masked language model-based encoder that the citing paper adopts for comparison in the study of text representation schemes."}, {"Category": "Methodological Basis", "Citation": "(Beltagy et al., 2020)", "Explanation": "The cited work presents the Longformer Transformer as a modified attention mechanism that extends the maximum input length of the BERT model, providing a basis for comparison in the study of text representation schemes."}, {"Category": "Data Source", "Citation": "(Grano et al., 2017)", "Explanation": "The cited work provides the App Reviews dataset for fine-grained sentiment analysis in an imbalanced setting, which the citing paper utilizes in their research on assessing the generalizability of graph strategies in text classification."}, {"Category": "Data Source", "Citation": "(Zhang et al., 2015)", "Explanation": "The cited work provides the DBpedia dataset for topic classification based on DBpedia 2014 classes, which the citing paper utilizes in their research on assessing the generalizability of graph strategies in text classification."}, {"Category": "Data Source", "Citation": "(Maas et al., 2011)", "Explanation": "The cited work provides the IMDB dataset for movie reviews, which the citing paper utilizes in its research for binary sentiment classification."}, {"Category": "Data Source", "Citation": "(Greene and Cunningham, 2006)", "Explanation": "The BBC News dataset is cited as a source of English documents for topic classification research in the cited work."}, {"Category": "Data Source", "Citation": "(Kiesel et al., 2018)", "Explanation": "The HND dataset is referenced as a source of news articles for hyperpartisan news detection research in the cited work."}, {"Category": "Methodological Basis", "Citation": "(Kipf and Welling 2016)", "Explanation": "The cited work provides the traditional graph convolutional neural layer (GCN) that the citing paper uses in their experiments on Intuitive Graphs."}, {"Category": "Methodological Basis", "Citation": "(Xu et al. 2018)", "Explanation": "The cited work introduces the graph isomorphism operator (GIN) that the citing paper uses in their experiments to improve structural discriminative power in GNNs."}, {"Category": "Methodological Basis", "Citation": "(Velickovic et al. 
2017)", "Explanation": "The cited work includes the graph attentional operator (GAT) with 4 attention heads that the citing paper uses in their experiments to improve the performance of GNNs."}, {"Category": "Data Source", "Citation": "(see Appendix E)", "Explanation": "The cited work is a PyTorch Geometric implementation that the citing paper uses in their experiments on Intuitive Graphs."}, {"Category": "Methodological Basis", "Citation": "(default parameter settings in the original implementation)", "Explanation": "The cited work provides the default parameter settings for TextLevelGCN that the citing paper uses in their experiments."}, {"Category": "Data Source", "Citation": "(GloVe Wiki-Gigaword 300-dim.)", "Explanation": "The cited work is a node vector initialization strategy that the citing paper uses in their experiments to compare different node vector initialization strategies in TextLevelGCN."}, {"Category": "Methodological Basis", "Citation": "(Pennington et al., 2014)", "Explanation": "The cited work by Pennington et al. provides the embeddings used in the study conducted in the citing paper, which serves as a methodological basis for the text classification task."}, {"Category": "Methodological Basis", "Citation": "(Mikolov et al., 2013)", "Explanation": "The cited work by Mikolov et al. provides the Word2Vec Google News 300-dim. embeddings used in the study, which is a methodological basis for the text classification task."}, {"Category": "Methodological Basis", "Citation": "(Pennington et al., 2014)", "Explanation": "The cited work by Pennington et al. provides the static BERT pre-trained embeddings used in the study, which is a methodological basis for the text classification task."}, {"Category": "Methodological Basis", "Citation": "(Pennington et al., 2014)", "Explanation": "The cited work by Pennington et al. provides the contextualized BERT embeddings used in the study, which is a methodological basis for the text classification task."}, {"Category": "Data Source", "Citation": "(Pennington et al., 2014)", "Explanation": "The cited work by Pennington et al. serves as the data source for the BoW vocabulary used in the study, which is a foundational element for the text classification task."}, {"Category": "Methodological Basis", "Citation": "(Kingma and Ba, 2014)", "Explanation": "The cited work by Kingma and Ba provides the Adam optimization method used in the study, which is a methodological basis for the text classification task."}, {"Category": "Methodological Basis", "Citation": "(Xu et al., 2018)", "Explanation": "The cited work by Xu et al. 
(2018) introduces the GCN model, which the citing paper adopts as a method for improving discriminative power in GNN message passing for certain tasks."}, {"Category": "Data Source", "Citation": "(Grano et al., 2017)", "Explanation": "The App Reviews dataset is a collection of user reviews of Android applications that is used in the text classification experiments conducted in the citing paper."}, {"Category": "Data Source", "Citation": "(Zhang et al., 2015)", "Explanation": "The DBpedia ontology classification dataset is a collection of Wikipedia articles that is used for topic classification in the text classification experiments."}, {"Category": "Data Source", "Citation": "(Lehmann et al., 2015)", "Explanation": "The DBpedia ontology classification dataset is based on the DBpedia 2014 knowledge base, which is a multilingual knowledge base that is referenced in the original DBpedia."}, {"Category": "Data Source", "Citation": "(Maas et al., 2011)", "Explanation": "The cited work provides a dataset of English language movie reviews for binary sentiment classification that the citing paper uses in its research."}, {"Category": "Data Source", "Citation": "(Greene and Cunningham, 2006)", "Explanation": "The cited work provides a dataset of English documents from the BBC News website that the citing paper uses in its research."}, {"Category": "Data Source", "Citation": "(Kiesel et al., 2018)", "Explanation": "The cited work provides a dataset of English samples for hyperpartisan news detection that the citing paper uses in its research."}, {"Category": "Supporting Evidence", "Citation": "(Kiesel et al., 2019)", "Explanation": "The cited work by Kiesel et al. provides a detailed analysis of the characteristics of hyperpartisan language, which serves as a foundational basis for the citing paper in understanding the nature of the task and the challenges involved in detecting hyperpartisan language."}] |
[{"figure_ref":[],"heading":"Introduction","publication_ref":["b0","b1","b2","b3","b4","b5","b6","b7(...TRUNCATED) | 2023-07 | 10.1177/0361198105191800112 | [{"authors":"","journal":"OECD","ref_id":"b0","title":"The Economic Consequences of Outdoor Air Poll(...TRUNCATED) | [{"formula_coordinates":[4.0,147.2,542.39,325.78,11.3],"formula_id":"formula_0","formula_text":"M = (...TRUNCATED) | Real-Time Idling Vehicles Detection Using Combined Audio-Visual Deep Learning | "Combustion vehicle emissions contribute to poor air quality and release greenhouse gases into the a(...TRUNCATED) | "Xiwen Li; Tristalee Mangin; Surojit Saha; Rehman Mohammed; Evan Blanchard; Dillon Tang; Henry Poppe(...TRUNCATED) | [{"figure_caption":"Figure 1 .1Figure 1. Proposed System Design. The yellow arrow collects vehicle m(...TRUNCATED) | "[{\"Category\": \"Methodological Basis\", \"Citation\": \"[1]\", \"Explanation\": \"The cited work (...TRUNCATED) |
[{"figure_ref":[],"heading":"Introduction","publication_ref":["b1","b30","b53","b29","b33"],"table_r(...TRUNCATED) | 2023-10-25 | 10.18653/v1/D15-1008 | [{"authors":"Elliott Ash; Germain Gauthier; Philine Widmer","journal":"","ref_id":"b0","title":"Rela(...TRUNCATED) | [{"formula_coordinates":[2.0,306.43,160.53,194.83,27.4],"formula_id":"formula_0","formula_text":"###(...TRUNCATED) | Natural Language Decompositions of Implicit Content Enable Better Text Representations | "When people interpret text, they rely on inferences that go beyond the observed language itself. In(...TRUNCATED) | Alexander Hoyle; Rupak Sarkar; Pranav Goel; Philip Resnik | [{"figure_caption":"Federallands and waters must not be opened up to fossil fuel extraction. Public (...TRUNCATED) | "[{\"Category\": \"Methodological Basis\", \"Citation\": \"(Bach, 1994)\", \"Explanation\": \"The ci(...TRUNCATED) |
[{"figure_ref":[],"heading":"INTRODUCTION","publication_ref":["b29","b28","b71","b8","b42","b65","b1(...TRUNCATED) | 2024-03-11 | 10.48550/arXiv.1106.6251 | [{"authors":"Ekin Akyürek; Tolga Bolukbasi; Frederick Liu; Binbin Xiong; Ian Tenney; Jacob Andreas;(...TRUNCATED) | [{"formula_coordinates":[3.0,242.49,536.67,262.18,8.99],"formula_id":"formula_0","formula_text":"kGL(...TRUNCATED) | "FAITHFUL AND EFFICIENT EXPLANATIONS FOR NEU-RAL NETWORKS VIA NEURAL TANGENT KERNEL SUR-ROGATE MODEL(...TRUNCATED) | "A recent trend in explainable AI research has focused on surrogate modeling, where neural networks (...TRUNCATED) | "Andrew Engel; Zhichao Wang; Natalie S Frank; Ioana Dumitriu; Sutanay Choudhury; Anand Sarwate; Tony(...TRUNCATED) | [{"figure_caption":"Figure 1 :1Figure 1: Linear Realization of Bert-base Model. Each panel shows a l(...TRUNCATED) | "[{\"Category\": \"Methodological Basis\", \"Citation\": \"(Leavitt & Morcos, 2020)\", \"Explanation(...TRUNCATED) |
[{"figure_ref":[],"heading":"Introduction","publication_ref":["b2"],"table_ref":[],"text":"Generativ(...TRUNCATED) | 2023-08-29 | 10.48550/arXiv.2302.05446 | [{"authors":"E Bengio; M Jain; M Korablyov; D Precup; Y Bengio","journal":"","ref_id":"b0","title":"(...TRUNCATED) | [] | torchgfn: A PyTorch GFlowNet library | "The growing popularity of generative flow networks (GFlowNets or GFNs) from a range of researchers (...TRUNCATED) | Salem Lahlou; Joseph D Viviano; Mila Victor Schmidt; Yoshua Bengio | [{"figure_caption":"Figure 1 :1Figure 1: Hierarchy of the codebase for the v1 release. States and Ac(...TRUNCATED) | "[{\"Category\": \"Supporting Evidence\", \"Citation\": \"(Lahlou et al., 2023)\", \"Explanation\": (...TRUNCATED) |
[{"figure_ref":[],"heading":"Introduction","publication_ref":["b15","b17","b33","b16","b45","b13","b(...TRUNCATED) | 2023-11-05 | 10.18653/v1/2022.naacl-main.135 | [{"authors":"Eneko Agirre; Carmen Banea; Daniel Cer; Mona Diab; Aitor González-Agirre; Rada Mihalce(...TRUNCATED) | [{"formula_coordinates":[3.0,70.87,581.19,189.71,10.82],"formula_id":"formula_0","formula_text":"•(...TRUNCATED) | "Bridging Continuous and Discrete Spaces: Interpretable Sentence Representation Learning via Composi(...TRUNCATED) | "Traditional sentence embedding models encode sentences into vector representations to capture usefu(...TRUNCATED) | James Y Huang; Wenlin Yao; Kaiqiang Song; Hongming Zhang; Muhao Chen; Dong Yu | [{"figure_caption":" 34.1 51.0 28.1 45.0 Model performance on four textual generation tasks for inte(...TRUNCATED) | "[{\"Category\": \"Methodological Basis\", \"Citation\": \"(Conneau et al., 2018)\", \"Explanation\"(...TRUNCATED) |
[{"figure_ref":[],"heading":"Introduction","publication_ref":["b0","b1","b2","b3"],"table_ref":[],"t(...TRUNCATED) | 2023-05-24 | [{"authors":"Yaobin Zhang; Weihong Deng; Yaoyao Zhong; Jiani Hu; Xian Li; Dongyue Zhao; Dongchao Wen(...TRUNCATED) | [{"formula_coordinates":[2.0,335.22,550.21,209.89,30.32],"formula_id":"formula_0","formula_text":"L (...TRUNCATED) | FaceFusion: Exploiting Full Spectrum of Multiple Datasets | "The size of training dataset is known to be among the most dominating aspects of training high-perf(...TRUNCATED) | Chiyoung Song; Dongjae Lee; Naver Cloud | [{"figure_caption":"Figure 1 :1Figure 1: Overview of FaceFusion. L cls,k and L cls shares the same c(...TRUNCATED) | "[{\"Category\": \"Methodological Basis\", \"Citation\": \"[1]\", \"Explanation\": \"The cited work (...TRUNCATED) |
|
[{"figure_ref":["fig_0"],"heading":"Introduction","publication_ref":["b39","b1","b8","b33","b31","b3(...TRUNCATED) | 10.18653/v1/2020.findings-emnlp.91 | [{"authors":"Antoine Bosselut; Omer Levy; Ari Holtzman; Corin Ennis; Dieter Fox; Yejin Choi","journa(...TRUNCATED) | [] | OPENPI2.0: An Improved Dataset for Entity Tracking in Texts | "Much text describes a changing world (e.g., procedures, stories, newswires), and understanding them(...TRUNCATED) | Li Zhang; Hainiu Xu; Abhinav Kommula; Chris Callison-Burch; Niket Tandon | [{"figure_caption":"Figure 1 :1Figure 1: For each step in a procedure, OPENPI annotates the state ch(...TRUNCATED) | "[{\"Category\": \"Methodological Basis\", \"Citation\": \"(Weston et al., 2015)\", \"Explanation\":(...TRUNCATED) |
|
[{"figure_ref":[],"heading":"Introduction","publication_ref":["b16","b10","b3"],"table_ref":[],"text(...TRUNCATED) | 2024-02-13 | 10.18653/v1/2020.acl-main.485 | [{"authors":"Abubakar Abid; Maheen Farooqi; James Zou","journal":"Nature Machine Intelligence","ref_(...TRUNCATED) | [{"formula_coordinates":[5.0,330.4,78.67,169.74,113.25],"formula_id":"formula_0","formula_text":"CS((...TRUNCATED) | "This Land is {Your, My} Land: Evaluating Geopolitical Bias in Language Models through Territorial D(...TRUNCATED) | "Do the Spratly Islands belong to China, the Philippines, or Vietnam? A pretrained large language mo(...TRUNCATED) | Bryan Li; Samar Haider; Chris Callison-Burch | [{"figure_caption":"Figure 2 :2Figure 2: Illustration of comparisons made for the CS metrics. KB CS,(...TRUNCATED) | "[{\"Category\": \"Supporting Evidence\", \"Citation\": \"(Petroni et al., 2019)\", \"Explanation\":(...TRUNCATED) |